• barsoap@lemm.ee
    8 months ago

    For now, ML/AI is too unreliable to be trusted in a deployed direct attack platform

    And it probably can’t ever be trusted. That “hallucinations can never be ruled out” result was shown for language models, but it should probably apply to vision models, too. In any case, researchers have made cars see things, and AFAIU they didn’t even have to attack the model itself: they simply confused the radar. Militaries are probably way better at that than anything out in the open; they’ve been doing ECM for ages, and of course they never tell anyone how any of it works.

    That doesn’t mean ML can’t be used, though: you can have additional non-ML mission parameters, such as the drone only acquiring targets over enemy territory. Or the AI is merely the gunner, while a human remains the commander.
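
    To make that gating idea concrete, here is a minimal sketch (my own illustration, not anything from a real system): the ML detection is treated purely as a proposal, and deterministic, non-ML checks plus an explicit human sign-off sit outside the model. The `Detection` class, the geofence polygon, and the confidence threshold are all hypothetical.

    ```python
    from dataclasses import dataclass


    @dataclass
    class Detection:
        """Hypothetical ML output: a candidate position with a confidence score."""
        lat: float
        lon: float
        confidence: float


    def inside_zone(lat: float, lon: float, zone: list[tuple[float, float]]) -> bool:
        """Deterministic ray-casting point-in-polygon test -- no ML involved.

        `zone` is a list of (lat, lon) vertices describing the authorized area.
        """
        inside = False
        j = len(zone) - 1
        for i in range(len(zone)):
            lat_i, lon_i = zone[i]
            lat_j, lon_j = zone[j]
            # Toggle whenever the ray from the point crosses a polygon edge.
            if (lat_i > lat) != (lat_j > lat) and \
               lon < (lon_j - lon_i) * (lat - lat_i) / (lat_j - lat_i) + lon_i:
                inside = not inside
            j = i
        return inside


    def may_act(det: Detection, zone: list[tuple[float, float]], human_approved: bool) -> bool:
        """The ML detection is only advisory; hard rules and a human make the call."""
        return (
            det.confidence >= 0.9                    # hypothetical threshold
            and inside_zone(det.lat, det.lon, zone)  # non-ML mission parameter
            and human_approved                       # human commander keeps the decision
        )
    ```

    The point is just that the model’s output never triggers anything on its own; the geofence and the human approval are outside the ML entirely.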