For now, ML/AI is too unreliable to be trusted in a deployed direct attack platform.
And probably can’t ever be trusted. That “hallucinations can’t ever be ruled out” result is for language models, but it should probably apply to vision models too. In any case, researchers haven’t had much trouble making cars “see things” that aren’t there with adversarial attacks on their vision systems.
That doesn’t mean ML can’t be used, though: you can add non-ML mission parameters, such as the drone only being allowed to acquire targets over enemy territory, or the AI being merely the gunner while a human commander still makes the call. Roughly the kind of gating I mean is sketched below.
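A minimal sketch of that idea, purely for illustration: the geofence polygon, the detection format, and the confidence threshold are all made-up assumptions, not any real system’s API. The point is just that every condition sits outside the network.

```python
from shapely.geometry import Point, Polygon  # pip install shapely

# Hypothetical mission parameters set by the human commander, not learned by the model.
PERMITTED_AREA = Polygon([(34.10, 45.20), (34.25, 45.20),
                          (34.25, 45.35), (34.10, 45.35)])  # made-up lat/lon box
CONFIDENCE_FLOOR = 0.9

def may_engage(detection: dict, position_latlon: tuple, human_approval: bool) -> bool:
    """Hard, non-ML checks wrapped around whatever the vision model outputs."""
    inside_geofence = PERMITTED_AREA.contains(Point(*position_latlon))
    confident_enough = detection["score"] >= CONFIDENCE_FLOOR
    # The model is only the "gunner": the geofence and the human stay in the loop.
    return inside_geofence and confident_enough and human_approval
```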
eleitl@lemmy.ml 9 months ago
The point of modern deep learning approaches is that they demand very little developer skill. Decades ago, real-time machine vision needed a machine vision expert; these days you throw hardware at the problem at the training stage, and the embedded devices that run the result are stupidly powerful compared to what was available even a decade ago (it doesn’t even take a Jetson board). Getting a pretrained detector running is a handful of lines now, as in the sketch below.
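To illustrate how low the bar is, here is a short sketch using an off-the-shelf pretrained detector from torchvision; the model choice, input file name, and score threshold are just example assumptions.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# Off-the-shelf pretrained detector: no machine-vision expertise required.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("frame.jpg")  # one camera frame; any example image will do
with torch.no_grad():
    detections = model([preprocess(img)])[0]

# Print the confident detections with human-readable class names.
categories = weights.meta["categories"]
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8:
        print(categories[label], float(score))
```

Exporting something like that to ONNX or TensorRT for an embedded board is similarly a few lines of boilerplate these days, which is exactly why the skill barrier argument doesn’t hold anymore.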