Onboard AI guidance is not difficult.
You can’t be serious.
Tracking a moving object in real time with video is a standard task for a machine learning engineer. You can do it on an embedded platform with ML hardware support. I don’t know what hardware newer Lancets use, but they can already do it, according to developer reports from Telegram channels such as Разработчик БПЛА.
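To make “standard task” concrete: the core loop is just “find the patch that looks like your target in each new frame.” Below is a deliberately minimal sketch using plain NumPy sum-of-squared-difference template matching; real systems use far more robust trackers (correlation filters, optical flow, learned embeddings), so treat this only as an illustration of the loop, not of any fielded implementation.

```python
import numpy as np

def track(frames, bbox):
    """Follow an object through a list of 2-D grayscale frames by
    matching a fixed template in a small search window around the
    last known position (toy stand-in for a real embedded tracker)."""
    y, x, h, w = bbox
    template = frames[0][y:y+h, x:x+w].astype(float)
    path = [(y, x)]
    for frame in frames[1:]:
        best_ssd, best_yx = None, (y, x)
        for dy in range(-3, 4):               # small search window: the
            for dx in range(-3, 4):           # target can't jump far per frame
                ny, nx = y + dy, x + dx
                if ny < 0 or nx < 0 or ny + h > frame.shape[0] or nx + w > frame.shape[1]:
                    continue
                patch = frame[ny:ny+h, nx:nx+w].astype(float)
                ssd = ((patch - template) ** 2).sum()
                if best_ssd is None or ssd < best_ssd:
                    best_ssd, best_yx = ssd, (ny, nx)
        y, x = best_yx                        # lock follows the best match
        path.append((y, x))
    return path
```

On synthetic frames with a bright 3×3 blob drifting a couple of pixels per frame, `track` returns the blob’s position in each frame. The per-frame cost is a handful of small array operations, which is why even modest embedded hardware can run far fancier versions of this at video rates.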
Honestly, I was just objecting to the use of “AI”. We’ve had both fire-and-forget and loitering munitions for decades now, neither of which uses ML. Will it happen? Sure. For now, ML/AI is too unreliable to be trusted in a deployed direct-attack platform, and we don’t have computing hardware powerful enough to run ML models that we can jam into a missile.
(Though yeah we run tons of models against drone data feeds, none of those are done onboard…)
And probably can’t ever be trusted. That “hallucinations can’t ever be ruled out” result is for language models, but it should probably apply to vision too. In any case, researchers have made cars see things, and AFAIU they didn’t even have to attack the model; they simply confused the radar. Militaries are probably way better at that than anything that’s out in the open: they’ve been doing ECM for ages and of course never tell anyone how any of it works.
That doesn’t mean that ML can’t be used, though: you can have additional non-ML mission parameters, such as the drone only acquiring targets over enemy territory. Or the AI is merely the gunner, and there’s still a human commander.
The point of modern deep learning approaches is that they demand very little developer skill. Decades ago, realtime machine vision needed a machine vision expert; these days you throw hardware at the problem at the learning stage, and the embedded devices that run the results are stupidly powerful (it doesn’t even take a Jetson board) compared to what was available even a decade ago.
A combination of GPS, or even inertial guidance, to get them to the target area, and then some simple vehicle/object identification — I’d think those are possible.
GPS is usually the first thing to be jammed on the battlefield.
GPS is useful, but not required for operation. Inertial guidance and ground-tracking cameras can easily maintain a good position sense while remaining completely RF-passive. This is already normal on many toy drones.
You would also want to jam it over a large area. That jamming is akin to a “kick me” sign, in neon lights.
Inertial guidance sucks balls for any meaningful amount of time. Combining it with ground tracking makes it a lot better, if you have good time-of-flight sensors to measure the distance from the ground. But this also falls flat on its face when the ground is too uniform (grassland, wetland, snow, etc.).
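The reason pure inertial navigation “sucks” is that integrating a biased accelerometer twice makes position error grow without bound, while even occasional external position fixes (from ground tracking, say) keep it bounded. Here’s a 1-D toy simulation with made-up numbers — not a real INS or Kalman filter, just an illustration of why the fusion helps:

```python
import random

def simulate(steps, fix_every=None, alpha=0.2, seed=0):
    """1-D dead-reckoning toy: integrate a biased, noisy accelerometer
    reading twice (pure inertial drifts badly), optionally blending in
    a ground-tracking position fix every `fix_every` steps.
    Returns the final absolute position error. All constants invented."""
    random.seed(seed)
    dt, bias = 0.1, 0.05                # constant accelerometer bias
    true_pos, vel = 0.0, 1.0            # truth: constant-velocity flight
    est_pos, est_vel = 0.0, 1.0         # estimator starts perfectly
    for t in range(steps):
        true_pos += vel * dt
        accel_meas = bias + random.gauss(0, 0.02)   # true acceleration is 0
        est_vel += accel_meas * dt                  # first integration
        est_pos += est_vel * dt                     # second integration
        if fix_every and t % fix_every == 0:
            # a visual ground-track fix nudges the estimate toward truth
            est_pos += alpha * (true_pos - est_pos)
    return abs(est_pos - true_pos)
```

Running `simulate(500)` (pure inertial) gives a position error tens of times larger than `simulate(500, fix_every=10)` (fused). It also shows the failure mode from the comment above: over featureless grassland or snow the camera can’t produce those fixes, and you’re back to the drifting curve.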
The US already has them.
There are single-shot drones designed to be deployed into a building or cave system. They then use cameras etc. to navigate while running face recognition. When they find their target, they fly up just in front of it. The shaped C4 charge is designed to reduce the target’s head to red mist while not risking those close by.
AI + cheap drones will completely change warfare, probably on the same level as the tank or the machine gun.
Do you have a source for this? These sound uncannily close to the Slaughterbots short film…
I’ll try and remember to dig it out later. It was a sales demo from a weapons company. I can’t remember exactly which one it was, but the implications scared the shit out of me.
Are you certain you’re not thinking of the Sci-Fi Short Film “Slaughterbots”? The plot is almost exactly what you describe.
Apparently I’m an idiot. I saw a modified version, mixed in amongst other legit videos.
Imagine when they are long range and readily available. Any old despot could load on a rough GPS location and a face. Deniable foreign assassination becomes easy.
You can’t be very knowledgeable on the topic.
I am, in fact, fairly well versed in the topic. You’re 30+ years away from being able to fit hardware powerful enough to run an ML model into a missile, though I can’t see a single reason you’d ever want to. Look into the declassified, 40+-year-old design paradigms for missiles and other self-guided munitions, and it’ll start to give you an idea of why the idea of “AI” guidance is so laughably stupid. There are so very many reasons we use FPGAs, none of which are compatible with AI.
Yes it is.
Realtime person detection and following it with a drone? Difficult for me, certainly, but there are enough people out there who have done it.
Both of you are right.
It’s difficult, but how difficult depends on the task you set. If the task is “maintain a manually initiated target lock on a clearly defined object in an empty field, despite the communications link breaking for 10 seconds”, then it is “give a team of coders half a year” difficult. It’s been solved before; the solution just needs re-inventing and porting to a different platform.
If it’s “identify whether an object is military, whether it is friendly or hostile, consider whether it’s worth attacking, and attack a camouflaged target in a dense forest”, then it’s currently not worth trying.
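The “survive a 10-second link break” half of that split is mostly bookkeeping: when the detector (or operator uplink) stops delivering target positions, you coast on the last observed velocity for a bounded number of frames, then declare the lock lost. A minimal sketch of that logic, with a hypothetical frame-by-frame detection list as input — no claim that any real munition structures it this way:

```python
def coast_track(detections, max_coast=10):
    """Keep a target estimate alive through detection dropouts by
    coasting on the last observed per-frame velocity.
    `detections` is one (x, y) tuple or None per frame; returns the
    estimated (x, y) per frame, or None once the lock is declared lost."""
    est, vel, coasted, out = None, (0, 0), 0, []
    for det in detections:
        if det is not None:
            if est is not None:
                vel = (det[0] - est[0], det[1] - est[1])  # update velocity
            est, coasted = det, 0                          # fresh lock
        elif est is not None and coasted < max_coast:
            # no detection this frame: constant-velocity prediction
            est = (est[0] + vel[0], est[1] + vel[1])
            coasted += 1
        else:
            est = None          # dropout too long: give up the lock
        out.append(est)
    return out
```

For a target moving one unit per frame with a two-frame dropout in the middle, the coasted estimates land back on the real detections when they resume. The hard part in practice isn’t this loop; it’s re-acquiring the right object after the gap, which is exactly where the “half a year for a team of coders” goes.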
That was a good guess, but unfortunately it is difficult even in the scenario you proposed.
I will be very surprised if this isn’t already happening.
It’s one thing to detect a person with machine learning in a test; an actual soldier in camouflage in a very imperfect environment is another. Also, good luck telling friend from foe from civilian.
This has all sorts of problems while making the whole system more complicated and prone to issues. Not to mention the moral questions of autonomous weapons. I have no doubt it will happen, but not yet, not here.
I know it already does, at least in newer Lancets. Expect this in FPV-type devices soon.