I highly doubt they are putting LLMs on their little throwaway drones. The US military has been working on "figure out what that thing is and blow it up automatically" technology since at least the 90s; e.g. modern warship defense systems use it to react to an incoming missile faster than a human ever could.
You are correct. Large language models like ChatGPT are a subset of deep learning, which is a subset of machine learning. Common examples of simple machine learning software are facial recognition, social media algorithms, speech-to-text, and predictive text.
There is no reason to include software as complex, resource-intensive, and experimental as an LLM.
Personally I am much more worried about it working exactly as intended.