The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.
Sus 💀💀💀
Finally, I can have it generate a picture of a flamethrower without it lecturing me like I’m a child making finger guns at school.
My guess is this is being used to spout plausible sounding disinformation.
That would count as harm and be disallowed by the current policy.
But a military application of using GPT to identify and filter misinformation would not be harm; it would have been prevented by the previous blanket ban on military use, but is allowed under the current policy.
Of course, it gets murkier if the military application of identifying misinformation later ends up with a drone strike on the misinformer. In theory they could submit a usage description of “identify misinformation” which appears to do no harm, but then take the identifications to cause harm.
Which is part of why a broad ban on military use may have been more prudent than a ban only on harmful military usage.
Here we go……
If you guys think that AI hasn’t already been in use in various militaries, including America’s, y’all are living in lala land.
sigh
Literally no one is reading the article.
The terms still prohibit use to cause harm.
The change is that a general ban on military use has been removed in favor of a generalized ban on harm.
So for example, the Army could use it to do their accounting, but not to generate a disinformation campaign against a hostile nation.
If anyone actually really read the article, we could have a productive conversation around whether any military usage is truly harmless, the nuances of the usefulness of a military ban in a world where so much military labor is outsourced to private corporations which could ‘launder’ terms compliance, or the general inability of terms to preemptively prevent harmful use at all.
Instead, we have people taking the headline only and discussing AI being put in charge of nukes.
Lemmy seems to care a lot more about debating straw men arguments about how terrible AI is than engaging with reality.
welcome to reddit
this about sums up my experience on Lemmy so far.
This is the best summary I could come up with:
OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.
“We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” OpenAI spokesperson Niko Felix said in an email to The Intercept.
Suchman and Myers West both pointed to OpenAI’s close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company’s software tools.
The changes come as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large-language models, a type of software tool that can rapidly and dextrously generate sophisticated text outputs.
While some within U.S. military leadership have expressed concern about the tendency of LLMs to insert glaring factual errors or other distortions, as well as security risks that might come with using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools.
Last year, Kimberly Sablon, the Pentagon’s principal director for trusted AI and autonomy, told a conference in Hawaii that “[t]here’s a lot of good there in terms of how we can utilize large-language models like [ChatGPT] to disrupt critical functions across the department.”
The original article contains 1,196 words, the summary contains 254 words. Saved 79%. I’m a bot and I’m open source!
Let’s put AI in the control of nukes
Peace Walker has entered the room 👀
we would get nuked immediately, and not undeservedly
Well how else is it going to learn?
Welp, time to find a cute robot waifu and move to New Asia
Literally the movie “The Creator”
I can’t wait until we find out AI trained on military secrets is leaking military secrets.
I can’t wait until people find out that you don’t even need to train it on secrets, for it to “leak” secrets.
How so?
So while this is obviously bad, did any of you actually think for a moment that this was stopping anything? If the military wants to use ChatGPT, they’re going to find a way whether or not OpenAI likes it. In their minds they may as well get paid for it.
Remember when OpenAI was a nonprofit first and foremost, and we were supposed to trust they would make AI for good and not evil? Feels like it was only Thanksgiving…
I mean, there was all that drama where the board formed to prevent this from happening kicked out the CEO trying to do this stuff, then the board got booted out and replaced with a new board and brought back that CEO guy. So this was pretty much going to happen.
And some people pointed it out even back then. There were signs that the employees were very loyal to Altman, but Altman didn’t address the board’s safety concerns. So stuff like this was just a matter of time.
Effective altruism is just capitalism camouflage, and it’s also just really bad at being camouflage
It seems to be a trend that any service that claims not to be evil is just waiting for the right moment to drop that pretense.
I wouldn’t be too worried; they’ve just made an overglorified word predictor and blender of people’s art
AKA the perfect propaganda tool to fuck up elections and make countries collapse into civil war and fascism. Like ours.
Propaganda isn’t new. Sure, it’s more widely available now, but it’s not new
And that totally justifies having a robot that does it so efficiently it allows people to deepfake shit that’s hard to invalidate, robbing people of their ability to discern what is reality and what is not
Again, not new. Stop grandstanding it as a new effect. Media outlets have been doing this since the dawn of journalism. The scientific process was created to combat it, political standards to help reduce it, and laws to make it financially unattractive. Fact remains, it’s not new.
The only thing that is new is the financial gain from the hype of abusing the word AI, and the media not calling it out. But hey, here we are back at the start. It’s not new.
And that totally makes it okay for you to use an LLM to do so far more effectively and far more efficiently, destroying humanity’s ability to discern reality?
Yes because the kinds of people who would fall for a deep fake would never have fallen for propaganda before.
Nope, not deepfakes that convincing.
Keep lying to yourself though. Keep convincing yourself it’s worthwhile to destroy the world you claim to love just so you can keep your shiny new toy. Keep trying to tell yourself it’s not going to harm everyone else around you and that you’re still a good person.
Right all those people eating fucking horse dewormer were perfectly rational before.
Oh noes AI is going to destroy us all.