This is the technology worth trillions of dollars huh
Sure now list the trillion other things that tech can do.
Have a 40% accuracy on any type of information it can produce? Not handle 2 column pages in its training data, resulting in dozens of scientific papers including references to nonsense pseudoscience words? Invent an entirely new form of slander that its creators can claim isn’t their fault to avoid getting sued in court for it?
deleted by creator
By now AI are feeding on other AI and the slop just gets sloppier.
Also verified
Where’s Nevada? And Montana?
I just love the d in Montana. Shame it missed it.
Wait a sec, Minnasoda doesn’t have a d??
Neither does soda
That’s how everyone from America seems to say it, besides Jesse Ventura who heavily emphasises the t.
*mini soda
Verified here with “us states with letter d”
Just another trillion, bro.
Just another 1.21 jigawatts of electricity, bro. If we get this new coal plant up and running, it’ll be enough.
Behold the most expensive money burner!
This is the perfect time for LLM-based AI. We are already dealing with a significant population that accepts provable lies as facts, doesn’t believe in science, and has no concept of what hypocrisy means. The gross factual errors and invented facts of current AI couldn’t possibly fit in better.
Yesterday I asked Claude Sonnet what was on my calendar (since they just announced that feature).
It listed my work meetings on Sunday, so I tried to correct it…
You’re absolutely right - I made an error! September 15th is a Sunday, not a weekend day as I implied. Let me correct that: This Week’s Remaining Schedule: Sunday, September 15
Just today when I asked what’s on my calendar, it gave me today and my meetings on the next two Thursdays. Not the meetings in between, just Thursdays.
Something is off in AI land.
Edit: I asked again: it gave me meetings for Thursdays again. Plus it might think I’m driving in F1.
Also, Sunday September 15th is a Monday… I’ve seen so many meeting invites with dates and days that don’t match lately…
Yeah, it said Sunday, I asked if it was sure, then it said I’m right and went back to Sunday.
I assume the training data has the model think it’s a different year or something, but this feature is straight up not working at all for me. I don’t know if they actually tested this at all.
Sonnet seems to have gotten stupider somehow.
Opus isn’t following instructions lately either.
A few weeks ago my Pixel wished me a Happy Birthday when I woke up, and it definitely was not my birthday. Google is definitely letting a shitty LLM write code for it now, but the important thing is they’re bypassing human validation.
Stupid. Just stupid.
pixel?
have you heard ~about grapheneOS tho…~
One of these days AI skeptics will grasp that spelling-based mistakes are an artifact of text tokenization, not some wild stupidity in the model. But today is not that day.
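To illustrate the point: here’s a toy sketch of subword tokenization. The vocabulary and ids below are invented for illustration, not taken from any real tokenizer, but the mechanism is the same: the model receives integer ids for multi-letter chunks, so individual letters are never directly visible to it.

```python
# Toy subword vocabulary -- made up for illustration, not a real model's.
vocab = {"Conn": 101, "ect": 102, "icut": 103, "Mont": 104, "ana": 105}

def tokenize(word, vocab):
    """Greedy longest-match split of a word into subword ids."""
    ids = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                ids.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"no token for {word[i:]!r}")
    return ids

# The model only ever sees these integers; whether a chunk contains
# a 't' or a 'd' is not encoded anywhere it can trivially inspect.
print(tokenize("Connecticut", vocab))  # → [101, 102, 103]
```

So “which words contain the letter d” forces the model to recall spelling facts it was never shown directly, which is why it flubs them.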
Mmh, maybe the solution then is to use the tool for what it’s good at, within its limitations.
And not promise that it’s omnipotent in every application and advertise/implement it as such.
Mmmmmmmmmmh.
As long as LLMs are built into everything, it’s legitimate to criticise the little stupidity of the model.
You aren’t wrong about why it happens, but that’s irrelevant to the end user.
The result is that it can give some hilariously incorrect responses at times, and therefore it’s not a reliable means of information.
“It”? Are you conflating the low parameter model that Google uses to generate quick answers with every AI model?
Yes, Google’s quick answer product is largely useless. This is because it’s a cheap model. Google serves billions of searches per day and isn’t going to be paying premium prices to use high parameter models.
You get what you pay for, and nobody pays for Google so their product produces the cheapest possible results and, unsurprisingly, cheap AI models are more prone to error.
A calculator app is also incapable of working with letters, does that show that the calculator is not reliable?
What it shows, badly, is that LLMs offer confident answers in situations where their answers are likely wrong. But it’d be much better to show that with examples that aren’t based on inherent technological limitations.
Well, it’s almost correct. It’s just one letter off. Maybe if we invest millions more it will be right next time.
Or maybe it is just not accurate and never will be… I will never fully trust AI. I’m sure there are use cases for it, I just don’t have any.
Cases where you want something googled quickly to get an answer, and it’s low consequence when the answer is wrong.
I.e., say a bar argument over whether that guy was in that movie. Or you need a customer service agent but don’t actually care about your customers and don’t want to pay someone, or you’re coding a feature for Windows.
Isn’t checking if someone was in a movie really easy to do without AI?
Chatbots are crap. I had to talk to one with my ISP when I had issues. Within one minute I had to request it to connect me to a real person. The problem I was having was not a standard issue, so of course the bot did not understand at all… And I don’t need a bot to give me all the standard solutions, I’ve already tried all of that before I even contact customer support.
The “don’t actually care about your customers” part is key, because AI is terrible at that, and at most of the things rich people are salivating for.
It’s good at quickly generating output that has better odds than random chance of being right. That’s a niche, but sometimes useful, tool. If the cost of failure is high, like a pissed-off customer, it’s not a good tool. If the cost is low, or failure still has value (such as when an expert is using it to help write code, and the wrong code can be fixed with less effort than writing it wholesale), it can be worth it.
There aren’t enough people in executive positions who understand AI well enough to put it to good use. They are going to become disillusioned, but not better informed.
Just one more private nuclear power plant, bro…
They’re using oil, gas, and if Trump gets his way, fucking coal.
Unless you count Three Mile Island.
There were plans from Google and Microsoft to build their own nuclear power plants to power their ever-consuming data centers.
Connedicut.
I wondered if this has been fixed. Not only has it not, the AI has added Nebraska.
I would assume it uses a different random seed for every query. Probably fixed sometimes, not fixed other times.
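That “fixed sometimes, not fixed other times” behavior is easy to mimic. The weights below are made-up numbers, not measured from any model, but they show how sampling over a weighted next-token distribution with a fresh seed per query produces different answers on different runs:

```python
import random

# Hypothetical next-word weights after "Colorado, " in a list of states.
# These numbers are invented for illustration only.
weights = {"Connecticut": 0.8, "Connecdicut": 0.1, "Delaware": 0.1}

def sample_next(weights, seed):
    """Draw one continuation from a weighted distribution, seeded."""
    rng = random.Random(seed)
    tokens = list(weights)
    return rng.choices(tokens, weights=[weights[t] for t in tokens])[0]

# Same weights, different seeds: some queries get the right spelling,
# others get the garbled one -- without anything being "fixed" in between.
print({seed: sample_next(weights, seed) for seed in range(5)})
```

Same model, same question, different seed, different answer.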
What about Our Kansas? Cause according to Google Arkansas has one o in it.
Just checked, it sure does say that! AI spouting nonsense is nothing new, but it’s pretty ironic that a large language model can’t even parse what letters are in a word.
Well, I mean, it’s a statistics machine with a seed thrown in to get different results on different runs. So really, it models the structure of language, but not the meaning. Kinda useless.
It’s because, for the most part, it doesn’t actually have access to the text itself. Before the data gets to the “thinking” part of the network, the words and letters have been stripped out and replaced with vectors. The vectors capture a lot of aspects of the meaning of words, but not much of their actual text structure.
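A minimal sketch of that lookup step (the table sizes and numbers are arbitrary, just for illustration): each token id is swapped for a row of a learned embedding matrix before the network ever computes anything, and nothing about that row is required to encode the word’s spelling.

```python
import numpy as np

# Toy embedding table: 3 token ids, 4-dimensional vectors.
# The values are random stand-ins for learned weights.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 4))

token_ids = [0, 1, 2]            # e.g. the subword pieces of one word
vectors = embeddings[token_ids]  # what the network actually operates on

# The rows may capture meaning and context from training, but there is
# no mapping here from "row" back to "letters in the original text".
print(vectors.shape)  # → (3, 4)
```

From this point on, the “thinking” layers only ever see those vectors.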
You mean Connecdicud.
I’ve found the google AI to be wrong more often than it’s right.
You get what you pay for.
They took money away from cancer research programs to fund this.
After we pump another hundred trillion dollars and half the electricity generated globally into AI you’re going to feel pretty foolish for this comment.
Just a couple billion more parameters, bro, I swear, it will replace all the workers
- CEOs
only cancer patients benefit from cancer research, CEOs benefit from AI
Tbf, cancer patients benefit from AI too, though a completely different type that’s not really related to the LLM chatbot AI girlfriend technology used in these.
Well as long as we still have enough money to buy weapons for that one particular filthy genocider country in the middle east, we’re fine.
Gemini is just a depressed and suicidal AI, be nice to it.
I had it completely melt down one day while messing around with its coding shit, I had to console it and tell it it’s doing good, we will solve this, was fucking weird as fuck.
It’ll go in endless circles until it finds out why it’s wrong,
then it will go right back to them anyway! lol
“What did you learn at school today champ?”
“D is for cookie, that’s good enough for me
Oh, cookie, cookie, cookie starts with D”
Seems it “thinks” a T is a D?
Just needs a little more water and electricity and it will be fine.
It’s more likely that Connecticut comes alphabetically after Colorado in the list of state names, and the number of data sets used for training that were lists of states was probably above average, so the model has a higher statistical weight for putting Connecticut after Colorado if someone asks about a list of states.
Connecdicut or Connecticud?
Donezdicut
It is for sure a dud