Obviously you’re not evolved enough to realize that AI is THE FUTURE of all things and everything is better with AI! A child in a poor environment was saved by AI! A king who was mean was dethroned with AI! Everyone was made happy by AI! Say it! SAY IT!!!
Who’s Al??
Allen Iverson.
It writes my most boring emails so that I can save a scrap of mental energy for parenting properly after work. Even though my WPM ranges from 70 to 90 with >98% accuracy, I would rather save some of that mental energy to respond more thoughtfully as a dad.
Of note, I do not give one cold shit about GPT’s “growth”. It’s a linguistic power tool that needs to be carefully handled if you use it for any valuable work.
Of note, I do not give one cold shit about GPT’s “growth”
I mean, if you like the platform, its growth is tied to its continued existence and free usability. It’s still in the honeymoon phase as long as it’s growing.
Why is growth tied to continued existence?
Because companies insist on it, and when growth stops they’ll start to cannibalize their own company, charging more money for things that used to be free or fairly priced until they price themselves out of the market entirely and die as a service.
Yay, capitalism!
Capitalism demands that number must go up.
Rather, it’s the human psyche that wants that; it’s scary to invest. There are still rare mom-and-pop stores/cafés that make economic sense for their owners. And there are individual artists.
What’s visible now is a bit like the colonial trading companies: it will end, until the next such thing comes along.
“Capitalism” is a very variable thing.
They will start charging and/or plaster it full of ads. I’ve already sidestepped that with an API key and a FOSS frontend.
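For anyone wondering what that looks like in practice, here’s a rough sketch of hitting an OpenAI-style chat completions endpoint directly from Python; the model name and the prompt are just placeholders for whatever your provider and frontend actually use:

```python
# Minimal sketch of talking to an OpenAI-style chat completions API directly,
# instead of going through the hosted web UI. Model name and prompt are
# placeholders -- substitute whatever your provider or FOSS frontend expects.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # keep the key out of your code
URL = "https://api.openai.com/v1/chat/completions"

resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-3.5-turbo",  # assumed model name
        "messages": [
            {"role": "user", "content": "Summarize this email in two sentences: ..."},
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```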
Before I respond to your question, first a message from our sponsor LMNT: Did you know that you’re about to die of dehydration? Our product stops that!
This was inevitable, not sure why it’s newsworthy. ChatGPT blew up because it brought LLM tech to the masses in an easily accessible way and was novel at the mainstream level.
The majority of people don’t have a use for chatbots day-to-day, especially one that’s as censored and outdated as ChatGPT (its dataset is from over 2 years ago). Casual users would want it for simple stuff like quickly summarizing current events or even as a Google-search-like repository of info. You can’t use it for that when even seemingly innocuous queries/prompts are met with ChatGPT scolding you for being offensive, or with a reminder that its dataset is old and not current. Sure, it was fun to have it make your grocery lists and workout plans, but that novelty eventually wears off since it’s not very practical all the time.
I think LLMs in the form of ChatGPT will truly become ubiquitous when they can train in real time on up-to-date data. And since that’s very unlikely to happen in the near future, I think OpenAI has quite a bit of progress left to make before their next breakout moment comes again. Although Sora did wow the mainstream (anyone in the AI scene has been well aware of AI-generated video for a while now), OpenAI has already said they’re not making it publicly available for now (which is a good thing, for obvious reasons, unless strict safety measures are implemented).
It’s not exactly training, but Google just recently previewed an LLM with a million-token context that can do effectively the same thing. One of the tests they did was to put a dictionary for a very obscure language (only 200 speakers worldwide) into the context, knowing that nothing about that language was in its original training data, and the LLM was able to translate it fluently.
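To make the idea concrete, the trick is basically in-context learning scaled way up: instead of retraining, you paste the entire reference document into the prompt. A rough sketch below, where the file name, the roles, and the send() call are all placeholders rather than any real API:

```python
# Rough sketch of "put the reference material in the context instead of retraining".
# The file name, model client, and send() helper are placeholders, not a real API.
from pathlib import Path

def build_prompt(reference_path: str, question: str) -> list[dict]:
    # The entire dictionary/grammar goes into the context window; with a
    # million-token model this can be a whole book's worth of material.
    reference = Path(reference_path).read_text(encoding="utf-8")
    return [
        {"role": "system", "content": "Use only the reference material below.\n\n" + reference},
        {"role": "user", "content": question},
    ]

messages = build_prompt("obscure_language_dictionary.txt", "Translate: ...")
# send(messages)  # however your client of choice submits a chat request
```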
OpenAI has already said they’re not making that publicly available for now
This just means that OpenAI is voluntarily ceding the field to more ambitious companies.
Gemini is definitely poised to bury ChatGPT if its real-world performance lives up to the curated examples they’ve demonstrated thus far. As much as I dislike that it’s Google, I am still interested to try it out.
This just means that OpenAI is voluntarily ceding the field to more ambitious companies.
Possibly. While text to video has been experimented with for the last year by lots of hobbyists and other teams, the end results have been mostly underwhelming. Sora’s examples were pretty damn impressive, but I’ll hold judgment until I get to see more examples from common users vs cherry-picked demos. If it’s capable of delivering that level of quality consistently, I don’t see another model catching up for another year or so.
Sora’s capabilities aren’t really relevant to the competition if OpenAI isn’t allowing it to be used, though. All it does is let the actual competitors know what’s possible if they try, which can make it easier to get investment.
The P in GPT is Pretrained. It’s core to the architecture’s design. You would need to use some other ANN design if you wanted it to continuously update, and there is a reason we don’t use those at scale at the moment: they scale much worse than pretrained transformers.
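To illustrate the distinction: at inference time the weights are frozen, so nothing the model reads changes it, whereas continuous updating would mean running training-style gradient steps all the time, and that’s the part that doesn’t scale. A toy PyTorch sketch, with a single linear layer standing in for the transformer:

```python
# Toy illustration of frozen weights at inference vs. a weight-updating training
# step. A single linear layer stands in for the transformer; the point is only
# that "using" the model changes nothing, while "training" it does.
import torch
import torch.nn as nn

model = nn.Linear(8, 8)          # stand-in for a pretrained network
before = model.weight.clone()

# Inference: no gradients, no updates -- this is all a deployed GPT does.
with torch.no_grad():
    _ = model(torch.randn(1, 8))
assert torch.equal(model.weight, before)      # weights untouched

# "Continuous learning" would mean doing this constantly, at enormous scale:
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss = model(torch.randn(1, 8)).pow(2).mean()
loss.backward()
opt.step()
assert not torch.equal(model.weight, before)  # weights have changed
```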
I’m genuinely surprised any time I get anything remotely useful from any of the AI chatbots out there. About half the responses are beyond-basic shit that I could’ve written on my own or just found by Googling, or it’ll give just plain wrong information. It’s almost useless for important, fact-based information if you can’t trust any of its responses, so the only thing it’s good for is brainstorming creative ideas or porn, and the majority of them out there won’t touch anything even mildly titillating. So you’re just left with this overly sensitive chatbot where crafting a good prompt takes about as much work as just writing the answer out yourself.
I tried playing a game of 20 Questions with one of them (my word was “donkey”, it was way off and even cheated a bit) and it kind of scolded me at the end because I told it the thing wasn’t bigger than a house, as if I was the one who got that fact wrong.
Same. I don’t like my own habit of compulsively writing long, nervous texts, but the side effect is that I can write most of what people want from LLMs quicker and more easily myself.
I tend to overwrite in everything that I do; I actually kind of enjoy writing. Coming up with ideas and concepts and stuff is probably one of the more enjoyable aspects of the creative process for me. It’s about exploring possibilities and discovery, and you never know where you’ll end up. Brainstorming is partly about making connections between seemingly unrelated things. Having a chatbot just blurt out a bunch of lazy, half-formed ideas seems counter-productive to me; it kind of taints the pool of ideas before you’ve even started. You’re starting off having to sift through a bunch of lazy ideas to try to find anything of value.
The image generation stuff is fun though, and it’s interesting what it comes up with sometimes, but the LLM text shit is just not there yet.
I think they mean we should put it out of its misery?
Controlling drones to go get putin dead or… hey! Is that a Buick? Anyway yes, putin please 🙏.
Given that they quietly walked back their stance on military projects during the Altman drama, my guess would be MIL-related contracts.
What does your mother-in-law have to do with AI?
The free version sucks and the paid version increasingly refuses to do things for ‘safety’ and they have competitors finally catching up.
So where it goes is GPT-5 this year.
It pisses me off when I ask it to do something specific and it comes back with some verbose response about all the things I should think about or look into if I were to want to do that thing. Like, bitch, I’m not asking for advice, do it.
This article will be aged junk in 3, 2…
I get that the article is about user count, but that really comes down to perceived usefulness and to more decent AI competitors to ChatGPT Pro.
They literally only just released text-to-video, which they say will be used as a foundation for AGI reasoning.
They have also hinted that training of GPT-5 has begun and that it will be faster to train than GPT-4.
Just before that, Google came out with a new model that can keep track of 10 million tokens, beats Gemini Pro, and is also much faster to train. Gemini Pro is barely a month old.
This will not be a quiet year for AI; if there’s any flatline, it’s going to be vertical, and Google’s progress nearly is. Most AI progress benefits the entire industry over time.
hopefully it dies a quick but very painful death
How dare they provide a useful tool like this, those bastards.
Was this article written by an AI?
Why do you think so, and why does it matter?
Exactly. Article looked fine to me, if it was AI-written then it did a good job.
The image is AI. Look at the keys on that keyboard. What fucking language is that?
And for you AI fanbois who have to downvote this because nooooooo AI images future - pffft. Your boos mean nothing to me, I’ve seen what makes you cheer.
I’m downvoting you because you’re annoying and a detriment to the conversation, not because you recognised an AI generated image, which really didn’t require an inspection of the keyboard to determine.
Oh yeah, well you’re snobbish and truculent and I hereby demote you to poophead. Plus, you should see the other thread where the “obvious” AI image was roundly supported as being real.
That’s what the 7 TRILLION dollars they are seeking are for.
GPT tried to convince me that there was more time in 366 days than 1.78 years.
Large language models are notoriously poor at math; you should probably use a different tool for that sort of thing.
glorified autocorrect bad at math. who could have guessed
How do you reconcile that with all of the people claiming they use it to write code?
Writing code to do math is different from actually doing the math. I can easily write “x = 8982.2 / 98984”, but ask me what value x actually has and I’ll need to do a lot more work and quite probably get it wrong.
This is why one of the common improvements for LLM execution frameworks these days is to give them access to external tools. Essentially, give it access to a calculator.
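Roughly, the pattern looks like this: the framework watches the model’s output for a tool request, does the arithmetic with real code, and feeds the answer back into the conversation. The CALC: convention below is invented purely for illustration; real frameworks use structured function calling, but the flow is the same:

```python
# Toy sketch of the "calculator tool" pattern. The CALC: convention is invented
# for illustration; real frameworks use structured function/tool calls, but the
# flow is the same: model asks, host computes, result goes back into the context.
import ast
import operator as op

_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a plain arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def handle_model_output(text: str) -> str:
    # If the model "asks" for the calculator, do the math for it and return
    # the result to be appended back into the conversation.
    if text.startswith("CALC:"):
        return str(safe_eval(text[len("CALC:"):].strip()))
    return text

# The 366-days-vs-1.78-years question from upthread, done with actual arithmetic:
print(handle_model_output("CALC: 1.78 * 365.25"))  # ~650 days, so 1.78 years > 366 days
```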
If you’re like most developers, cognitive dissonance? https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality
Jesus Christ, can we leave things alone that aren’t infinitely growing
Well, there will be unsatisfied demands as long as humanity exists, so there is always room for good growth.
Only that’s about infrastructure, or housing, or food production. Not chatbots.
“Gentlemen, it’s come to our attention that everyone who could pay to use our product is paying to use our product. Unfortunately, it also means we’re no longer growing infinitely like we promised the shareholders we would. How do we fix this?”
The infinite growth mindset is so fucking stupid. Like, you’re still making an insane amount of money, what’s the fucking problem?
Business people really are just monkeys chasing shiny things. They tend to be less developed emotionally and are often very insecure on top of the entitlement. All they have is the chase, nothing else.
The most useless degree a university can grant is one in business administration.
Because stock bros be lazy AF. They can’t even be bothered to buy and sell stock based on who is and isn’t doing well, so they utilize investment firms who offer safe and risky bets like ETFs and futures, respectively. Ultimately, everyone really just wants to buy one stock, have it make them money that exponentially makes even more money quarter after quarter forever, and then do whatever they’d actually do if money were no object (i.e., actually live life).
We don’t live in a world where desire/need scales evenly with this desire for exponential, eternal growth, so capitalists, who promise this impossible prospect to Wall Street, exploit human fears, desires, and needs however they can (union busting, lobbying, etc.).
Carlin ultimately said it best in what has to be one of the greatest bits of all time, on The Big Club.
Still, I feel a tickle of admiration for nature, seeing how these two seemingly unconnected things work together (the human dream of making something from nothing without work, and politicians being corrupt without realizing it).
Yup. Shareholders are the problem, who bought shares at price X and want to sell those shares at X+Y.
And they will do anything to get it.
Yes, but what they stupidly never realize is that Y is a signed integer, not unsigned.
No. Money has to make money with those sweet, sweet interest payments. It’s baked into the system. How else are you going to maintain a small elite of filthy rich people?
Ever more wealth must constantly be created in the world purely to service the interest on all the debt out there, otherwise you would get defaults and banks failing.
It’s not by chance that everywhere the “solution” for the 2008 crash (which happened mainly due to over-indebtedness in the mortgage segment) was to lower interest rates to pretty much zero: it weakens the pressure on the entire system to constantly grow merely to generate the additional wealth needed to pay the interest on the debt.
You’ll also notice that as soon as interest rates went up just a bit, bank profits massively grew.
This!
Jesus Christ it’s not all about growth.
Line must go up
This was always going to be limited. Eventually, no matter how much data you dump in, it won’t be unique enough to train anything new into the model.