Can we please get rid of the tech bros too?
Hopefully this means the haters will shut up and we can get on with using it for useful stuff
That would be absolutely amazing. How can we work out a community effort designed to teach? With some crowdsourced tests, maybe we can bring education to the masses for free…
That would indeed be great but completely unrelated to what I said so I suspect you may have answered the wrong person
Now I want the haters to shut up so we can make some cool s*** too
You’re no longer using the term Luddite on us! Character development!
Oh you’re a luddite, you’re also a hater and about as intractable and stupid as a trump supporter. You can be many crappy things at once!
Shitty useless pictures each costing kilowatt hours.
I mean, machine learning and AI does have benefits especially in research in the medical field. The consumer AI products are just stupid though.
It’s helped me learn coding and Spanish, and helped me build scripts which I would never have been able to write by myself or with technical manuals alone.
If we’re talking specifically about the value I get out of what GPT is right now, it’s priceless to me. Like a second, albeit braindead, systems administrator on my shoulder when I need something I don’t want to type out myself. And whatever mistakes it makes are within my ability to repair on my own without fighting for it.
AI didn’t do that. It stole all the information for free on the internet from people who tried to help others and make money off it.
Have you ever used Google translate or apps that identify bugs/plants/songs? AI is used in products you most likely use every week.
You are also arguing for a closed garden system where companies like reddit and Getty get to dictate who can make models and at what price.
Individuals are never getting a dime out of this. In a perfect world, governments would be fighting for copyleft licenses for anything using big data, but every law being proposed is meant to create a soft monopoly owned by Microsoft and Google and kill open-source.
I can very much so assure you that chatGPT did all of those things for me.
“PIXAR DIDNT MAKE TOY STORY!! THE CGI ARTISTS DID!!!”
I know coding now, to a degree. But when I mess something up I’m not going to post to a random forum somewhere to see if someone feels like looking at my problem, and then when they view the issue someone feels the need to include their non-objective solution or answer. I don’t want conversation or to be told “see you CoULD do this yourself If you did this”
Like yea that’s cool…but my building just disconnected from the outside world and 1200 people are now expressing their concern, and me being told to just Google it when my Juniper flipped a bit isn’t going to cut it. And Andy in Montana just locked my post because “this question has been answered before” with no elaboration. And Spiceworks has my exact issue but it’s closed because: problem solved. But they didn’t show their work.
How about this; when books first became widely adopted people bitched that the youth would get lazy. Then it was radio. Then it was television. Then it was the Internet. Then it was social media. Now it’s AI.
The race is always going. But you can stop whenever you feel uncomfortable. But the rest of the pack is going to keep moving to the finish line that never shows up. And newcomers can join at any time.
——
For Spanish learning, I can now have full, endless conversations with something that never gets tired. It never stops being objective. Since the task is so simple it never fucks up or hallucinates. It never tells me it has other things to do. It never discourages or demeans me when I get something wrong. In fact it even plays along with whatever speed or level of language I need, such as kindergarten level or elementary level. And all of this is supplemental to actually learning through other means. Try to get that consistency on reddit. Whether that be speed, integrity or volition.
Your suggestion works in 2015 when cleverbot was around or when siri was a creature comfort but it’s 10 years later.
Oh and all of what I mentioned is free - to me.
No, no, and also no. Try again? Or cram your face into a blender? Either is good with me
Are you ok? Too long in the sun?
Bit tired (had to get up too early today) but otherwise okay, thanks. How’s your face? Blended to a fine paste yet?
The pictures aren’t very good I’ll grant you that, but they definitely don’t require even one kWh per image, and besides that basically everything made with a computer costs power. We waste power on nonsense just fine without the help of LLMs or diffusion models.
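A quick back-of-envelope sketch backs that up. The wattage and per-image time here are assumptions for illustration, not measurements, but even with generous numbers the result sits far below one kWh per image:

```python
# Rough estimate of energy per generated image (assumed numbers, not measurements):
# a consumer GPU drawing ~350 W under load, taking ~5 s per image.
gpu_watts = 350          # assumed power draw while generating
seconds_per_image = 5    # assumed generation time
watt_seconds = gpu_watts * seconds_per_image
kwh_per_image = watt_seconds / 3600 / 1000  # Ws -> Wh -> kWh
print(f"{kwh_per_image:.6f} kWh per image")  # roughly 0.0005 kWh
```

Even if those guesses are off by an order of magnitude, you’re still a couple thousand images per kWh, not kilowatt-hours per image.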
So should we be fearing a new crash?
Do you have money and/or personal emotional validation tied up in the promise that AI will develop into a world-changing technology by 2027? With AGI in everyone’s pocket giving them financial advice, advising them on their lives, and romancing them like a best friend with Scarlett Johansson’s voice whispering reassurances in your ear all day?
If you are banking on any of these things, then yeah, you should probably be afraid.
Argh, after 25 years in tech I am surprised this keeps surprising you.
We’ve crested for sure. AI isn’t going to solve everything. AI stock will fall. Investor pressure to put AI into everything will subside.
Then we will start looking at AI as a cost-benefit analysis. We will start applying it where it makes sense. Things will get optimised. Real profit and long-term change will happen over 5-10 years. And afterwards, the utterly magical will seem mundane while everyone is chasing the next hype cycle.
Truth. I would say the actual time scales will be longer, but this is the harsh, soul-crushing reality that will make all the kids and mentally disturbed cultists on r/singularity scream in pain and throw stones at you. They’re literally planning for what they’re going to do once ASI changes the world into a Star Trek, post-scarcity civilization… in five years. I wish I was kidding.
I’m far far more concerned about all the people who were deemed non essential so quickly after being “essential” for so long because AI will do so much work slaps employees with 2 weeks severance
I’m right there with you. One of my daughters loves drawing and designing clothes and I don’t know what to tell her in terms of the future. Will human designs be more valued? Less valued?
I’m trying to remain positive; when I went into software my parents barely understood that anyone could make a living off that “toy computer”.
But I agree; this one feels different. I’m hoping they all feel different to the older folks (me).
Have any regular users actually looked at the prices of the “AI services” and what they actually cost?
I’m a writer. I’ve looked at a few of the AI services aimed at writers. These companies literally think they can get away with “Just Another Streaming Service” pricing, in an era where people are getting really really sceptical about subscribing to yet another streaming service and cancelling the ones they don’t care about that much. As a broke ass writer, I was glad that, with NaNoWriMo discount, I could buy Scrivener for €20 instead of regular price of €40. [note: regular price of Scrivener is apparently €70 now, and this is pretty aggravating.] So why are NaNoWriMo pushing ProWritingAid, a service that runs €10-€12 per month? This is definitely out of the reach of broke ass writers.
Someone should tell the AI companies that regular people don’t want to subscribe to random subscription services any more.
As someone dabbling with writing, I bit the bullet and started looking into the tools to see if they’re actually useful. I was impressed with the promised features like grammar help, sentence structure, and making sure I don’t leave loose ends in the story; these are genuinely useful tools if you’re not using the generative capability to let it write mediocre bullshit for you.
But I noticed right away that I couldn’t justify a subscription between $20 - $30 a month, on top of the thousand other services we have to pay monthly for, including even the writing software itself.
I have lived fine and written great things in the past without AI, I can survive just fine without it now. If these companies want to actually sell a product that people want, they need to scale back the expectations, the costs and the bloated, useless bullshit attached to it all.
At some point soon, the costs of running these massive LLMs versus the number of people actually willing to pay a premium for them are going to exceed reasonable expectations, and we will see the companies that host the LLMs start to scale everything back as they try to find some new product to hype and generate investment on.
That’s a good point about the “AI as a service” model that is emerging.
I was reading that NaNoWriMo has had a significant turnover on their board due to the backlash against their pro-AI stance: https://www.cbc.ca/news/entertainment/nanowrimo-ai-controversty-1.7314090
I work for an AI company that’s dying out. We’re trying to charge companies $30k a year and upwards for basically chatgpt plus a few shoddily built integrations. You can build the same things we’re doing with Zapier, at around $35 a month. The management are baffled as to why we’re not closing any of our deals, and it’s SO obvious to me - we’re too fucking expensive and there’s nothing unique with our service.
I don’t think AI is ever going to completely disappear, but I think we’ve hit the barrier of usefulness for now.
I’ve noticed people have been talking less and less about AI lately, particularly online and in the media, and absolutely nobody has been talking about it in real life.
The novelty has well and truly worn off, and most people are sick of hearing about it.
Yeah, now we are gonna get the reality of deep fakes; fun times.
It’s like 3D TVs, for a lot of consumer applications tbh
Oh fuck that’s right, that was a thing.
Goddamn
3D has been a thing every 15 years or so
3D TVs were a commercial fad once and I haven’t seen them in forever.
2016 may have been the end of them
Yes but 3D is always a thing periodically.
I used shutter glasses with two voodoo2 cards…
I used shutter glasses with the sega master system back in 87. They were rad af
The hype is still percolating, at least among the people I work with and at the companies of people I know. Microsoft pushing Copilot everywhere makes it inescapable to some extent in many environments, there’s people out there who have somehow only vaguely heard of ChatGPT and are now encountering LLMs for the first time at work and starting the hype cycle fresh.
Thank fucking god.
I got sick of the overhyped tech bros pumping AI into everything with no understanding of it…
But then I got way more sick of everyone else thinking they’re clowning on AI when in reality they’re just demonstrating an equal sized misunderstanding of the technology in a snarky pessimistic format.
The tech bros had to find an excuse to use all the GPUs they got for crypto after they bled that dry
If that’s the reason, I wouldn’t even be mad, that’s recycling right there.
The tech bros had to find an excuse to use all the GPUs they got for crypto after they ~~bled that dry~~ upgraded to proof-of-stake.

I don’t see a similar upgrade for “AI”.
And I’m not a fan of BTC but $50,000+ doesn’t seem very dry to me.
No, it’s when people realized it’s a scam
As I job-hunt, every job listed over the past year has been “AI-driven [something]” and I’m really hoping that trend subsides.
“This is a mid-level position requiring at least 7 years experience developing LLMs.” -Every software engineer job out there.
Reminds me of when I read about a programmer getting turned down for a job because they didn’t have 5 years of experience with a language that they themselves had created 1 to 2 years prior.
Yeah, I’m a data engineer and I get that there’s a lot of potential in analytics with AI, but you don’t need to hire a data engineer with LLM experience for aggregating payroll data.
there’s a lot of potential in analytics with AI
I’d argue there is a lot of potential in any domain with basic numeracy. In pretty much any business or institution somebody with a spreadsheet might help a lot. That doesn’t necessarily require any Big Data or AI though.
That was cloud 7 years ago and blockchain 4
I’m more annoyed that Nvidia is looked at like some sort of brilliant strategist. It’s a GPU company that was lucky enough to be around when two new massive industries found an alternative use for graphics hardware.
They happened to be making pick axes in California right before some prospectors found gold.
And they don’t even really make pick axes, TSMC does. They just design them.
They just design them.
It’s not trivial though. They also managed to lock devs in with CUDA.
That being said I don’t think they were “just” lucky, I think they built their luck through practices the DoJ is currently investigating for potential abuse of monopoly.
Yeah, CUDA made a lot of this possible.
Once crypto mining was too hard nvidia needed a market beyond image modeling and college machine learning experiments.
Imo we should give credit where credit is due, and I agree, not a genius; still, my pick is a 4080 for a new gaming computer.
They didn’t just “happen to be around”. They created the entire ecosystem around machine learning while AMD just twiddled their thumbs. There is a reason why no one is buying AMD cards to run AI workloads.
I feel like for a long time, CUDA was a laser looking for a problem.
It’s just that the current (AI) problem might solve expensive employment issues.
It’s just that C-Suite/managers are pointing that laser at the creatives instead of the jobs whose task it is to accumulate easily digestible facts and produce a set of instructions. You know, like C-Suites and middle/upper managers do.
And NVidia have pushed CUDA so hard. AMD have ROCm, an open-source CUDA equivalent for AMD.
But it’s kinda like Linux Vs windows. NVidia CUDA is just so damn prevalent.
I guess it was first. CUDA has wider compatibility with Nvidia cards than ROCm has with AMD cards.
The only way AMD can win is to show a performance boost for a power reduction and cheaper hardware. So many people are entrenched in NVidia that the cost of switching to ROCm/AMD is a huge gamble. One of the reasons being Nvidia forcing unethical vendor lock-in through their licensing.
Go ahead and design a better pickaxe than them, we’ll wait…
Go ahead and design a better pickaxe than them, we’ll wait…
Same argument:
“He didn’t earn his wealth. He just won the lottery.”
“If it’s so easy, YOU go ahead and win the lottery then.”
My fucking god.
“Buying a lottery ticket, and designing the best GPUs, totally the same thing, amiriteguys?”
His engineers built it, he didn’t do anything there
In the sense that it’s a matter of being in the right place at the right time, yes. Exactly the same thing. Opportunities aren’t equal - they disproportionately affect those who happen to be positioned to take advantage of them. If I’m giving away a free car right now to whoever comes by, and you’re not nearby, you’re shit out of luck. If AI didn’t HAPPEN to use massively multi-threaded computing, Nvidia would still be artificial scarcity-ing themselves to price gouge CoD players.

The fact you don’t see it for whatever reason doesn’t make it wrong. NOBODY at Nvidia was there 5 years ago saying “Man, when this new technology hits we’re going to be rolling in it.” They stumbled into it by luck. They don’t get credit for foreseeing some future use case. They got lucky. That luck got them first mover advantage. Intel had that too. Look how well it’s doing for them.

Nvidia’s position over AMD in this space can be due to any number of factors… production capacity, driver flexibility, faster functioning on a particular vector operation, power efficiency… hell, even the relationship between the CEO of THEIR company and OpenAI. Maybe they just had their salespeople call first. Their market dominance likely has absolutely NOTHING to do with their GPUs having better graphics performance, and to the extent they are, it’s by chance - they did NOT predict generative AI, and their graphics cards just HAPPEN to be better situated for SOME reason.
they did NOT predict generative AI, and their graphics cards just HAPPEN to be better situated for SOME reason.
This is the part that’s flawed. They have actively targeted neural network applications with hardware and driver support since 2012.
Yes, they got lucky in that generative AI turned out to be massively popular, and required massively parallel computing capabilities, but luck is one part opportunity and one part preparedness. The reason they were able to capitalize is because they had the best graphics cards on the market and then specifically targeted AI applications.
Well, they also kept telling investors all they need to simulate a human brain was to simulate the amount of neurons in a human brain…
The stupidly rich loved that, because they want computer backups for “immortality”. And they’d dump billions of dollars into making that happen
About two months ago tho, we found out that the brain uses microtubules to put tryptophan into superposition, and it can maintain that for a crazy amount of time, like longer than we can do in a lab.
The only argument against a quantum component for human consciousness was that people thought there was no way to even get regular quantum entanglement in a human brain.
We’ll be lucky to be able to simulate that stuff in 50 years, but it’s probably going to be even longer.
Every billionaire who wanted to “live forever” this way, just got aged out. So they’ll throw their money somewhere else now.
I used to follow the Penrose stuff and was pretty excited about QM as an explanation of consciousness. But if this is the kind of work they’re reaching at, it’s pretty sad. It’s not even anything. Sometimes you need to go with your gut, and my gut is telling me that if this is all the QM people have, consciousness is probably best explained by complexity.
https://ask.metafilter.com/380238/Is-this-paper-on-quantum-propeties-of-the-brain-bad-science-or-not
Completely off topic from ai, but got me curious about brain quantum and found this discussion. Either way, AI still sucks shit and is just a shortcut for stealing.
That’s a social media comment from some Ask Yahoo knockoff…
Like, this isn’t something no one is talking about, you don’t have to solely learn about that from unpopular social media sites (including my comment).
I don’t usually like linking videos, but I’m feeling like that might work better here
https://www.youtube.com/watch?v=xa2Kpkksf3k
But that PBS video gives a really good background and then talks about the recent discovery.
some Ask Yahoo knockoff…
AskMeFi predated Yahoo Answers by several years (and is several orders of magnitude better than it ever was).
And that linked account’s last comment was advocating for Biden to stage a pre-emptive coup before this election…
https://www.metafilter.com/activity/306302/comments/mefi/
It doesn’t matter if it was created before Ask Yahoo or if it’s older.
It’s random people making random social media comments, sometimes stupid people make the rare comment that sounds like they know what they’re talking about. And I already agreed no one had to take my word on it either.
But that PBS video does a really fucking good job explaining it.
Cuz if I can’t explain to you why a random social media comment isn’t a good source, I’m sure as shit not going to be able to explain anything like Penrose’s theory on consciousness to you.
It doesn’t matter if it was created before Ask Yahoo or if it’s older.
It does if you’re calling it a “knockoff” of a lower-quality site that was created years later, which was what I was responding to.
Great.
So the social media site is older than I thought, and the person who made the comment on that site is a lot stupider than it seemed.
Like, Facebook’s been around for about 20 years. Would you take a link to a Facebook comment over PBS?
My man, I said nothing about the science or the validity of that comment, just that it’s wrong to call Ask MetaFilter “some Ask Yahoo knockoff”. If you want to get het up about an argument I never made, you do you.
A.I., Assumed Intelligence
More like PISS, a Plagiarized Information Synthesis System
Welp, it was ‘fun’ while it lasted. Time for everyone to adjust their expectations to much more humble levels than were promised and move on to the next scheme. After Metaverse, NFTs and ‘Don’t become a programmer, AI will steal your job literally next week!11’, I’m eager to see what they come up with next. And by eager I mean I’m tired. I’m really tired and hope the economy just takes a damn break from breaking things.
I just hope I can buy a graphics card without having to sell organs some time in the next two years.
Don’t count on it. It turns out that the sort of stuff that graphics cards do is good for lots of things, it was crypto, then AI and I’m sure whatever the next fad is will require a GPU to run huge calculations.
I’m sure whatever the next fad is will require a GPU to run huge calculations.
I also bet it will, cf. my earlier comment on render farms and looking for what “recycles” old GPUs: https://lemmy.world/comment/12221218. Namely, that it makes sense to prepare for it now and look for what comes next BASED on the current most popular architecture. It might not be the most efficient but probably will be the most economical.
AI is shit but imo we have been making amazing progress in computing power, just that we can’t really innovate atm, just more race to the bottom.
——
I thought capitalism bred innovation, did the tech bros lie?
/s
If there are even GPUs being sold. It’s much more profitable for Nvidia to just make compute-focused chips than to upgrade their gaming lineup. GeForce will just get the compute-chip rejects and laptop GPUs for the lower-end parts. After the AI bubble bursts, maybe they’ll get back to their gaming roots.
My RX 580 has been working just fine since I bought it used. I’ve not been able to justify buying a new (used) one. If you have one that works, why not just stick with it until the market gets flooded with used ones?
I’d love an upgrade for my 2080 TI, really wish Nvidia didn’t piss off EVGA into leaving the GPU business…
But if it doesn’t disrupt it isn’t worth it!
/s
move on to the next […] eager to see what they come up with next.
That’s a point I’m making in a lot of conversations lately : IMHO the bubble didn’t pop BECAUSE capital doesn’t know where to go next. Despite reports from big banks that there is a LOT of investment for not a lot of actual returns, people are still waiting on where to put that money next. Until there is such a place, they believe it’s still more beneficial to keep the bet on-going.
Wall Street has already milked “the pump” now they short it and put out articles like this
The fact that it is from the LA Times shows that it’s still significant though
I’m just praying people will fucking quit it with the worries that we’re about to get SKYNET or HAL when binary computing would inherently be incapable of recreating the fast pattern recognition required to replicate or outpace human intelligence.
Moore’s law is about scaling computing power, which is a measure of hardware performance, not of the software you can run on it.
Unfortunately it’s part of the marketing, thanks OpenAI for that. “Oh no… we can’t share GPT-2, too dangerous”, then… here it is. Definitely interesting then, but not world-shattering. Same for GPT-3… but through exclusive partnership with Microsoft, all closed; rinse and repeat for GPT-4. It’s a scare tactic to lock down what was initially open, both directly and by closing the door behind them through regulation, at least trying to.
My only real hope out of this is that that copilot button on keyboards becomes the 486 turbo button of our time.
Meaning you unpress it, and computer gets 2x faster?
Actually you pressed it and everything got 2x slower. Turbo was a stupid label for it.
I could be misremembering but I seem to recall the digits on the front of my 486 case changing from 25 to 33 when I pressed the button. That was the only difference I noticed though. Was the beige bastard lying to me?
Lying through its teeth.
There was a bunch of DOS software that ran too fast to be usable on later processors. Like a Rogue-like game where you fly across the map too fast to control. The Turbo button would bring it down to 8086 speeds so that stuff was usable.
Damn. Lol I kept that turbo button down all the time, thinking turbo = faster. TBF to myself it’s a reasonable mistake! Mind you, I think a lot of what slowed that machine was the hard drive. Faster than loading stuff from a cassette tape but only barely. You could switch the computer on and go make a sandwich while windows 3.1 loads.
Oh, yeah, a lot of people made that mistake. It was badly named.
That’s… the same thing.
Whoops, I thought you were responding to the first child comment.
I was thinking pressing it turns everything to shit, but that works too. I’d also accept, completely misunderstood by future generations.
Well now I wanna hear more about the history of this mystical shit button
Back in those early days many applications didn’t have proper timing; they basically just ran as fast as they could. That was fine on an 8 MHz CPU as you probably just wanted stuff to run as fast as it could (we weren’t listening to music or watching videos back then). When CPUs got faster (or it could be that it started running at a multiple of the base clock speed), stuff was suddenly happening TOO fast. The turbo button was a way to slow down the clock speed by some amount to make legacy applications run as they were supposed to run.
Most turbo buttons never worked for that purpose, though; they were still way too fast. Like, even ignoring other advances such as better IPC (or rather CPI back in those days), you don’t get to an 8 MHz 8086 by halving the clock speed of a 50 MHz 486. You get to 25 MHz. And practically all games past that 8086 stuff were written with proper timing code because devs knew perfectly well that they were writing for more than one CPU. Also there’s software to do the same job but more precisely and flexibly.
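The “no proper timing” failure mode can be sketched in a few lines. This is an illustrative toy in Python, not real DOS code; the function names and numbers are made up for the example:

```python
import time

# Frame-locked movement: the object advances a fixed step per loop iteration,
# so a CPU that runs the loop 5x faster moves the object 5x faster.
# This is why untimed DOS games became unplayable on newer processors.
def frame_locked(iterations, step=1):
    pos = 0
    for _ in range(iterations):
        pos += step  # speed depends entirely on how many loops the CPU manages
    return pos

# Delta-timed movement: the object advances by speed * elapsed wall-clock time,
# so the end position is (roughly) the same on any CPU.
def delta_timed(duration_s, speed=100.0):
    pos = 0.0
    start = last = time.monotonic()
    while time.monotonic() - start < duration_s:
        now = time.monotonic()
        pos += speed * (now - last)  # scale movement by real elapsed time
        last = now
    return pos

# A machine doing twice the loop iterations moves the frame-locked object twice as far:
assert frame_locked(1000) == 2 * frame_locked(500)
```

The turbo button was a hardware hack to fake the first style into behaving; delta timing is the software fix everyone eventually adopted.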
It probably worked fine for the original PC-AT or something when running PC-XT programs (how would I know, our first family box was a 386), but after that it was pointless. Then it hung on for years, then it vanished.