I’ve tried several types of artificial intelligence, including Gemini, Microsoft Copilot, and ChatGPT. A lot of the time I ask them questions and they get everything wrong. If artificial intelligence doesn’t work, why are they trying to make us all use it?
Cause it’s cool
Not to me. If you like it, that’s fine.
Perhaps your personal bias is clouding your judgement a bit here. You don’t seem very open-minded about it. You’ve already made up your mind.
Probably but I’m far from the only one.
This is like saying that automobiles are overhyped because they can’t drive themselves. When I code up a new algorithm at work, I’m spending an hour or two whiteboarding my ideas, then the rest of the day coding it up. AI can’t design the algorithm for me, but if I can describe it in English, it can do the tedious work of writing the code. If you’re just using AI as a Google replacement, you’re missing the bigger picture.
I’m retired. I don’t do all that stuff.
Maybe look into the creativity side more and less ‘Google replacement’?
I’ll see if I can think of something creative to do. I was just reading an article from MIT that pointed out that one reason AI is bad at search is that it can’t determine whether a source is accurate. It can’t tell the difference between Reddit and Harvard.
Neither can most of reddit…
The hype machine said we could use it in place of search engines for intelligent search. Pure BS.
Yes. Far more useful to embrace its hallucinogenic qualities…
A lot of people are doing work that can be automated in part by AI, and there’s a good chance that they’ll lose their jobs in the next few years if they can’t figure out how to incorporate it into their workflow. Some people are indeed out of the workforce or in industries that are safe from AI, but that doesn’t invalidate the hype for the rest of us.
As a beginner in self-hosting, I like plugging the random commands I find online into an LLM. I ask it what the command does, what I’m trying to achieve, and if it would work…
It acts like a mentor. I don’t trust what it says entirely, so I’m constantly sanity-checking it, but it gets me to where I want to go with some back and forth. I’m doing some of the problem solving, so there’s that exercise, and it also teaches me what commands do and how the flags alter them. It’s also there to stop me making really stupid mistakes that I would have learned the hard way without it.
Last project was adding an HDD to my zpool as a mirror. I found the “attach” command online with a bunch of flags. I made what I thought was my solution and asked ChatGPT. It corrected some stuff: I hadn’t included the name of my zpool. Then it gave me a procedure to do it properly.
In that procedure I noticed an inconsistency in how I was naming drives vs how my zpool was naming drives. Asked ChatGPT again, and I was told I was a dumbass: if that’s the naming convention, I should probably use that one instead of mine (I was using /dev/sdc and the zpool was using /dev/disk/by-id/). It told me why the zpool might have been configured that way, so that was a teaching moment: I’m using USB drives, and the zpool wants to protect itself if the setup gets switched around. I clarified the names and rewrote the command… well, not really, ChatGPT was constantly updating the command as we went… Boom, I have mirrored my drives, I’ve made all my stupid mistakes in private and away from production, life is good.
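For anyone wanting to try the same thing, a rough sketch of what that procedure ends up looking like (pool and disk names here are hypothetical placeholders; check your own with `zpool status` before running anything):

```shell
# List stable disk identifiers; by-id names survive USB port reshuffles,
# which is why the zpool prefers them over /dev/sdX names.
ls -l /dev/disk/by-id/

# See how the existing pool refers to its devices.
zpool status mypool

# Attach the new disk to the existing one, turning a single disk into a mirror.
# "mypool" and both by-id paths are placeholders for your own names.
zpool attach mypool /dev/disk/by-id/usb-EXISTING_DISK /dev/disk/by-id/usb-NEW_DISK

# Watch the resilver finish before trusting the mirror.
zpool status mypool
```

These commands need a real pool and real disks, so treat them as a template rather than something to paste blindly.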
A lot of jobs are bullshit. Generative AI is good at generating bullshit. This led to a perception that AI could be used in place of humans. But unfortunately, curating that bullshit enough to produce any value for a company still requires a person, so the AI doesn’t add much value. The bullshit AI generates needs some kind of oversight.
It depends on the task you give it and the instructions you provide. I wrote this a while back; I find it gives a 10x boost in capability, especially if you use a non-aligned LLM like Dolphin 8x22B.
I have no idea what any of that means. But thanks for the reply.
Novelty, lack of understanding, and avarice.
Robots don’t demand things like “fair wages” or “rights”. It’s way cheaper for a corporation to, for example, use a plagiarizing artificial unintelligence to make images for something, as opposed to commissioning a human artist who most likely will demand some amount of payment for their work.
Also I think that it’s partially caused by people going “ooh, new thing!” without stopping to think about the consequences of this technology or if it is actually useful.
Disclaimer: I’m going to ignore all moral questions here
Because it represents a potentially large leap in the types of problems we can solve with computers. Previously the only comparable tool we had to solve problems were algorithms, which are fast, well-defined, and repeatable, but cannot deal with arbitrary or fuzzy inputs in a meaningful way. AI excels at dealing with fuzzy inputs (including natural language, which was a huge barrier previously), at the expense of speed and reliability. It’s basically an entire missing half to our toolkit.
Be careful not to conflate AI in general with LLMs. AI is usually implemented as Machine Learning, which is a method of fitting an output to training data. LLMs are a specific instance of this that are trained on language (hence, large language models). I suspect that if AI becomes more widely adopted, most users will be interacting with LLMs like you are now, but most of the business benefit would come from classifiers that have a more restricted input/output space. As an example, you could use ML to train an AI that can be used to detect potentially suspicious bank transactions. The more data you have to sort through, the better AI can learn from it*, so I suspect the companies that have been collecting terabytes of data will start using AI to try to analyze it. I’m curious if that will be effective.
*technically it depends a lot on the training parameters
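To make the restricted input/output idea concrete, here’s a toy sketch. This is not machine learning proper and certainly not a real fraud model; the history, amounts, and threshold are all invented for illustration. It just shows the shape of the task: a narrow, well-defined input (transaction amounts) and a narrow output (suspicious or not), which is where this kind of automation tends to work best.

```python
from statistics import mean, stdev

def suspicion_score(history, amount):
    """Toy anomaly score: how many standard deviations the new
    transaction amount sits from the customer's historical mean."""
    mu = mean(history)
    sigma = stdev(history)
    return abs(amount - mu) / sigma

def is_suspicious(history, amount, threshold=3.0):
    """Flag transactions more than `threshold` standard deviations out."""
    return suspicion_score(history, amount) > threshold

# Hypothetical customer history of transaction amounts (dollars)
history = [20.0, 35.0, 25.0, 30.0, 40.0, 22.0, 28.0]

print(is_suspicious(history, 30.0))    # a typical amount
print(is_suspicious(history, 5000.0))  # a wildly atypical amount
```

A trained classifier would replace the hand-written threshold with parameters learned from labeled data, but the input/output contract stays the same.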
I suppose it depends on the data you’re using it for. I can see a computer looking through stacks of data in no time.
There is no artificial intelligence, just very large statistical models.
Artificial intelligence is a branch of computer science, of which LLMs are objectively a part.
It’s easier for the marketing department. According to an article, it’s neither artificial nor intelligent.
In what way is it not artificial
Artificial intelligence (AI) is not artificial in the sense that it is not fake or counterfeit, but rather a human-created form of intelligence. AI is a real and tangible technology that uses algorithms and data to simulate human-like cognitive processes.
Is human intelligence artificial? #philosophy
Well, using the definition that artificial means man made then no. Human intelligence wasn’t made by humans therefore it isn’t artificial.
I wonder if some of our intelligence is artificial. Being able to drive directly to any destination, for example, with a simple cell-phone lookup. Reading lifetimes worth of experience in books that doesn’t naturally come at birth. Learning incredibly complex languages that are inherited not by genes, but by environment–and, depending on the language, being able to distinguish different colors.
From the day I was born, my environment shaped what I thought and felt. Entering the school system, I was indoctrinated into whatever society I was born to. All of the things that I think I know are shaped by someone else. I read a book and I regurgitate its contents to other people. I read a post online and I start pretending that it’s the truth when I don’t actually know. How often do humans actually have an original thought? Most of the time we’re just regurgitating things that we’ve experienced, read, or heard from external forces rather than coming up with thoughts on our own.
I’m generally familiar with “artificial” to mean “human-created”
Humans created cars and cars are real. I tried to get some info from the Wired article but they paywalled me.
“Artificial” doesn’t mean “fake”, it usually means “human made”
That’s what Gemini said.
Found a link to Kate Crawford’s research. The quote is near the bottom of the article. It’s interesting, anyway.
When will people finally stop parroting this sentence? It completely misses the point and answers nothing.
It amazed people when it first launched, and capitalists took that to mean they could replace all their jobs with AI. Where we wanted AI to make shit jobs easier, they used it to replace whole swaths of talent across industries. Recent movies read like they were written almost entirely by AI. Like when Cartman was a robot and kept giving out terrible movie ideas.
Rich assholes have spent a ton of money on it and they need to manufacture reasons why that wasn’t a waste.
Who’s making you use it?
It’s useful for lots of things, but it requires a proof reader.
I try to do a search on Chrome and Gemini pops up and starts spewing its BS. I go into Messages to send a message and Gemini pops up and asks me if I want it to send a message for me. No, I know how to write my own stupid messages. It’s all integrated into Windows 11, it’s integrated into the Bing app. It’s like swatting flies trying to get rid of it.
Move to Linux, use self-hosted FOSS alternatives. Regain ownership of your digital existence. Stop being a slave to the big tech machine.
Computer? What could this strange device be? Another toy that helped destroy the elder race of man?
I only have a phone
Damn u can rent a vps and ssh from ur phone?
A what now with a thingy?
Ye the flumberboozle
The natural, general hype is not new… I even see it in 1970s sci-fi. It’s as if once something pierced the long-thought-impossible Turing test, decades of pent-up hype pressure suddenly and freely flowed.
There is also an unnatural hype (that with one breakthrough will come another) and that the next one might yield a technocratic singularity to the first-mover: money, market dominance, and control.
Which brings the tertiary effect (closer to your question)… companies are so quickly and blindly eating so many billions of dollars of first-mover costs that the corporate copium wants to believe there will be a return (or at least cost defrayal)… so you get a bunch of shitty AI products, and pressure towards them.
Interestingly, the Turing test has been passed by much dumber things than LLMs.
I’m not talking about one-offs and the assessment noise floor, more like: “ChatGPT broke the Turing test” (as is claimed). It used to be something we tried to attain, and now we don’t even bother trying to make GPT seem human… we actually train them to say otherwise lest people forget. We figuratively pole-vaulted over the Turing test and are now on the other side of it, as if it were a point on a timeline instead of an academic procedure.
True!
Sounds about right
I think there’s a lot of armchair simplification going on here. Easy to call investors dumb but it’s probably a bit more complex.
AI might not get better than where it is now but if it does, it has the power to be a societally transformative tech which means there is a boatload of money to be made. (Consider early investors in Amazon, Microsoft, Apple and even the much derided Bitcoin.)
Then consider that until incredibly recently, the Turing test was the yardstick for intelligence. We now have to move that goalpost after what was previously unthinkable happened.
And in the limited time with AI, we’ve seen scientific discoveries, terrifying advancements in war and more.
Heck, even if AI just gets better at code (not unreasonable: sets of problems with defined goals/outputs, etc., even if it gets parts wrong), shrinking a dev team of obscenely well-paid engineers to maybe a handful of supervisory roles… Well, like Wu-Tang said, Cash Rules Everything Around Me.
Tl;dr: huge possibilities, even if there’s a small chance of an almost infinite payout, that’s a risk well worth taking.
I’ll just toss in another answer nobody has mentioned yet:
Terminator and Matrix movies were really, really popular. This sort of seeded the idea of it being a sort of inevitable future into the brains of the mainstream population.
The Matrix was a documentary