US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.
In a survey comparing views of a nationally representative sample (5,410) of the general public to a sample of 1,013 AI experts, the Pew Research Center found that “experts are far more positive and enthusiastic about AI than the public” and “far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years” (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally rather than harm them (15 percent).
The public does not share this confidence. Only about 11 percent of the public says that “they are more excited than concerned about the increased use of AI in daily life.” They’re much more likely (51 percent) to say they’re more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.
deleted by creator
They’re right. What happens to the workers when they’re no longer required? The horses faced a similar issue at the advent of the combustion engine. The solution? Considerably fewer horses.
the same could be applied to humans… but then who would buy consumer goods?
In all seriousness though, the only solution is for the cost of living to go down and for a UBI to exist, so that the average person can choose not to work and strikes become a legitimate threat to business because they can more feasibly last for months.
What’s the point of producing goods for “useless eaters”?
money
They won’t have any money.
But as for the people who worked with horses, I’m pretty sure they found different jobs - it’s not like they were sent to a glue factory.
Of course, they learned to code.
And became influencers
The first thing seen at the top of WhatsApp now is an AI query bar. Who the fuck needs anything related to AI on WhatsApp?
Android Messages and Facebook Messenger also pushed in AI as ‘something you can chat with’
I’m not here to talk to your fucking chatbot, I’m here to talk to my friends and family.
deleted by creator
Right?! It’s literally just a messenger, honestly, all I expect from it is that it’s an easy and reliable way of sending messages to my contacts. Anything else is questionable.
There are exactly 0 good reasons to use whatsapp anyways…
Yes, there are. You just have to live in one of the many many countries in the world where the overwhelming majority of the population uses whatsapp as their communication app. Like my country. Where not only friends and family, but also businesses and government entities use WhatsApp as their messaging app. I have at least a couple hundred reasons to use WhatsApp, including all my friends, all my family members, and all my clients at work. Do I like it? Not really. Do I have a choice? No. Just like I don’t have a choice about using Gmail, because that’s the email provider the company I work for decided to go with.
SMS works fine in any country.
And you can isolate your business requirements from your personal life.
I have 47 good reasons. Those 47 good reasons are the people in my contact list who have WhatsApp and use it as their primary method of communicating.
SMS works fine.
No it doesn’t. It’s slow, can’t send files, can’t send video or images, doesn’t have read receipts or away notifications. Why would I use an inferior tool?
Why do you even care anyway?
Meta directly opposes the collective interests and human rights of all working class people, so I think the better question is how come you don’t care.
There are many good reasons to not use WhatsApp. You’ve already correctly identified 47 of them.
I hardly ever come across a person more self-centered and a bigger fan of virtue signaling than you. You ignored literally everything we said, and your alternative was just “SMS”, even to the point of saying that the other commenter should stop talking to their 47 friends and family members.
Who the fuck needs anything related to AI on WhatsApp?
Lots of people. I need it because it’s how my clients at work prefer to communicate with me, also how all my family members and friends communicate.
It’s just going to help industry provide inferior services and make more profit. Like AI doctors.
deleted by creator
Replacing people is a good thing. It means fewer people do more work. It means progress. It means products and services will get cheaper and more available. The fact that people are being replaced means that AI actually has tremendous value for our society.
Great for people getting fired, or finding that the jobs they used to have that were middle class now pay lower-class wages or are obsolete. They will be so delighted at the progress despite their salaries, employment benefits, and opportunities falling.
And it’s so nice that AI is most concentrated in the hands of billionaires who are oh so generous with improving living standards of the commoners. Wonderful.
This is collateral damage of societal progress. This is a phenomenon as old as humanity. You can’t fight it. And it has brought us to where we are now. From cavemen to space explorers.
Oh hey, it’s the Nazi apologist. Big shock you don’t give a fuck about other people’s lives.
You sound really stupid when calling me a Nazi under this comment.
Almost every comment of yours is insulting in some way or the other. I’m starting to think you’re some kind of (Russian) troll and don’t care about contributing anything worthwhile to these threads.
deleted by creator
Which are separate things from people’s ability to financially support themselves.
People can have smartphones and tech the past didn’t have, but be increasingly worse off financially and unable to afford housing.
And you aren’t a space explorer.
I’m not arguing about whether innovation is cool. It is.
I however strongly disagree with your claim that people being replaced is good. That assumes society is being guided with altruism as a cornerstone of motivation to create some Star Trek future to free up people to pursue their interests, but that’s a fantasy. Innovation is simply innovation. It’s not about whether people’s lives will be improved. It doesn’t care.
The world can be the most technologically advanced it’s ever been, with space travel for the masses, and still be a totalitarian dystopia. People could be poorer than ever and become corpo slaves, but it would fit under the definition of societal progress because of innovation.
People being economically displaced by innovation that increases productivity is good, provided it happens at a reasonable pace and there is a sufficient social safety net to get those people back on their feet. Unfortunately, those safety nets don’t exist everywhere and have been under attack (in the West) for the past 40 years.
People can have smartphones and tech the past didn’t have, but be increasingly worse off financially and unable to afford housing.
You really have no idea what life was like just two or three generations ago. At least you now have toilet paper, water, can shower, and don’t need to starve to death when the pig in your backyard dies of some illness. Life was FUCKING HARD man. Affording a house is your problem? Really?
And you aren’t a space explorer.
The smoke detector, the microwave and birth control pills were invented around the time when we landed on the moon.
deleted by creator
Whoever the mod was that decided to delete my comment is a fool. This guy above is a Nazi apologist.
What makes you think that? You can’t just go around and insult people personally without elaborating on the reason.
Great for people getting fired, or finding that the jobs they used to have that were middle class now pay lower-class wages or are obsolete. They will be so delighted at the progress despite their salaries, employment benefits, and opportunities falling.
This shouldn’t come as a surprise. Everyone who’s surprised by that is not educated in either how the economy works or how societal progress works. There are always winners and losers, but society makes net-positive progress as a whole.
I have no empathy for people losing their jobs. Even if I lose my job, I accept it. It’s just life. Humanity is a really big machine of many gears. Some gears replace others to make the machine run more efficiently.
And it’s so nice that AI is most concentrated in the hands of billionaires who are oh so generous with improving living standards of the commoners. Wonderful.
This is just a sad excuse I’m hearing all the time. The moment society gets intense and change is about to happen, a perpetrator needs to be found. But most people don’t realize that the people at the top change all the time when the economy changes. They die as well. It’s a dynamic system. And there is no one perpetrator in a dynamic system. The only perpetrator is progress. And progress is like entropy. It always finds its way and you cannot stop it. Those who attempt to stop it instead of adapting to it will be crushed.
I have no empathy ~~for people losing their jobs~~
FTFY
I trust you’ve volunteered for it to replace you then. It being so beneficial to society, and all.
It means fewer people do more work.
And then those people no longer working… do what, exactly? Fewer well-paying jobs, same number of people, increasing costs. Math not working out here.
The fact that people are being replaced means that AI actually has tremendous value for our society.
Oh, it has value. Just not for society (it could, and that’s the sad part). For very specific people though, yeah, value. Just got to step on all the little people along the way, like we’ve always done, eh?
Yeah, rather than volunteering, it’s more likely you lack a basic characteristic of humanity some of us like to refer to as “empathy”. And if – giving you the benefit of the doubt – you’re just a troll… well, my statement stands.
I trust you’ve volunteered for it to replace you then. It being so beneficial to society, and all.
Yes. If I get replaced by something more efficient I accept that. I am no longer worth the position of my job. I will look for something else and try to find ways to apply some of my skillsets in other ways. I may do some further training and education, or just accept a lower paying job if that’s not possible.
And then those people no longer working… do what, exactly? Fewer well-paying jobs, same number of people, increasing costs. Math not working out here.
Can you elaborate? I don’t quite understand what you mean by that. The people who no longer work need to find something else. There will remain only a fraction that can never find another job again. And that fraction is offset by the increased productivity of society.
Oh, it has value. Just not for society (it could, and that’s the sad part). For very specific people though, yeah, value. Just got to step on all the little people along the way, like we’ve always done, eh?
Can you specify “specific”? What little people? If you use very vague terminology like that you should back it up with some arguments. I personally see no reason why AI would disadvantage working people any more than the sewing machine did back in the day. Besides, when you think about it you’ll find that defining the terms you used is actually quite difficult in a rapidly changing economy, when you don’t know to whom these terms might apply in the end.
I have a feeling you’re not actually thinking this through, or at least doing it on a very emotional level. This will not help you adapt to the changing world. The very opposite actually.
Replacing people is a good thing.
Yes, and no: https://www.npr.org/2025/02/11/g-s1-47352/why-economists-got-free-trade-with-china-so-wrong
Butlerian Jihad
When Miyazaki said the AI ghiblifier is an affront to art, I couldn’t help but think that before WW1, tanks were called an affront to horsemanship.
He said it was an affront to life itself.
My bad. At least that doesn’t change my point as tanks are that too.
The problem could be that, with all the advancements in technology just since 1970, all the medical advancements, all the added efficiencies at home and in the workplace, the immediate knowledge-availability of the internet, all the modern conveniences, and the ability to maintain distant relationships through social media, most of our lives haven’t really improved.
We are more rushed and harried than ever, life expectancy (in the US) has decreased, we’ve gone from 1 working adult in most families to 2 working adults (with more than 1 job each), income has gone down. Recreation has moved from wholesome outdoor activities to an obese population glued to various screens and gaming systems.
The “promise of the future” through technological advancement, has been a pretty big letdown. What’s AI going to bring? More loss of meaningful work? When will technology bring fewer working hours and more income - at the same time? When will technology solve hunger, famine, homelessness, mental health issues, and when will it start cleaning my freaking house and making me dinner?
When all the jobs are gone, how beneficial will our overlords be, when it comes to universal basic income? Most of the time, it seems that more bad comes from our advancements than good. It’s not that the advancements aren’t good, it’s that they’re immediately turned to wartime use considerations and profiteering for a very few.
I see it lowering people’s ability to focus and for analytical/critical thinking.
remember when tech companies did fun events with actual interesting things instead of spending three hours on some new stupid ai feature?
I do as a software engineer. The fad will collapse. Software engineering hiring will increase, but the pipeline of new engineers is running dry because no one wants to enter the career with companies hanging AI over everyone’s heads. Basic supply and demand says my skillset will become more valuable.
Someone will need to clean up the AI slop. I’ve already had similar positions where I was brought in to clean up code bases that failed after being outsourced.
AI is simply the next iteration. The problem is always the same: business doesn’t know what it really wants and needs, and has no ability to assess what has been delivered.
I too am a developer and I am sure you will agree that while the overall intelligence of models continues to rise, without a concerted focus on enhancing logic, the promise of AGI likely will remain elusive. AI cannot really develop without the logic being dramatically improved, yet logic is rather stagnant even in the latest reasoning models when it comes to coding at least.
I would argue that if we had much better logic with all other metrics being the same, we would have AGI now and developer jobs would be at risk. Given the lack of discussion about the logic gaps, I do not foresee AGI arriving anytime soon even with bigger models coming.
If we had AGI, the number of jobs that would be at risk would be enormous. But these LLMs aren’t it.
They are language models and until someone can replace that second L with Logic, no amount of layering is going to get us there.
Those layers are basically all the previous AI techniques laid over the top of an LLM but anyone that has a basic understanding of languages can tell you how illogical they are.
Agreed. I would add that not only would job loss be enormous, but many corporations are suddenly going to be competing with individuals armed with the same AI.
A completely random story, but I’m on the AI team at my company. However, I do infrastructure/application work rather than the AI stuff. First off, I had to convince my company to move our data scientist to this team. They had him doing DevOps work (complete mismanagement of resources). Also, the work I was doing was SO unsatisfying with AI. We weren’t tweaking any models. We were just shoving shit to ChatGPT. Now, it would be interesting if you’re doing RAG stuff or other things. However, I was “crafting” my prompt and I could not give a shit less about writing a perfect prompt. I’m typically used to coding what I want, but here I had to find out how to phrase it properly: “please don’t format it like X”. Like, I wasn’t using AI to write code; it was a service endpoint.
During lunch with the AI team, they keep saying things like “we only have 10 years left at most”. I was like, “but if you have AI spit out this code, if something goes wrong … don’t you need us to look into it?” they were like, “yeah but what if it can tell you exactly what the code is doing”. I’m like, “but who’s going to understand what it’s saying …?” “no, it can explain the type of problem to anyone”.
I said, I feel like I’m talking to a libertarian right now. Every response seems to be some solution that doesn’t exist.
AI can look at a bajillion examples of code and spit out its own derivative impersonation of that code.
AI isn’t good at doing a lot of other things software engineers actually do. It isn’t very good at attending meetings, gathering requirements, managing projects, writing documentation for highly-industry-specific products and features that have never existed before, working user tickets, etc.
deleted by creator
deleted by creator
I think AI will be useful, but like any nascent technology, it will have to be accessible to the public before the everyman adopts it. IMO, we are currently at the 2nd or 3rd stage in the picture below.
It’s not really a matter of opinion at this point. What is available has little if any benefit to anyone who isn’t trying to justify rock bottom wages or sweeping layoffs. Most Americans, and most people on earth, stand to lose far more than they gain from LLMs.
Everyone gains from progress. We’ve had the same discussion over and over again. When the first sewing machines came along, when the steam engine was invented, when the internet became a thing. Some people will lose their job every time progress is made. But being against progress for that reason is just stupid.
Everyone gains from progress.
It’s only true in the long-term. In the short-term (at least some) people do lose jobs, money, and stability unfortunately
That’s true. And that’s why so many people are frustrated. Because the majority is incredibly short-sighted unfortunately. Most people don’t even understand the basics of economics. If everyone was the ant in the anthill they’re supposed to be, we would not have half as many conflicts as we have.
What progress are you talking about?
We don’t know it yet. I can’t see the future and neither can you. But you cannot question the fact that AI has made a lot of things more efficient. And efficiency always brings progress in one way or the other.
Man it must be so cool going through life this retarded. Everything is fine, so many more things are probably interesting….lucky
Your comment doesn’t exactly testify to intelligence either.
You might want to elaborate with some arguments that actually relate to the comment you’re responding to.
And as someone who has extensively set up such systems on their home server… yeah it’s a great google home replacement, nothing more. It’s beyond useless on Powerautomate which I use (unwillingly) at my job. Copilot can’t even parse and match items from two lists. Despite my company trying its damn best to encourage “our own” (chatgpt enterprise) AI, nobody i have talked with has found a use.
You’re using it wrong then. These tools are so incredibly useful in software development and scientific work. Chatgpt has saved me countless hours. I’m using it every day. And every colleague I talk to agrees 100%.
I’ll admit my local model has given me some insight, but in researching more of something, I find the source it likely spat it out from. Now that’s helpful, but I feel as though if my normal search experience weren’t so polluted with AI-written regurgitation of the next result down, I would’ve found the nice primary source. One example was a code block that computes the inertial moment of each rotational axis of a body. You can try searching for sources and compare what it puts out.
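For context, here’s a rough sketch (Python, a toy point-mass model with made-up values) of the kind of snippet I mean; this isn’t the actual code it gave me, just an illustration of the task:

```python
# Toy sketch: principal moments of inertia for a rigid body
# approximated as a handful of point masses (illustrative values only).
import numpy as np

def inertia_tensor(masses, positions):
    """Inertia tensor about the origin: I = sum_i m_i * (|r_i|^2 * E - r_i r_i^T)."""
    I = np.zeros((3, 3))
    for m, r in zip(masses, positions):
        r = np.asarray(r, dtype=float)
        I += m * (r @ r * np.eye(3) - np.outer(r, r))
    return I

masses = [1.0, 2.0, 1.5]
positions = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 1.0)]

# The principal moments (the moment about each principal rotational axis)
# are the eigenvalues of the symmetric inertia tensor.
moments, axes = np.linalg.eigh(inertia_tensor(masses, positions))
print(moments)
```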
If you have more insight into what tools, especially ones I can run locally, would improve my impression, I would love to hear it. However, my opinion remains that AI has been a net negative on the internet as a whole (spam, bots, scams, etc.) thus far, and certainly has not and probably will not live up to the hype that has been forecast by their CEOs.
Also, if you can get access to powerautomate or at least generally know how it works: Copilot can only add nodes, seemingly in a general order you specify, but does not connect the dataflow between the nodes (the hardest part) whatsoever. Sometimes it will parse the dataflow connections and return what you were searching for (i.e. a specific formula used in a large dataflow), but not much of that really needs AI.
I think a lot depends on where “on the curve” you are working, too. If you’re out past the bleeding edge doing new stuff, ChatGPT is (obviously) going to be pretty useless. But, if you just want a particular method or tool that has been done (and published) many times before, yeah, it can help you find that pretty quickly.
I remember doing my Master’s thesis in 1989; it took me months of research and journals delivered via inter-library loan before I found mention of other projects doing essentially what I was doing. With today’s research landscape that multi-month delay should be compressed to a couple of hours, frequently less.
If you haven’t read Melancholy Elephants, it’s a great reference point for what we’re getting into with modern access to everything:
I’ve found it primarily useless to harmful in my software development, making debugging poorly-structured code the place where most of my time is spent. What sort of software and language do you use it for?
Then you must know something the rest of us don’t. I’ve found it marginally useful, but it leads me down useless rabbit holes more than it helps.
I’m about 50/50 between helpful results and “nope, that’s not it, either” out of the various AI tools I have used.
I think it very much depends on what you’re trying to do with it. As a student, or fresh-grad employee in a typical field, it’s probably much more helpful because you are working well trod ground.
As a PhD or other leading edge researcher, possibly in a field without a lot of publications, you’re screwed as far as the really inventive stuff goes, but… if you’ve read “Surely you’re joking, Mr. Feynman!” there’s a bit in there where the Manhattan project researchers (definitely breaking new ground at the time) needed basic stuff, like gears, for what they were doing. The gear catalogs of the day told them a lot about what they needed to know - per the text: if you’re making something that needs gears, pick your gears from the catalog but just avoid the largest and smallest of each family/table - they are there because the next size up or down is getting into some kind of problems engineering wise, so just stay away from the edges and you should have much more reliable results. That’s an engineer’s shortcut for how to use thousands, maybe millions, of man-years of prior gear research, development and engineering and get the desired results just by referencing a catalog.
My issue is that I’m fairly established in my career, so I mostly need to reference things, which LLMs do a poor job at. As in, I usually need links to official documentation, not examples of how to do a thing.
That’s an engineer’s shortcut for how to use thousands, maybe millions, of man-years of prior gear research, development and engineering and get the desired results just by referencing a catalog.
LLMs aren’t catalogs though, and they absolutely return different things for the same query. Search engines are more like catalogs, and they’re what I reach for most of the time.
LLMs are good if I want an intro to a subject I don’t know much about, and they help generate keywords to search for more specific information. I just don’t do that all that much anymore.
If you were too lazy to read three Google search results before, yes… AI is amazing in that it shows you something you ask for without making you dig as deep as you used to have to.
I rarely get a result from ChatGPT that I couldn’t have skimmed for myself in about twice to five times the time.
I frequently get results from ChatGPT that are just as useless as what I find reading through my first three Google results.
You’re using it wrong. My experience is different from yours. It produces transfer knowledge in the queries I ask it. Not even hundreds of Google searches can replace transfer knowledge.
You’re using it wrong.
Your use case is different from mine.
AI search is occasionally faster and easier than slogging through the source material that the AI was trained on. The source material for programming is pretty weak itself, so there’s an issue.
I think AI has a lot of untapped potential, and it’s going to be a VERY long time before people who don’t know how to ask it for what they want will be able to communicate what they want to an AI.
A lot of programming today gets value from the programmers guessing (correctly) what their employers really want, while ignoring the asks that are impractical / counterproductive.
The current drive behind AI is not progress, it’s locking knowledge behind a paywall.
As soon as one company perfects their AI, it will draw everyone to use it, marketing it as ‘time saver’ so you don’t have to do anything (including browsing the web, which is in decline even now). Just ask and you shall receive everything.
Once everyone gets hooked and there’s no competition left, they will own the population. News, purchase recommendations, learning, everything we do to work on our cognitive abilities will be sold through a single vendor.
Suddenly you own the minds of many people, who can’t think for themselves, or search for knowledge on their own… and that’s already happening.
And it’s not the progress I was hoping to see in my lifetime.
being against progress for that reason is just stupid.
Under the current economic model, being against progress is just self-preservation.
Yes, we could all benefit from AI in some glorious future that doesn’t see the AI displaced workers turned into toys for the rich, or forgotten refuse in slums.
We are ants in an anthill. Gears in a machine. Act like it. Stop thinking in classes “rich vs. poor” and conspiracies. When you become obsolete it’s nobody’s fault. This always comes from people who don’t understand how this world economy works.
Progress always comes and finds its way. You can never stop it. Like water in a river. Like entropy. Adapt early instead of desperately forcing against it.
We are ants in an anthill. Gears in a machine. Act like it.
See Woody Allen in AntZ (1998 movie)
Adapt early instead of desperately forcing against it.
There should be a balance. Already today’s world is desperately thrashing to “stay ahead of the curve” and putting outrageous investments into blind alleys that group-think believes is the “next big thing.”
The reality of automation could be an abundance of what we need, easily available to all, with surplus resources available for all to share and contribute to as they wish - within limits, of course.
It’s going to take some desperate forcing to get the resources distributed more widely than they currently are.
I’m not sure at this point. The sewing machine was just automated stitching. This is more similar to photography and landscape painters, only worse.
With creative AI, basically most of the visual art skills went to “I’m going to pay $100 for AI to do this instead of paying 20K and waiting 30 days for the project”. Soon doctors, therapists and teachers will be looking down the barrel. “Why pay 150 for one therapy session when I can have an AI friend for 20 a month?”
In the past you were able to train yourself to use a sewing machine or learn how to operate cameras and develop photos. Now I don’t even have any idea where it goes.
Machine stitching is objectively worse than hand stitching, but… it’s good enough and so much more efficient, so that’s how things are done now; it has become the norm.
Good enough is the keyword in a lot of things. That’s how fast fashion got this big.
Fast fashion (and everything else in the commercial marketplace) needs to start paying for their externalized costs - starting with landfill space, but also the pollution and possibly social supports that are going into the delivery of their products. But, then, people are stupid when it comes to fashion, they’ll pay all kinds of premiums if it makes them look like their friends.
AI is changing the landscape of our society. It’s only “destroying” society if that’s your definition of change.
But fact is, AI makes every aspect where it’s being used a lot more productive and easier. And that has to be a good thing in the long run. It always has.
Instead of holding against progress (which is impossible to do for long) you should embrace it and go from there.
Are you a trust fund kid or something
Are you a poor kid or something? Like what kind of question even is this? Why does it even need to be personal at all? This thread is not about me…
And no. I’m not. I stand to inherit nothing. I’m still a student. I’m not wealthy or anything like that.
Because you write like you think this can’t reach you, like you’re always going to have food and shelter no matter what happens.
If it reaches me, so be it. That’s life. Survival of the fittest. It’s my own responsibility to do the best in the environment I live in.
AI makes every aspect where it’s being used a lot more productive and easier.
AI makes every aspect where it’s being used well a lot more productive and easier.
AI used poorly makes it a lot easier to produce near worthless garbage, which effectively wastes the consumers’ time much more than any “productivity gained” on the producer side.
I use AI for programming questions, because it’s easier than digging for an hour through the official docs (if they exist) and frustrating trial and error.
However, quite often the AI answers are wrong: inserting nonsense code, using for instead of foreach, or trying to access variables that are not always set.
Yes it helps, but it’s usually only 60% right.
I used to do this, but not anymore. The amount of time I have to spend to verify it and correct it sometimes takes longer than if I were just to do it myself, and the paranoia that comes with it isn’t worth the time for me anymore.
The worry is deeper than just different changes in production. Not all progress is good; think of the broken branches of evolution.
The fact that we don’t teach kids how to write anymore already took a lot of childhood development, and later brain development and memory improvement, out of the running.
With AI, drawing, writing and music now became a single-sentence prompt. So why keep all those things? Why literally waste time developing a skill that you cannot sell? Sure, for fun…
And you are bringing up efficiency. Efficiency is just a buzzword that big companies are using to replace human labor. How much more efficient is a bank where you have 4 machines and one human teller? Or a fast food restaurant where the upfront employee just delivers the food to the counter and you can only place orders with a computer?
There is a point where our monkey brains can’t compete and won’t be able to exist without human-to-human stuff. But no need to worry: in 2 years we will not be able to differentiate between AI and humans, and we can just fake that connection for the rest of our efficient lives.
I’m not against improving stuff, but where this is focused won’t help us in the long run…
That’s the first interesting argument I’m reading here. Glad someone takes an honest stance in this discussion instead of just “rich vs poor”, “but people will lose jobs” and some random conspiracies in between.
To your comment: I agree with your sentiment that AI will make it challenging for new brains to evolve as solving difficult tasks is a problem we will encounter much less in the future. I actually never thought about it that way. I don’t have a solution for that. I think it will have two outcomes: humans will lose intelligence, or humans will develop different intelligence in a way that we don’t understand yet today.
And you are bringing up efficiency. Efficiency is just a buzzword that big companies are using to replace human labor. How much more efficient is a bank where you have 4 machines and one human teller? Or a fast food restaurant where the upfront employee just delivers the food to the counter and you can only place orders with a computer?
I disagree with that. Efficiency is a universal term. And humanity has always striven to do things more efficiently because it increases the likelihood of survival and quality of life in general. It’s a very natural thing and you cannot stop it. Much as you cannot stop entropy. Also, I think making things more efficient is good for society. Everything becomes easier, more available, and more fun. I can see a far future where humans no longer need to work and can do whatever they want with their day. Jobs will become hobbies, and family and friends are what you care about most.
I do not agree that efficiency is good.
If it were truly good, we would live like we keep pigs and chickens in meat farms. It would be more efficient to eat bug-based protein, and why waste time on eating instead of 100% meal-replacement foods?
Why keep people with disabilities or with different “colors of skin” (insert any other thing there) around, instead of only the most “efficient” ones?
The best way to think of it is Matrix-esque pods for humans, living in a simulation.
The only bad part of that picture is that we are not needed at all. And these are the dark points of unlimited change.
We all know capitalism is very bad for the majority. We know big money does not care about marginalized groups. These are all just numbers. And in the end, you and I are all numbers that can be cut. I’m probably not going to be alive, but I hope for a bright future for the upcoming generations. The problem is that I do see AI potentially darkening their skies.
Don’t get me wrong, AI can be a great tool if you learn how to use it. But the benefits are not going to be in the people’s hands. We need a general overhaul of society where profit is not the only thing that matters. Efficiency is good when you burn renewable wooden pellets and you want to get the most out of the chemical reaction. Efficiency is good when you are using the minimum amount of material to build something (with 3x oversized safety measures). But efficiency in AI and in social terms is going to be a problem.
Humans will not have worry-free lives in the current society. All the replaced labor keeps the earnings in the stockholders’ hands. But this went really far from AI. Sorry for the rant, but I do worry about the future.
I believe blindly accepting something before even attempting to look into the pitfalls is not a great idea. And we never see all the pitfalls coming.
AI has its place, but they need to stop trying to shoehorn it into anything and everything. It’s the new “internet of things” cramming of internet connectivity into shit that doesn’t need it.
deleted by creator
You’re saying the addition of Copilot into MS Paint is anything short of revolutionary? You heretic.
Just about every major advance in technology like this enhanced the power of the capitalists who owned it and took power away from the workers who were displaced.
I mean, it hasn’t thus far.