Hello, recent Reddit convert here and I’m loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.

One thing I can’t understand is the level of acrimony toward LLMs. I see things like “stochastic parrot”, “glorified autocomplete”, etc. If you need an example, the comments section for the post on Apple saying LLMs don’t reason is a doozy of angry people: https://infosec.pub/post/29574988

While I didn’t expect a community of vibecoders, I am genuinely curious about why LLMs provoke such an emotional response in this crowd. It’s a tool that has gone from interesting (GPT-3) to terrifying (Veo 3) in a few years, and I am personally concerned about many of the safety/control issues in the future.

So I ask: what is the real reason this is such an emotional topic for you in particular? My personal guess is that the claims about replacing software engineers are the biggest issue, but help me understand.

  • Brotha_Jaufrey@lemmy.world · 18 hours ago
    AI becoming much more widespread isn’t because it’s actually that interesting. It’s all manufactured, forcibly shoved into our faces. And given the negative things AI is capable of, I have an uneasy feeling about all this.

  • yesman@lemmy.world · 18 hours ago
    It’s a tool that has gone from interesting (GPT3) to terrifying (Veo 3)

    It’s ironic that you describe your impression of LLMs in emotional terms.

  • dhork@lemmy.world · 18 hours ago
    My biggest issue is with how AI is being marketed, particularly by Apple. Every single Apple Intelligence commercial is about a mediocre person who is not up to the task in front of them, but asks their iPhone for help and ends up skating by. Their families are happy, their co-workers are impressed, and they learn nothing about how to handle the task on their own the next time except that their phone bailed their lame ass out.

    It seems to be a reflection of our current political climate, though, where expertise is ignored, competence is scorned, and everyone is out for themselves.

  • Epzillon@lemmy.world · 19 hours ago
    Ethics and morality do it for me. It is insane to steal the works of millions and resell it in a black box.

    The quality is lacking. It literally hallucinates garbage information and lies, which scammers now weaponize (see slopsquatting).

    Extreme energy costs and environmental damage. We could supply millions of poor people with electricity, yet we decided a sloppy AI that can’t even count the letters in a word was a better use case.

    The AI developers themselves don’t fully understand how it works or why it responds with certain things, which proves there can’t be any guarantees for the quality or safety of AI responses yet.

    Laws, judicial systems and regulations are way behind; we don’t have laws that can properly handle the usage or integration of AI yet.

    Do note: LLMs as a technology are fascinating, and AI as a tool could become fantastic. But now is not the time.

  • Saleh@feddit.org · 19 hours ago
    I recently had an online event about using “AI” in my industry, construction.

    The presenter finished with “Now is not the time to wait, but to get doing, lest you want to stay behind.”

    She gave examples of some companies she had found that promised to help with “AI” in the process of designing constructions. When I asked her whether any of these companies were willing to take on the legal risk that the designs are up to code and actually sound from an engineering perspective, she had to admit that none were.

    This sums it up for me. You get sold a hype by people who don’t understand (or don’t tell) what it is and isn’t, to managers who don’t understand what it is and isn’t, over the heads of the people who actually understand what it is, or at least what it needs to be in order to be relevant. And those last people then get laid off or f*ed over in other ways, as they now have twice the work they had before: first they have to show management why the “AI” result is criminally wrong, and then do all the regular design work anyway.

    It is the same toxic dynamic as with every tech bro hype before. Except this time it looks good at first and is more difficult to show why it is not.

    This is especially dangerous when it comes to engineering.

  • wolf@lemmy.zip · 20 hours ago
    I work in software as a software engineer, but the least of my concerns is being replaced by an LLM any time soon.

    • I don’t hate LLMs: they are just a tool, and it makes no more sense to hate an LLM than it does to hate a rock

    • I hate the marketing and the hype for several reasons:

      • You use the term AI/LLM in the post’s title: there is nothing intelligent about LLMs once you understand how they work
      • The craziness about LLMs in the media, press and business brainwashes non-technical people into thinking that there is intelligence involved, and that LLMs will get better and better and solve the world’s problems (possible, but if you make an informed guess, the chances within the next decade are quite low)
      • All the LLM shit happening: automatic translations on websites without even asking me whether stuff should be translated, job losses for translators, companies hoping to get rid of experienced technical people because of LLMs (and we will have to pick up the slack after the hype)
      • The lack of education in the population (and even among tech people) about how LLMs work, their limits and their usages…

    LLMs are at the same time impressive (think of the jump to ChatGPT-4), a showcase of the ugliest forms of capitalism (CEOs learning that every time they say “AI” the stock price goes up 5%), helpful (generating short pieces of code, translating other languages), annoying (generated content), and even dangerous (companies with money can now literally and automatically flood the internet/news/media with more bullshit, faster).

    • doctorschlotkin@lemm.ee · 19 hours ago
      Everything you said is great except for the rock metaphor. It’s more akin to a gun in that it’s a tool made by man that has the capacity to do incredible damage and already has on a social level.

      Guns ain’t just lying around on the ground, nor are LLMs. Rocks, however, are — like, it’s practically their job.

      • BestBouclettes@jlai.lu · edited · 18 hours ago
        LLMs and generative AI will do what social media did to us, but a thousand times worse. All that plus the nightmarish capacity of pattern matching at an industrial scale. Inequalities, repression, oppression, disinformation, propaganda and corruption will skyrocket because of it. It’s genuinely terrifying.

  • blackn1ght@feddit.uk · 21 hours ago
    I feel like it’s more the sudden overnight hype about it than the technology itself. CEOs all around the world suddenly went “you all must use AI and shoehorn it into our product!”. People are fatigued from constantly hearing about it.

    But I think people, especially devs, don’t like big changes (me included), which causes anxiety and then backlash. LLMs have caused quite a big change with the way we go about our day jobs. It’s been such a big change that people are likely worried about what their career will look like in 5 or 10 years.

    Personally, I find it useful as a pairing buddy: it can generate some of the boilerplate bullshit and help you through problems that might otherwise have taken longer to understand by trawling through various sites.

    • taladar@sh.itjust.works · 20 hours ago
      It is really not a big change to the way we work unless you work in a language with very low expressiveness like Java or Go, and we have been able to generate the boilerplate in those languages automatically for decades.

      The main problem is that it is really not useful and does not produce genuinely beneficial results, yet everyone keeps telling us it does while being unable to point to a single GitHub PR or similar source as an example of a good piece of code created by AI without heavy manual post-processing. This also completely ignores that reading and fixing other people’s (or worse, an AI’s) code is orders of magnitude harder than writing the same code yourself.

  • Opinionhaver@feddit.uk · 21 hours ago
    I doubt most LLM haters have spent that much time thinking about it deeply. It’s part of the package deal you must subscribe to if you want to belong to the group. If you spend time in spaces where the haters are loud and everyone else stays quiet out of fear of backlash, it’s only natural to start feeling like everyone must think this way, so it must be true, and therefore I think this way too.

  • webghost0101@sopuli.xyz · 21 hours ago
    I believe Lemmy naturally attracts many people who are sick of enshittification, including the prevalence of AI slop.

    Those people make very good points and have very valid fears.

    However, don’t let the mob mentality get to you. Some people focus so much on the current-day flaws that they fail to see the real dangers, while others demonize it so hard they can no longer distinguish good applications from bad ones.

    There are still many people of all flavors of pro-AI here.

    My own take is that 99% of the AI I see and hear about is crap, because this is a transitional period of praise and disappointment. But I also see massive positive potential a few decades from now.

    And that negative potential too, of course, but a world run by flawed, corruptible humans already has so much negative potential that it’s a “maybe we win, or we definitely lose it all” situation.

  • cmnybo@discuss.tchncs.de · 21 hours ago
    My main issue is that LLMs are being used to flood the internet with AI slop. Almost every time I search for something, I have to go through a lot of results to find one with any usable information. The SEO spam before AI was bad enough, now it’s significantly worse.

  • MagicShel@lemmy.zip · 21 hours ago
    I think a lot of ground has been covered. It’s a useful technology that has been hyped to be way more than it is, and the really shitty part is a lot of companies are trying to throw away human workers for AI because they are that fucking stupid or that fucking greedy (or both).

    They will fail, for the most part, because AI is a tool your employees use, not a thing to foist onto your customers. Also, where does the next generation of senior developers come from if we replace junior developers with AI? Substitute in teachers, artists, copy editors, and others.

    Add to that people who are too fucking stupid to understand AI deciding it needs to be involved in intelligence, warfare, and police work.

    I frequently disagree with the sky-is-falling crowd. AI use by individuals, particularly local AI (though it’s not as capable), is democratizing. I moved from Windows to Linux two years ago, and I couldn’t have done that if I hadn’t had AI to help me troubleshoot a bunch of issues I had. I use it all the time at work to leverage my decades of experience in areas where I’d otherwise have to relearn a bunch of things from scratch. I wrote a Python program in a couple of hours, having never written a line before, because I knew what questions to ask.

    I’m very excited for a future with LLMs helping us out. Everyone is fixated on AI generation (image, voice, text), but that’s not where it’s great. What it excels at is very quickly giving feedback, though you have to be smart enough to know when it’s full of shit. That’s why vibe coding is a dead end: it’s cool that very simple things can be churned out by very inexperienced developers, but that has a ceiling. An experienced developer can also leverage it to do more, faster, at a higher level, but there is a ceiling there as well. Human input and knowledge never stop being essential.

    So welcome to Lemmy and discussion about AI. You have to be prepared for knee-jerk negativity, and the ubiquitous correction when you anthropomorphize AI as a shortcut to make your words easier to read. There isn’t usually too much overtly effusive praise here as that gets shut down really quickly, but there is good discussion to be had among enthusiasts.

    I find most of the things folks hate about AI aren’t actually the things I do with it, so it’s easy to not take the comments personally. I agree that ChatGPT-written text is slop and I don’t like it as writing. I agree AI art is soulless. I agree distributing AI-generated nudes of someone is unethical (I could give a shit what anyone jerks off to in private). I agree that in certain niches, AI is taking jobs, even if I think humans ultimately do the jobs better. I do disagree that AI is inherently theft, and I just don’t engage with comments to that effect. It’s unsettled law at this point, and while I find it highly transformative, that’s not a question anyone can answer in a legal sense; it’s all just strongly worded opinion.

    So discussions regarding AI are fraught, but there is plenty of good discourse.

    Enjoy Lemmy!

  • hendrik@palaver.p3x.de · edited · 21 hours ago
    You’ll find a different prevailing mood in different communities here on Lemmy. The people in the technology community (the example you gave) are fed up with talking about AI all day, each day. They’d like to talk about other technology at times and that skews the mood. At least that’s what I’ve heard some time ago… Go to a different community and discuss AI there and you’ll find it’s a different sentiment and audience there. (And in my opinion it’s the right thing to do anyway. Why discuss everything in this community, and not in the ones dedicated to the topic?)

  • muelltonne@feddit.org · 21 hours ago
    LLMs are an awesome technology. They have their flaws. The companies behind them are totally unethical. The hype is insane, and it is insane how many crappy AI integrations are popping up everywhere. The business models are in many cases not there. There is a real fear of job loss. But this tech is here to stay, and you can do awesome things with it. People totally misunderstand the whole energy-usage issue. People are abusing ChatGPT & Co. for things they were not built for, and OpenAI actively encourages them.

    But I really think that this community has gone too far in the direction of AI hate. Even if somebody posts a great and substantial article, it will get downvoted because “AI” is in the title. I really would like to discuss current AI here without people simply downvoting everything they do not like without having read the article.

  • Fizz@lemmy.nz · 22 hours ago
    The main reason is that they invoke an emotional response: they stole everything from us (humans) illegally and then used it to make a technology that aims to replace us. I don’t like that.

    The second part is that I think they are shit at what people are using them for. They seem to provide great answers, but they are far too often completely wrong and the user doesn’t know it. It’s also annoying that they are being shoved into everything.

    • ToastedRavioli@midwest.social · edited · 21 hours ago
      Google AI recently told me that capybaras and caimans have a symbiotic relationship in which the caimans protect them so they can eat their feces.