Hello, recent Reddit convert here and I’m loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.

One thing I can’t understand is the level of acrimony toward LLMs. I see things like “stochastic parrot”, “glorified autocomplete”, etc. If you need an example, the comments section for the post on Apple saying LLMs don’t reason is a doozy of angry people: https://infosec.pub/post/29574988

While I didn’t expect a community of vibecoders, I am genuinely curious about why LLMs provoke such an emotional response from this crowd. It’s a tool that has gone from interesting (GPT3) to terrifying (Veo 3) in a few years, and I am personally concerned about many of the safety/control issues ahead.

So I ask: what is the real reason this is such an emotional topic for you in particular? My personal guess is that the claims about replacing software engineers are the biggest issue, but help me understand.

  • wolf@lemmy.zip · 16 hours ago

    I work in software as a software engineer, but the least of my concerns is being replaced by an LLM any time soon.

    • I don’t hate LLMs. They are just a tool, and it makes no more sense to hate an LLM than it does to hate a rock

    • I hate the marketing and the hype for several reasons:

      • You use the term AI/LLM in the post’s title: there is nothing intelligent about LLMs if you understand how they work
      • The craziness about LLMs in the media, the press, and business brainwashes non-technical people into thinking that there is intelligence involved and that LLMs will keep getting better and solve the world’s problems (possible, but if you make an informed guess, the chances of that happening within the next decade are quite low)
      • All the LLM shit already happening: automatic translations on websites without even asking me whether stuff should be translated, job losses for translators, companies hoping to get rid of experienced technical people because of LLMs (and we will have to pick up the slack after the hype)
      • The lack of education in the population (and even among tech people) about how LLMs work, their limits, and their uses…

    LLMs are at the same time impressive (think of the jump to ChatGPT-4), a showcase for the ugliest forms of capitalism (CEOs learning that every time they say AI the stock price goes up 5%), helpful (generating short pieces of code, translating other languages), annoying (generated content), and even dangerous (companies with money can now literally and automatically flood the internet/news/media with more bullshit, faster).

    • doctorschlotkin@lemm.ee · 15 hours ago

      Everything you said is great except for the rock metaphor. It’s more akin to a gun in that it’s a tool made by man that has the capacity to do incredible damage, and it already has on a social level.

      Guns ain’t just lying around on the ground, nor are LLMs. Rocks, however, are; like, it’s practically their job.

      • BestBouclettes@jlai.lu · 14 hours ago (edited)

        LLMs and generative AI will do what social media did to us, but a thousand times worse. All that, plus the nightmarish capacity for pattern matching at an industrial scale. Inequality, repression, oppression, disinformation, propaganda, and corruption will skyrocket because of it. It’s genuinely terrifying.