Hello, recent Reddit convert here and I’m loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.

One thing I can’t understand is the level of acrimony toward LLMs. I see things like “stochastic parrot”, “glorified autocomplete”, etc. If you need an example, the comments section for the post on Apple saying LLMs don’t reason is a doozy, full of angry people: https://infosec.pub/post/29574988

While I didn’t expect a community of vibecoders, I am genuinely curious about why LLMs provoke such an emotional response from this crowd. It’s a tool that has gone from interesting (GPT-3) to terrifying (Veo 3) in a few years, and I am personally concerned about many of the safety/control issues in the future.

So I ask: what is the real reason this is such an emotional topic for you in particular? My personal guess is that the claims about replacing software engineers are the biggest issue, but help me understand.

  • Opinionhaver@feddit.uk · 22 days ago

    I doubt most LLM haters have spent that much time thinking about it deeply. It’s part of the package deal you must subscribe to if you want to belong to the group. If you spend time in spaces where the haters are loud and everyone else stays quiet out of fear of backlash, it’s only natural to start feeling like everyone must think this way - so it must be true, and therefore I think this way too.

  • corsicanguppy@lemmy.ca · 22 days ago

    Emotional? No. Rational.

    Use of AI is proving to be a bad idea for so many reasons that have been raised by people who study this kind of thing. There’s nothing I can tell you that has any more validity than the experts’ opinions. Go see.

  • Rose@piefed.social · 22 days ago

    I’m not opposed to AI research in general, or to LLMs in principle. This stuff has plenty of legitimate use cases.

    My criticism comes in three parts:

    1. Society is not equipped to deal with this stuff. Generative AI was really nice when everyone could immediately tell what was generated and what was not. But when it got better, it turned out people’s critical thinking skills went right out of the window. We as a society started using generative AI for utter bullshit, and it’s making normal life weirder in ways we could hardly imagine. It would do us all a great deal of good if we took a short break from this and asked what the hell we are even doing here, and whether some new laws would do any good.

    2. A lot of AI stuff purports to be openly accessible research software released as open source, and gets published in scientific journals. But it often comes with weird restrictions that fly in the face of the open source definition (like how some AI models are “open source” but have a cap on users, which makes them non-open by definition). Most importantly, this research is not easily replicable. It’s done by companies with ridiculous amounts of hardware, shifting petabytes of data which they refuse to reveal because it’s a trade secret. If it’s not replicable, its scientific value is a little bit in question.

    3. The AI business is rotten to the core. AI businesses like to pretend they’re altruistic innovators taking us to the Future. They’re a bunch of hypemen, slapping barely functioning components together to come up with Solutions to problems that aren’t even problems, usually to replace human workers in a way that everyone hates. Nothing must stand in their way - not copyright, not rules of user conduct, not the social or environmental impact they’re creating. If you try to apply even a little bit of reasonable regulation to this - “hey, maybe you should stop downloading our entire site every 5 minutes, we only update it, like, monthly, and, by the way, we never gave you permission to use this for AI training” - they immediately whinge about how you’re impeding the great march of human progress or some shit.

    And I’m not worried about AI replacing software engineers. That is ultimately an ancient problem - software engineers come up with something that helps them, biz bros say “this is so easy to use that I can just make my programs myself, looks like I don’t need you any more, you’re fired, bye”, and a year later, the biz bros come back and say “this software that I built is a pile of hellish garbage, please come back and fix this, I’ll pay triple”. This is just Visual Basic for Applications all over again.

  • ada@piefed.blahaj.zone · 22 days ago

    It’s a hugely disruptive technology that is harmful to the environment, being taken up and given center stage by a host of folk who don’t understand it.

    Like the industrial revolution, it has the chance to change the world in a massive way, but in doing so, it’s going to fuck over a lot of people and notch up greenhouse gas output. In a decade or two, we probably won’t remember what life was like without it, but lots of people are going to be out of jobs, have their income streams cut off, and have no alternatives available to them whilst that happens.

    And whilst all of that is going on, we’re being told that it’s the best, most amazing thing that we all need, and it’s being stuck into everything, including things that don’t benefit from the presence of an LLM, and sometimes where the presence of an LLM can be actively harmful.

    • wildncrazyguy138@fedia.io · 22 days ago

      I am not an AI hater; it helps me automate many of the more mundane tasks of my job, or the things I never have time for.

      I also feel that change management is a big factor with any paradigm-shifting technology, as it is with LLMs. I recall when some people said that both the PC and the internet were going to be just a fad.

      Nonetheless, all the reasons you’ve mentioned are the same ones that give me concern about AI.

  • Christopher@lemmy.grey.fail · 22 days ago

    I think a lot of it is anxiety: being replaced by AI, the continued enshittification of the services I loved, and the ever-present notion that AI is “the answer.” After a while, it gets old, and that anxiety mixes with annoyance – a perfect cocktail of animosity.

    And AI stole em dashes from me, but that’s a me-problem.

  • Fontasia@feddit.nl · 22 days ago

    I know there are people who could articulate it better than I can, but my logic goes like this:

    • Loss of critical thinking skills: This doesn’t just apply to someone working on a software project they don’t really care about. Lots of coders start in their bedroom with Notepad and some curiosity. If Copilot interrupts you with mediocre but working code, you never get the chance to learn ways of solving a problem for yourself.
    • Style: Code spat out by AI is in a very specific style, and no amount of prompt modifiers will come up with the kind of code someone writes when really going for speed or low memory usage - nearly impossible to read, but solving a very specific case (see the sketch after this list).
    • If everyone is a coder, no one is a coder: If everyone can claim to be a coder on paper, it will be harder to find good coders. Sure, you can make every applicant do FizzBuzz or a basic sort, but that doesn’t give them a good opportunity to show they can actually solve a problem. It will discourage people from becoming coders in the first place. A lot of companies can actually get by with vibe coders (at least for a while), and that dries up the sort of junior positions people need in order to get better and be promoted.
    • When the code breaks, it takes a lot longer to understand and fix when you don’t know how any of it works - especially when you didn’t even bother designing or completing a test plan because Cursor developed one, which all came back green, pushed it during a convenient downtime, and archived all the old versions in its own internal logical structure that can’t easily be undone.
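
    As a sketch of what I mean by that style, here’s the classic example - the fast inverse square root from Quake III, lightly rewritten (the function name and the union-based type punning are my own phrasing; the magic constant is the famous one). It’s blazingly fast for its one narrow case, nearly unreadable, and exactly the kind of thing no prompt modifier will coax out of a code assistant:

        #include <stdint.h>

        /* Quake III-style fast inverse square root: hand-tuned,
         * nearly unreadable code chasing raw speed for one very
         * specific case (32-bit IEEE 754 floats). */
        float q_rsqrt(float number) {
            union { float f; uint32_t i; } conv = { .f = number };
            conv.i = 0x5f3759df - (conv.i >> 1);                /* magic-constant bit hack */
            conv.f *= 1.5f - (number * 0.5f * conv.f * conv.f); /* one Newton-Raphson step */
            return conv.f;
        }
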
    • mbtrhcs@feddit.org · 22 days ago

      I’m an empirical researcher in software engineering, and all of the points you’re making are supported by recent papers in SE and/or education. We are also seeing a strong shift in the behavior of our students and a lack of ability to explain or justify their “own” work.

  • Brotha_Jaufrey@lemmy.world · 22 days ago

    AI becoming much more widespread isn’t because it’s actually that interesting. It’s all manufactured, forcibly shoved into our faces. And given the negative things AI is capable of, I have an uneasy feeling about all this.

  • yesman@lemmy.world · 22 days ago

    It’s a tool that has gone from interesting (GPT3) to terrifying (Veo 3)

    It’s ironic that you describe your impression of LLMs in emotional terms.

  • dhork@lemmy.world · 22 days ago

    My biggest issue is with how AI is being marketed, particularly by Apple. Every single Apple Intelligence commercial is about a mediocre person who is not up to the task in front of them, but asks their iPhone for help and ends up skating by. Their families are happy, their co-workers are impressed, and they learn nothing about how to handle the task on their own next time, except that their phone can bail their lame ass out.

    It seems to be a reflection of our current political climate, though, where expertise is ignored, competence is scorned, and everyone is out for themselves.

  • latenightnoir@lemmy.blahaj.zone · 22 days ago

    To me, it’s not the tech itself, it’s the fact that it’s being pushed as something it most definitely isn’t. They’re grifting hard to stuff an incomplete feature down everyone’s throats, while using it to datamine the everloving spit out of us.

    Truth be told, I’m genuinely excited about the concept of AGI and the potential of what we’re seeing now. I’m also one who believes AGI will ultimately be our progeny and should be treated as such, as a being in itself; and while we aren’t capable of creating that yet, we should still keep it in mind and mould our R&D around that principle. So, in addition to being disgusted by the current-day grift, I’m also deeply disappointed to see these people behaving this way - like madmen and cultists.

    The people who own/drive the development of AI/LLM/what-have-you (the main ones, at least) are the kind of people who would cause the AI apocalypse. That’s my problem.

    • CosmoNova@lemmy.world · 22 days ago

      I’ll just say I won’t grant any machine even the most basic human rights until every last person on the planet has access to enough clean water, food, shelter, adequate education, state-of-the-art health care, peace, democracy, and enough freedom not to limit the freedom of others. That’s the lowest bar, and if I can think of other essential things every person on the planet needs, I’ll add them.

      I don’t want to live in a world where we treat machines like celebrities while we don’t look after our own. That would be an express ticket to disaster, like we’ve seen in many science fiction novels before.

      Research toward AGI for AGI’s sake should be strictly prohibited until the tech bros figure out how to feed the planet, so to speak. Let’s give them an incentive to use their disruptive powers for something good before they play god.

      • latenightnoir@lemmy.blahaj.zone · 22 days ago

        While I disagree with your hardline stance on prioritisation of rights (I believe any conscious/sentient being should be treated as such at all times, which implies full rights and freedoms), I do agree that we should learn to take care of ourselves before we take on the incomprehensible responsibility of developing AGI, yes.

    • MalReynolds@aussie.zone · 22 days ago

      Agree - the last people in the world who should be making AGI are the ones doing it. Rabid techbro nazi capitalist fucktards who feel slighted they missed out on (absolute, not wage) slaves and want to make some. Do you want terminators? Because that’s how you get terminators. Something with so much positive potential, which is also an existential threat, needs to be treated with far more respect.

      • latenightnoir@lemmy.blahaj.zone · 22 days ago

        Said it better than I did, this is exactly it!

        Right now, it’s like watching everyone cheer on as the obvious Villain is developing nuclear weapons.

  • thanksforallthefish@literature.cafe · 22 days ago

    Hello, recent Reddit convert here and I’m loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.

    I am truly impressed that you managed to replace a desktop operating system with a mobile OS that doesn’t even come in an x86 variant (LineageOS, that is; I’m aware Android has been ported).

    I smell bovine faeces. Or are you, in fact, an LLM?

        • CosmoNova@lemmy.world · 22 days ago

          Why won’t they tell us what they replaced Windows with, or what they installed Lineage on, though? The more people speculate, the more questions I have.

    • hanke@feddit.nu · 22 days ago

      He dumped Windows (for Linux) and installed LineageOS (on his phone).

      OP likely has two devices.

    • chrisbtoo@lemmy.ca · 22 days ago

      Calm down. They never said anything about the two things happening on the same device.

  • Dekkia@this.doesnotcut.it · 22 days ago

    I personally just find it annoying how it’s shoehorned into everything, regardless of whether it makes sense for it to be there or not, without the option to turn it off.

    I also don’t find it helpful for most things I do.

  • cabbage@piefed.social · 22 days ago

    We’re outsourcing thinking to a bullshit generator controlled by mostly American mega-corporations who have repeatedly demonstrated that they want to do us harm, burning through scarce resources and leaving creative humans robbed and unemployed in the process.

    What’s not to hate?

  • Epzillon@lemmy.world · 22 days ago

    Ethics and morality do it for me. It is insane to steal the works of millions and resell them in a black box.

    The quality is lacking. It literally hallucinates garbage information and lies, which scammers now weaponize (see Slopsquatting).

    Extreme energy costs and environmental damage. We could supply millions of poor people with electricity, yet we decided a sloppy AI which can’t even count the letters in a word was a better use case.

    The AI developers themselves don’t fully understand how it works or why it responds with certain things, which proves there can’t be any guarantees for the quality or safety of AI responses yet.

    Laws, judicial systems, and regulations are way behind; we don’t yet have laws that can properly handle the usage or integration of AI.

    Do note: LLMs as a technology are fascinating, and AI as a tool could become fantastic. But now is not the time.