Hello, recent Reddit convert here, and I'm loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.
One thing I can't understand is the level of acrimony toward LLMs. I see things like "stochastic parrot", "glorified autocomplete", etc. If you need an example, the comments section on the post about Apple's claim that LLMs don't reason is a doozy of angry people: https://infosec.pub/post/29574988
While I didn't expect a community of vibecoders, I am genuinely curious why LLMs provoke such an emotional response from this crowd. It's a tool that has gone from interesting (GPT-3) to terrifying (Veo 3) in a few years, and I am personally concerned about many of the safety/control issues ahead.
So I ask: what is the real reason this is such an emotional topic for you in particular? My personal guess is that the claim that LLMs will replace software engineers is the biggest issue, but help me understand.
Ethics and morality do it for me. It is insane to steal the works of millions of people and re-sell them in a black box.
The quality is lacking. They literally hallucinate garbage information and outright lies, which scammers now weaponize (see Slopsquatting, and the sketch after this list).
Extreme energy costs and environmental damage. We could supply millions of poor people with electricity, yet we decided that a sloppy AI which can't even count the letters in a word was a better use case.
The AI developers themselves don't fully understand how it works or why it responds the way it does, which proves there can't be any guarantees yet for the quality or safety of AI responses.
Laws, judicial systems and regulations are way behind; we don't have laws that can properly handle the usage or integration of AI yet.
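To make the Slopsquatting point concrete: LLMs routinely invent plausible-looking package names, and attackers register those names on PyPI or npm with malware so that copy-pasted install commands pull in their code. Here's a minimal Python sketch of a sanity check using only PyPI's public JSON API; the "hallucinated" package name in it is made up for illustration, and note that mere existence on PyPI is no guarantee of safety, since squatters register hallucinated names precisely so they do exist:

```python
# Minimal sketch: check whether a package name an LLM suggested is even
# registered on PyPI before installing it. A missing name is a red flag
# that the LLM hallucinated it (and that a squatter could claim it).
import json
import urllib.error
import urllib.request

def pypi_package_exists(name: str) -> bool:
    """Return True if `name` is a registered project on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"  # PyPI's public JSON API
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)  # metadata parses -> project exists
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no such project on PyPI
            return False
        raise

# "reqeusts-toolkit-pro" is a made-up, hallucination-style name for
# illustration; "requests" is a real, widely used package.
for pkg in ["requests", "reqeusts-toolkit-pro"]:
    verdict = "exists on PyPI" if pypi_package_exists(pkg) else "NOT registered"
    print(f"{pkg}: {verdict}")
```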
Do note: LLMs as a technology are fascinating. AI as a tool could become fantastic. But now is not the time.