Hello, recent convert from Reddit here, and I’m loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.
One thing I can’t understand is the level of acrimony toward LLMs. I see things like “stochastic parrot”, “glorified autocomplete”, etc. If you need an example, the comment section on the post about Apple’s claim that LLMs don’t reason is a doozy of angry people: https://infosec.pub/post/29574988
While I didn’t expect a community of vibecoders, I am genuinely curious about why LLMs provoke such an emotional response from this crowd. It’s a tool that has gone from interesting (GPT-3) to terrifying (Veo 3) in a few years, and I am personally concerned about many of the safety and control issues ahead.
So I ask: what is the real reason this is such an emotional topic for you in particular? My personal guess is that the claims about replacing software engineers are the biggest issue, but help me understand.
I feel like it’s more the sudden overnight hype than the technology itself. CEOs all around the world suddenly went “you all must use AI and shoehorn it into our products!”. People are fatigued from hearing about it constantly.
But I think people, especially devs, don’t like big changes (me included), which causes anxiety and then backlash. LLMs have brought quite a big change to the way we go about our day jobs. It’s been such a big change that people are likely worried about what their careers will look like in 5 or 10 years.
Personally, I find it useful as a pairing buddy: it can generate some of the boilerplate bullshit and help you work through problems that might otherwise have taken longer to understand by trawling through various sites.
It is really not a big change to the way we work unless you work in a language with very low expressiveness, like Java or Go, and we have been able to generate the boilerplate in those languages automatically for decades.
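To make that concrete, here’s a minimal sketch of the kind of tool-driven boilerplate generation Go has shipped first-party since `go generate` landed in Go 1.4 (the `order` package and `Status` type are just illustrative names, not from any real project):

```go
// Package order is a hypothetical example: the String() method for
// Status is emitted by the stringer tool, not written by hand.
package order

//go:generate stringer -type=Status

// Status enumerates order states.
type Status int

const (
	Pending   Status = iota // order created, not yet shipped
	Shipped                 // handed to the carrier
	Delivered               // confirmed received
)
```

Running `go generate ./...` writes a `status_string.go` file containing the `String()` method for `Status`, so none of that mapping code is maintained by hand. Java has had the equivalent even longer via IDE code generation and annotation processors like Lombok.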
The main problem is that it really is not useful and does not produce genuinely beneficial results, yet everyone keeps telling us it does while being unable to point to a single GitHub PR or similar source as an example of good code created by AI without heavy manual post-processing. That also completely ignores that reading and fixing other people’s (or worse, an AI’s) code is orders of magnitude harder than writing the same code yourself.