You’ll get blindsided real quick. AI just keeps getting better. OpenAI are already saying they’ve moved past GPT for their next models. It won’t be 5 years before it can fix code longer than 400 lines, and not 20 before it can digest a specification and spit out working software. Said software might not be optimized or pretty, but those are things people can work on separately. Where you needed 20 software engineers, you’ll need 10, then 5, then 1-2.
You have more in common with the guy getting replaced today than you care to admit in your comment.
i didn’t downvote you; regardless, internet points don’t matter.
you’re not wrong, and i largely agree with what you’ve said, but i didn’t actually say a lot of the things your comment assumes.
the most efficient way i can describe what i mean is this:
LLMs (this is NOT AI) can, and will, replace more and more of us. however, there will never, ever be a time when no human is overseeing them, because we design software for humans (generally), not for machines. that requires innately human knowledge, assumptions, intuition, etc.
I disagree. When I was studying AI at college 20+ years ago we were also talking about expert systems, which are glorified if/else chains. Most experts in the field agree that those systems can also be considered AI (though not ML).
You may be thinking of AGI, or universal AI, which is different. I am a believer in the singularity (that a machine will one day be as creative and conscious as a human), but that’s a matter of opinion.
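For what it’s worth, the “glorified if/else chains” jab is pretty literal. Here’s a toy sketch of a rule-based expert system (the domain, function name, and rules are all made up for illustration); the “intelligence” is just hand-encoded condition/conclusion pairs:

```python
# Toy "expert system": domain knowledge hand-encoded as rules.
# Structurally it really is a chain of if/else checks over known facts.
def diagnose(symptoms: set) -> str:
    if {"fever", "cough"} <= symptoms:  # rule 1: fever AND cough -> flu
        return "flu"
    if "rash" in symptoms:              # rule 2: rash -> allergy
        return "allergy"
    return "unknown"                    # no rule fired

print(diagnose({"fever", "cough"}))  # prints "flu"
```

Real expert systems (MYCIN and the like) used proper inference engines over rule bases rather than literal if statements, but the knowledge was still hand-written by humans, which is the point.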
I didn’t downvote you
I was using “you” more toward the people downvoting me, not you directly. You can see the accounts that downvoted/upvoted, btw.
Edit: and I assumed the implication of your comment was that “people who code are safe”, which is the stretch I was responding to. Your comment was ambiguous either way.
Where you needed 20 software engineers, you’ll need 10, then 5, then 1-2.
It’s an open secret that this is already the case. I have seen projects that went on for decades and only required the engineering staff they had because corporate bureaucracy and risk aversion make everyone a fraction as effective as they could be, and, frankly, because a lot of ineffective morons got into software development for the $$$ they could make.
Unless AI somehow eliminates corporate overhead I don’t understand how it’ll possibly make commercial development monumentally easier.
Yeah, people think AI is what sci-fi movies sold them: hyper-intelligent, hyper-aware sentient beings capable of love and blah blah blah. We’ll get there, but corps don’t need that. In fact, that’s the part they don’t want. They need a mindless drone to replace the 80% of their workers doing brainless jobs.
I’ve worked office jobs at a few large corporations. I’ve noticed they like to lay off a department, see how long the other departments can get by splitting up the work, then when everything is on fire they open up hiring. But every now and then… they let go of a department and everything just keeps working. It’s a strategy that seems to work, unfortunately.
jesus christ you should be shoved into a locker
Wow. Thanks for the advice. I guess that’s just Lemmy showing me the door. Good luck with your community here.
Try not to let the bot hurt your feelings; it was trained on cunts ‘n’ assholes.
Yikes
Yeah, the problem there is that they don’t know their own staff well enough to know which people are doing the brainless jobs.