• invertedspear@lemm.ee · 4 months ago

      Exactly, the jr dev who could write anything useful has always been a rare gem. Boot camps cranking out jr devs by the dozen every couple of months didn’t help the issue. Talent needs cultivation, and since every tech company has been cutting back lately, they stopped cultivating and started sniping talent from each other. Not hard, given the amount of layoffs lately. So now we have jr devs either unable to find a place that will refine them, or getting hired by people who just want to save money and don’t know you need a senior or two to wrangle them. Then ChatGPT comes along and gives the illusion of sr dev advice, telling them how to write the wrong thing better, with no one to teach them which tool is the right one for the job.

      Our industry is in kind of a fucked state and will be for a while. Get good at cleaning up the messes that will be left behind and that will keep you fed for the next decade.

      • Evotech@lemmy.world · 4 months ago

        Not that this is unique to the field; a junior anything usually needs at least 6 months to get to a productive level.

        • invertedspear@lemm.ee · 4 months ago

          Kind of wish we went with more tradesmen-like titles. Apprentice, journeyman, master. Master software developer sounds like we have honed our craft. Junior/senior just seems like a length of time.

          • TXL@sopuli.xyz · 4 months ago

            It generally is a length of time. Your title depends on the years on the job.

  • rottingleaf@lemmy.world · 4 months ago

    One can classify the most popular attitudes toward progress in at least four ways:

    The dumbest, most clueless jerks think it’s replacing something known with something known and better. Progress enthusiasts who don’t know a single thing about the areas they’re enthusiastic about usually land here.

    The careful but somewhat intellectually limited think it’s replacing something known with something unknown. They can sour the mood, but are generally safe for those around them.

    The idealistic idiots think it’s replacing something unknown with something known; these are the “order bringers” and revolutionaries. Everybody knows how revolutionaries do things, and whoever doesn’t can look at Musk and DOGE.

    The only sane kind think it’s replacing something unknown with something unknown. That is, when you replace one thing with another, you break more than just the things you could see and had listed for replacement. Because nature doesn’t fscking care what you want to see.

    • actaastron@reddthat.com · 4 months ago

      I honestly don’t know how anyone has managed to code anything production-worthy predominantly using AI.

      Maybe it’s the way I’m using AI, and to be honest I’ve only used ChatGPT so far, but if I ask it to generate a bit of code, then ask it to build on that and do the next thing, by about the third or fourth iteration it has forgotten half of what we talked about and left out bits of the code.

      On a number of occasions it has given me a solution, and when I question its accuracy and point out why a bit of it probably won’t work, I just get “oh yes, let me adjust that for you.”

      Maybe I’m doing AI wrong, I don’t know, but quite frankly I’ll stick with Stack Overflow, thanks.

      • jonne@infosec.pub · 4 months ago

        It’s only useful for stuff that’s been done a million times before, in my experience. As soon as you ask for anything outside of that, it just starts hallucinating.

        It’s basically like how junior devs used to go to Stack Overflow, grab whatever code looked like it would work, and plop it into the codebase.

        • AntY@lemmy.world · 4 months ago

          This is exactly right. AI can only interpolate between data points. I used to write code for research papers, and ChatGPT couldn’t understand a thing I asked of it.

        • Jesus_666@lemmy.world · 4 months ago

          I remember talking to someone about where LLMs are and aren’t useful. I pointed out that LLMs would be absolutely worthless for me as my work mostly consists of interacting with company-internal APIs, which the LLM obviously hasn’t been trained on.

          The other person insisted that that is exactly what LLMs are great at. They wouldn’t explain how exactly the LLM was supposed to know how my company’s internal software, which is a trade secret, is structured.

          But hey, I figured I’d give it a go. So I fired up a local Llama 3.1 instance and asked it how to set up a local copy of ASDIS, one such internal system (name and details changed to protect the innocent). And Llama did give me instructions… on how to write the American States Data Information System, a Python frontend for a single MySQL table containing basic information about the member states of the USA.

          Oddly enough, that’s not what my company’s ASDIS is. It’s almost as if the LLM had no idea what I was talking about. Words fail to express my surprise at this turn of events.

          • jonne@infosec.pub · 4 months ago

            Yeah, and the way it will confidently give you a wrong answer instead of either asking for more information or saying it just doesn’t know is equally annoying.

            • Jesus_666@lemmy.world · 4 months ago

              Because giving answers is not an LLM’s job. An LLM’s job is to generate text that looks like an answer. We then try to coax that framework into generating correct answers as often as possible, with mixed results.

      • rottingleaf@lemmy.world · 4 months ago

        Frankly, I’ve only used those things to generate pictures and the occasional hello world in a few languages, which didn’t work and didn’t seem to make sense. That was long enough ago.

        Also, I have ASD, so it’s hard enough for me to make consistent, clear sense of something small. Machine-generated junk meant to “give me ideas” is the last thing I need; my thought process is different.

      • Jackinopolis@sh.itjust.works · 4 months ago

        You have to aggressively purge the current chat and give it more abstract references for context. With enough context it can rewrite some logic loops, maybe start a design pattern. You just have to aggressively check the changes.

  • froggycar360@slrpnk.net · 4 months ago

    I could barely code when I landed my job, and now I’m a senior dev. It’s like saying a plumber’s apprentice can’t plumb - you learn on the job.

      • froggycar360@slrpnk.net · 4 months ago

        That’s true, it can only get you so far. I’m sure we all started by Frankenstein-ing Stack Overflow answers together until we had to actually learn the “why”.

      • Mr_Dr_Oink@lemmy.world · 4 months ago

        100% agree.

        I’m not saying there’s no place for AI as an aid to help you find the solution, but I don’t think it’s going to help you learn if you just ask it for the answers.

        For example, yesterday I was trying to find out why a policy map on a Cisco switch wasn’t re-activating after my RADIUS server came back up. Instead of throwing my map at the AI and asking what’s wrong, I asked it for details about how a policy map is activated, what mechanism the switch uses to determine the status of the RADIUS server, and how a policy map can leverage that to kick into gear again.

        Ultimately, the AI didn’t have the answer, but it put me on the right track, and I believe I solved the issue. It seems the switch didn’t count me adding the RADIUS server back to the running config as a server coming back alive, but if I put in a fake server and then altered its IP to a real server, the switch saw this as the server coming back alive and authentication started again.

        In fact, some of the info it gave me along the way was wrong, like when it tried to give me CLI commands that I already knew wouldn’t work because I was using the newer C3PL AAA commands, but it was mixing them up with the legacy commands and combining the two. Even after I told it that was a made-up command and why it wouldn’t work, it still tried to give me the same command again later.

        So I don’t think it’s a good tool for producing actual work, but it can be a good tool to help us learn, if it is used that way: ask “why” and “how” instead of “what.”

  • Phoenicianpirate@lemm.ee · 4 months ago

    I could have been a junior dev that could code. I learned to do it before ChatGPT. I just never got the job.

  • Paulo Laureano@lemmy.plaureano.nohost.me · 4 months ago

    Of course they don’t. Hiring junior devs for their hard skills is a dumb proposition. Hire for their soft skills, intellectual curiosity, and willingness to work hard and learn. There is no substitute for good training and experience.

  • zerofk@lemm.ee · 4 months ago

    As someone who has interviewed candidates for developer jobs for over a decade: this sounds like “in my day everything was better”.

    Yes, there are plenty of candidates who can’t explain the piece of code they copied from Copilot. But guess what? A few years ago there were plenty of candidates who couldn’t explain the code they copied from StackOverflow. And before that, there were those who failed at the basic programming test we gave them.

    We don’t hire those people. We hire the ones who use the tools at their disposal and also show they understand what they’re doing. The tools change, the requirements do not.

    • filister@lemmy.world · 4 months ago

      But how do you find those people based solely on a short interview, where they can use AI tools to perform better if the interview isn’t held in person?

      And mind you, SO was better because you needed to read a lot of answers there and try to understand what would work in your particular case. Learn how to ask smartly. Do your homework and explain the question properly so as not to get gaslit, etc. All of that is now gone.

    • uranibaba@lemmy.world · 4 months ago

      I think LLMs just made it easier for people who want to know but not learn. Reading all those posts all over the internet required you to understand what you pasted together if you wanted it to work (not always, but the bar was higher). With ChatGPT, you can just throw errors at it until you have the code you want.

      While the requirements never changed, the tools sure did, and they made it a lot easier to not understand.

      • major_jellyfish@lemmy.ca · 4 months ago

        Have you actually found that to be the case in anything complex though? I find it just forgets parts to generate something, stuck in an infuriating loop of fucking up.

        It took us around 2 hours to run our coding questions through ChatGPT and see what it gives. And it gives complete shit for most of them. Only one or two questions did we have to replace.

        If a company cannot invest even a day to go through their hiring process and AI-proof it, then they have a shitty hiring process. And with a shitty hiring process, you get shitty devs.

        And then you get people like OP, blaming the generation when, if anything, it’s them and their company to blame… for falling behind. Got to keep up, folks. Our field moves fast.

        • xavier666@lemm.ee · 4 months ago

          My rule of thumb: use ChatGPT only for questions whose answers I already know.

          Otherwise it hallucinates and tries hard to convince me of a wrong answer.

        • uranibaba@lemmy.world · 4 months ago

          I find ChatGPT is sometimes excellent at giving me a direction, if not outright solving the problem, when I paste in errors I’m too lazy to search for. I say sometimes because other times it is just dead wrong.

          All the code I ask ChatGPT to write is along the lines of “I have these values that I need to verify; write code that verifies that nothing is empty and saves an error message for each one that is,” and then I work with the code it gives me from there. I never take it at face value.
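
          To make that concrete, here’s a minimal sketch of the kind of code such a prompt tends to produce (the field names are made up for illustration):

          ```python
          # Hypothetical example: check that required values are non-empty
          # and collect an error message for each one that is missing.
          def validate(values: dict) -> list[str]:
              errors = []
              for name, value in values.items():
                  if value is None or (isinstance(value, str) and not value.strip()):
                      errors.append(f"'{name}' must not be empty")
              return errors

          # Made-up field names, purely illustrative
          errors = validate({"username": "alice", "email": "", "phone": None})
          print(errors)  # ["'email' must not be empty", "'phone' must not be empty"]
          ```

          Code this simple is easy to audit line by line, which is the only reason the workflow holds up.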

          Have you actually found that to be the case in anything complex though?

          I think using LLMs to create complex code is the wrong use of the tool. In my opinion, they are better at providing structure to work from than at writing the code itself (unless it’s something simple, as above).

          If a company cannot invest even a day to go through their hiring process and AI-proof it, then they have a shitty hiring process. And with a shitty hiring process, you get shitty devs.

          I agree with you on that.

  • socsa@piefed.social · 4 months ago

    This isn’t a new thing. Dilution of “programmer” and “computer” education has been going on for a long time. Everyone with an IT certificate is an engineer these days.

    For millennials, a “dev” was pretty much anyone with reasonable intelligence who wanted to write code - it is actually very easy to learn the basics and fake your way in with no formal education. Now we are moving on even from that, to where a “dev” is anyone who can use an AI. “Prompt engineering.”

  • Oniononon@sopuli.xyz · 4 months ago

    I’m in uni learning to code right now, but since I’m a boomer I only spin up the oligarchs’ bots every once in a while to check an issue I would otherwise have to ask the teacher about. It’s far more important for me to understand fundies than it is to get a working program. But that is only because I’ve gotten good at many other skills and realize that fundies are fundamental for a reason.

  • barsoap@lemm.ee · 4 months ago

    This is not in any way a new phenomenon. There’s a reason FizzBuzz was invented; there’s been a steady stream of CS graduates who can’t code their way out of a wet paper bag ever since the profession hit the mainstream.

    Actually fucking interview your candidates, especially if you’re sourcing them from a country with for-profit education and/or a rote-learning culture, both of which suck at failing people who didn’t learn anything. No BS coding tests; go for “explain this code to me” kind of stuff. Worst case, they can understand code but suck at producing it - that’s still prime QA material right there.

    • sugar_in_your_tea@sh.itjust.works · 4 months ago

      We do two “code challenges”:

      1. Very simple, many are done in 5 min; this just weeds out the incompetent applicants, and 90% of the code is written (i.e. simulate working in an existing codebase)
      2. Ambiguous requirements, the point is to ask questions, and we actually have different branches depending on assumptions they made (to challenge their assumptions); i.e. simulate building a solution with product team

      The first is in the first round, the second is in the technical interview. Neither are difficult, and we provide any equations they’ll need.

      It’s much more important that they can reason about requirements than code something quick, because life won’t give you firm requirements, and we don’t want a ton of back and forth with product team if we can avoid it, so we need to catch most of that at the start.

      In short, we’re looking for actual software engineers, not code monkeys.

      • barsoap@lemm.ee · 4 months ago

        Those are good approaches. I would note that the “90% is written” one mostly tests code comprehension, not writing (as in actually architecting something), and that handling ambiguous requirements is something you should, IMO, learn as a junior; it’s not a prerequisite. It needs a lot of experience, and often domain knowledge new candidates have no chance of having. But then, throwing such stuff at them and judging them by their approach, not end result, should be fair.

        The main question I ask myself, in general, is “can this person look at code from different angles” - somewhat like rotating a cube in your mind’s eye, if you get what I mean. It might even be that they’re no good at it yet but demonstrate the ability when talking about coffee making: people who don’t get lost when you point out that cash registers sharing a common queue get better overall latency than cash registers with individual queues. Just as a carpenter would ask someone “do you like working with your hands”, the question here is “do you like to rotate implication structures in your mind”.
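
        That register example is easy to sanity-check in code, for what it’s worth. Here’s a toy simulation of my own (not part of the parent comment), assuming Poisson arrivals, exponential service times, and blind register choice for the individual-queue case:

        ```python
        import random

        def mean_wait(registers: int, shared: bool, customers: int = 100_000) -> float:
            """Toy checkout simulation: returns the mean customer waiting time."""
            rng = random.Random(42)
            free_at = [0.0] * registers   # when each register next becomes free
            now, total_wait = 0.0, 0.0
            for _ in range(customers):
                now += rng.expovariate(4.0)           # ~4 arrivals per time unit
                service = rng.expovariate(1.0 / 0.9)  # mean service time 0.9
                if shared:
                    # one common line: next customer takes whichever register frees first
                    reg = min(range(registers), key=lambda r: free_at[r])
                else:
                    # individual lines, chosen blindly on arrival
                    reg = rng.randrange(registers)
                start = max(now, free_at[reg])
                total_wait += start - now
                free_at[reg] = start + service
            return total_wait / customers

        print("common queue:     ", mean_wait(4, shared=True))
        print("individual queues:", mean_wait(4, shared=False))
        ```

        The common queue wins because no register ever sits idle while another one has a backlog - exactly the kind of implication structure a candidate should be able to rotate.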

        • sugar_in_your_tea@sh.itjust.works · 4 months ago

          judging them by their approach, not end result, should be fair.

          Yup, that’s the approach. It’s okay if they don’t finish, I want to know how they approach the problem. We absolutely adjust our decision based on the role.

          If they can extend existing code and design a new system (with minimal new code) and ask the right questions, we can work with them.

          • r3g3n3x@lemmy.world · 4 months ago

            I’m just getting started on my third attempt at changing careers from sysadmin work over to coding (starting with The Odin Project this time). I’m not sure the questions you ask, while interesting, will be covered. Can you point to some resources or subject matter to research to get exposure to these questions? The non-coding coding questions are interesting to me, and I’m curious whether my experience will help or whether it’s something I need to account for while learning.

            • sugar_in_your_tea@sh.itjust.works · 4 months ago

              We stay away from riddles, and instead focus on CS concepts. We’ll rephrase to avoid jargon if you don’t have a formal education, or it has been a while. Here are a few categories:

              • OOP concepts like SOLID
              • concurrency vs. parallelism, approaches to each (generators, threads, async, etc.), and their tradeoffs
              • typing (e.g. is Python strongly or weakly typed? Java? JavaScript?), and the practical implications
              • functional programming concepts like closures, partial application, etc. (see the sketch below)
              • SQL knowledge
              • types of tests, and the approaches/goals for each

              And some practical details like:

              • major implementation details of our stack (Python’s GIL, browser features like service workers, etc)
              • git and docker experience
              • build systems and other dev tools

              That covers most of it. We don’t expect every candidate to know everything, we just want to get an idea of the breadth and depth of their knowledge.
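
              For what it’s worth, here’s a minimal Python sketch (my own illustration, not our actual interview material) of the closures and partial application bullet mentioned above:

              ```python
              from functools import partial

              # A closure: make_counter's local state lives on in the returned function.
              def make_counter(start: int = 0):
                  count = start
                  def increment(step: int = 1) -> int:
                      nonlocal count
                      count += step
                      return count
                  return increment

              counter = make_counter(10)
              print(counter())   # 11
              print(counter(5))  # 16

              # Partial application: fix some arguments now, supply the rest later.
              def log(level: str, message: str) -> None:
                  print(f"[{level}] {message}")

              warn = partial(log, "WARN")
              warn("disk almost full")  # [WARN] disk almost full
              ```

              Nobody has to write this from memory; being able to explain why nonlocal is needed, or what partial buys you over a lambda, already tells us plenty.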

        • sugar_in_your_tea@sh.itjust.works · 4 months ago

          We’re a somewhat big player in a niche industry that manufactures for a large industry. Yearly profits are in the hundreds of millions of dollars, market cap is a few billion, so low end of mid cap stocks. I don’t want to doxx myself, but think of something like producing drills for oil rigs and you won’t be far off.

          We have about 50 software developers across three time zones (7 or 8 scrum teams) and a pretty high requirement for correctness, with very little emphasis on rapid delivery. It’s okay if something takes more time, as long as we can plan around it, so we end up with estimates like 2-3 months for things that could have an MVP in under a month (in fact, we often build an MVP during estimation), with the extra time spent testing.

          So yeah, it’s a nice place to work. I very rarely stay late, and it’s never because a project is late, but because of a high severity bug in prod (e.g. a customer can’t complete a task).

  • RamenJunkie@midwest.social · 4 months ago

    I am not a professional coder, just a hobbyist, but I am increasingly digging into cybersecurity concepts.

    And even as an “amateur cybersecurity” person, everything about what you describe, and about LLM coders, terrifies me, because that shit is never going to have any proper security methodology implemented.

  • Jack@slrpnk.net · 4 months ago

    Very “back in my day” energy.

    I do not support AI, but programming is about solving problems, not writing code.

    If we judge developers by their tools, then the people who programmed with punch cards weren’t developers either. Is that a bad thing?

    • maniclucky@lemmy.world · 4 months ago

      You’re right that the goal is problem solving; you’re wrong that the inability to code isn’t a problem.

      AI can write a for loop and do common tasks, but the moment you have something halfway novel to do, it has a habit of shitting itself and pretending that the feces is good code. And if you can’t read code, you can’t tell the shit from the stuff you want.

      It may be able to do it in the future, but it can’t yet.

      Source: data engineer who has fought his AI a time or two.

      • Jack@slrpnk.net · 4 months ago

        Of course - I use it daily for coding as well, and AI is shit.

        Again, I in no way support AI; I just think the argument made in the article is also not good.

  • 7fb2adfb45bafcc01c80@lemmy.world · 4 months ago

    To me this feels like a problem perpetuated by management. I see it on the system administration side as well - they don’t care if people understand why a tool works; they just want someone who can run it. If there’s no free thought, the people are interchangeable and easily replaced.

    I often see it farmed out to vendors when actual thought is required, and it’s maddening.

    • icmpecho@lemmy.ml · 4 months ago

      I always found this upsetting as an IT tech at a former company - when a network or server had an issue and I was sent to resolve it, the fix was always “just reboot it,” which never kept the problem from recurring and bringing the server down at 07:00 the next Monday.

      The limitations on the questions I could ask hurt that SLA more than any network switch’s memory leak ever did, and I felt as if my expertise meant nothing as a result.

  • nexguy@lemmy.world · 4 months ago

    Stack Overflow and Google were once the “AI” of the previous generation. “These kids can’t code, they just copy what others have done”

    • lightnsfw@reddthat.com · 4 months ago

      As someone who can’t code (not a developer) but occasionally needs to dip my toes in it, I’ve learned quite a bit from using ChatGPT and then picking apart whatever it shat out to figure out why it’s not working. It’s still better than starting from scratch on whatever it is I’m working on, because usually I don’t even know where to begin.

      • embed_me@programming.dev · 4 months ago

        And when copy-pasting didn’t work, those who dared to rise above and understand it became better. Same with AI: those of the new generation who see through the slop will learn. It’s the same as it has always been. Software engineering is more accessible than ever; say what you will about the current landscape of the field, but that fact remains undeniable.

        • Feathercrown@lemmy.world · 4 months ago

          I’m glad that AI is making it easier to enter into new areas of knowledge. I just hope it won’t be used as a crutch too far into people’s journeys.

        • uranibaba@lemmy.world · 4 months ago

          Software engineering is more accessible than ever

          This is key here. With it more accessible, we see more people trying to code who do not want to learn, but we also see more people who want to learn and create solutions.

        • λλλ@programming.dev · 4 months ago

          Well said. Some of the most talented devs I know use Stack Overflow. It depends on how you use it.
