By “good” I mean code that is written professionally and concisely (and obviously works as intended). Apart from personal interest and understanding what the machine spits out, is there any legit reason anyone should learn advanced coding techniques? Specifically in an engineering perspective?

If not, learning how to write code seems a tad trivial now.

  • daniskarma@lemmy.dbzer0.com · 3 months ago

    For small boilerplate or very common pieces of code (for instance, a famous algorithm implementation), yes, since they are probably just giving you the top Stack Overflow answer to a classic question.
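As a concrete example of that kind of canonical snippet, here is the sort of textbook binary search an LLM reliably regurgitates (my own illustration of the pattern, not actual model output):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1          # target is in the upper half
        else:
            hi = mid - 1          # target is in the lower half
    return -1
```

It works precisely because thousands of near-identical copies exist in the training data.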

    Anything that the LLM would need to mix or refactor would be terrible.

  • cley_faye@lemmy.world · 3 months ago

    For repetitive tasks, it can take a first template you write by hand and extrapolate it into multiple variations almost automatically.

    Beyond that… not really. Anything beyond single-line completion quickly devolves into something messy, non-working, or worse, working but not as intended. For extremely common cases it will work fine; but extremely common cases are either moved out into shared code, or take less time to write than to “generate” and check.

    I’ve been using code completion/suggestion on the regular, and there were times when I was pleasantly surprised by what it produced, but even then I had to look after it and fix some things. And while I can’t quantify how often it happened, there were a lot of times where it produced convincing gibberish.

    • A_A@lemmy.world · 3 months ago

      Yes … and it doesn’t know when it is on time.
      Also, machines are getting better and they can help us with inspiration.

  • recapitated@lemmy.world · 3 months ago

    In my experience they do a decent job of whipping out mindless minutiae and things that are well-known patterns in very popular languages.

    They do not solve problems.

    I think for an “AI” product to be truly useful at writing code it would need to incorporate the LLM as a mere component, with something facilitating checks through static analysis and maybe some other technologies, maybe even mulling the result through a loop over the components until they’re all satisfied before finally delivering it to the user as a proposal.
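A minimal sketch of that kind of loop, where the LLM is just one component and a static check gates what gets delivered. The `generate_code` function is a hypothetical stand-in for a real model API call, and Python's own `ast` parser stands in for deeper static analysis:

```python
import ast

def generate_code(prompt, feedback=None):
    # Hypothetical stand-in for an LLM call; a real system would send
    # the prompt (plus any error feedback) to a model API.
    return "def add(a, b):\n    return a + b\n"

def propose_code(prompt, max_rounds=3):
    """Loop: generate, statically check, feed errors back, repeat."""
    feedback = None
    for _ in range(max_rounds):
        code = generate_code(prompt, feedback)
        try:
            ast.parse(code)   # static check: is it even valid Python?
            return code       # checks satisfied -> deliver as a proposal
        except SyntaxError as err:
            feedback = f"Syntax error on line {err.lineno}: {err.msg}"
    raise RuntimeError("no acceptable candidate after retries")
```

A production version would swap in real linters, type checkers, and test runs at the `ast.parse` step.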

    • Croquette@sh.itjust.works · 3 months ago

      It’s a decent starting point for a new language. I had to learn webdev as an embedded C coder, and using an LLM while cross-referencing the official documentation makes a new language much more approachable.

      • recapitated@lemmy.world · 3 months ago

        I agree, LLMs have been helpful in pointing me in the right direction and helping me rethink what questions I actually want to ask in disciplines I’m not very familiar with.

    • thanks_shakey_snake@lemmy.ca · 3 months ago

      Those kinds of patterns are already emerging! That “mulling the result through a loop” step is called “reflection,” and it does a great job of catching mistakes and hallucinations. Nothing is on the scale of doing the whole problem-solving and implementation from business requirements to deployed product-- probably never will be, IMO-- but this “making the LLM a component in a broader system with diverse tools” is definitely something that we’re currently figuring out patterns for.

  • Jimmycrackcrack@lemmy.ml · 3 months ago

    I don’t know how to program, but to a very limited extent I can sorta kinda almost understand the logic of very short and simplistic code that’s been written for me by someone who can actually code. I tried to get ChatGPT to write a shell script for me to work as part of an Apple shortcut. It had no idea. It was useless, and ridiculously inconsistent and forgetful. It was the first and only time I used ChatGPT. Not very impressed.

    Given that it is smart enough to produce output that’s in the general area of correct, albeit still wrong and logically flawed, I would guess it could eventually be carefully prodded into making one small snippet someone might call “good”. But at that point it feels much more like an accident, in the same way that someone who has memorised a lot of French vocabulary but never actually learned French might produce a coherent sentence once in a while, having tried and failed fifty times before and failing again immediately after, without ever knowing the difference.

  • Septimaeus@infosec.pub · 3 months ago

    Theoretically, I would say yes it’s possible, insofar as we could break down most subtasks of the development process into training parameters. But we are a long way from that currently.

    • finestnothing@lemmy.world · 3 months ago

      My CTO thoroughly believes that within 4-6 years we will no longer need to know how to read or write code, just how to ask an AI to do it. Coincidentally, he also doesn’t code anymore and hasn’t for over 15 years.

      • Bilb!@lem.monster · 3 months ago

        I think he’s correct and there’s a ton of cope going on on lemmy right now. I also think tons of art/graphic design jobs will disappear never to return.

      • recapitated@lemmy.world · 3 months ago

        From a business perspective, no shareholder cares about how good an employee is at personally achieving a high degree of skill. They only care about selling and earning, and to a lesser degree about an enduring reputation for longer-term earnings.

        Economics could very well drive this forward. But I don’t think the craft will be lost. People will need to supervise this progress as well as collaborate with the machines to extend its capabilities and dictate its purposes.

        I couldn’t tell you if we’re talking on a time scale of months or decades, but I do think “we” will get there.

        • whyrat@lemmy.world · 3 months ago

          Hackers and hobbyists will persist despite any economics. Much of what they do I don’t see AI replacing, as AI creates based on what it “knows”, which is mostly things it has previously ingested.

          We are not (yet?) at the point where an LLM does anything other than put together code snippets it has seen or derived. If you ask it to find a new attack vector, or for code dissimilar to anything it has seen before, the results are poor.

          But the counterpoint every developer needs to keep in mind: AI will only get better. It’s not going to lose any of the current capabilities to generate code, and very likely will continue to expand on what it can accomplish. It’d be naive to assume it can never achieve these new capabilities… The question is just when & how much it costs (in terms of processing and storage).

          • recapitated@lemmy.world · 3 months ago

            Agree, and the point I always want to make is that any LLM or neural net or any other AI tech is going to be a mere component in a powerful product, rather than the entirety of the product.

            The way I think of it is that my brain is of little value without my body, and my person is of little value without my team at work. I don’t exist in a vacuum but I can be highly productive within my environment.

    • SolOrion@sh.itjust.works · 3 months ago

      That’s some 40k shit.

      “What does it mean?” “I do not know, but it appeases the machine spirit. Quickly, recite the canticles.”

      • RebekahWSD@lemmy.world · 3 months ago

        This is directly how we’re getting to a 40k future and I hate it. The bad future!

        If we must I might join the Mechanicus though. I’m good at chanting and doing things by rote.

    • Angry_Autist (he/him)@lemmy.world · 3 months ago

      Don’t be, there will come a time when nearly all code is AI created, and not human readable.

      You need to worry for the future, when big data sites are running code they literally don’t understand and have no way to verify, because of how cheap and relatively effective it is.

      Then, after that, LLMs will get better at coding than any human can achieve, but it will still be black-box, human-unreadable code, and there will be no chain of discipline left to teach new programmers.

      Hardly anyone is taking this seriously, because corporations stand to make a fucktonne of money and normal people are in general clueless about complex subjects that require nuanced understanding, yet strangely arrogant about their ignorant opinions based on movies and their drinking buddies’ malformed opinions.

  • edgemaster72@lemmy.world · 3 months ago (edited)

    understanding what the machine spits out

    This is exactly why people will still need to learn to code. It might write good code, but until it can write perfect code every time, people should still know enough to check and correct the mistakes.

    • visor841@lemmy.world · 3 months ago

      For a very long time people will also still need to understand what they are asking the machine to do. If you tell it to write code for an impossible concept, it can’t make it. If you ask it to write code to do something incredibly inefficiently, it’s going to give you code that is incredibly inefficient.
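To make that concrete (my own illustration, not model output): ask for "recursive Fibonacci" and that is exactly what you get; nothing in the model objects to the exponential blow-up unless you know to ask for something better.

```python
from functools import lru_cache

def fib_naive(n):
    # What you get if you ask for "recursive Fibonacci":
    # exponentially many repeated calls.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Same recurrence, but only if you *ask* for memoization: O(n) calls.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

Both are "correct"; only the user's knowledge determines which one gets requested.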

      • 667@lemmy.radio · 3 months ago (edited)

        I used an LLM to write some code I knew I could write, but was a little lazy to do. Coding is not my trade, but I did learn Python during the pandemic. Had I not known to code, I would not have been able to direct the LLM to make the required corrections.

        In the end, I got decent code that worked for the purpose I needed.

        I still didn’t write any docstrings or comments.

        • Em Adespoton@lemmy.ca · 3 months ago

          I would not trust the current batch of LLMs to write proper docstrings and comments, as the code it is trained on does not have proper docstrings and comments.

          And this means that it isn’t writing professional code.

          It’s great for quickly generating useful and testable code snippets though.

          • GBU_28@lemm.ee · 3 months ago

            It can absolutely write a docstring for a provided function. That and unit tests are some of the easiest things for it, because it has the source code to work from.
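For example, given a small function, the docstring and test below are exactly the kind of output current models handle well, since both are derived directly from the visible source (this pair is my own illustration of the pattern, not model output):

```python
def slugify(title):
    """Convert a title to a URL slug: lowercased, hyphen-separated.

    >>> slugify("Hello World")
    'hello-world'
    """
    # split() with no arguments collapses runs of whitespace,
    # so leading/trailing/extra spaces are handled for free.
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces   everywhere ") == "spaces-everywhere"
```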

    • scarabic@lemmy.world · 3 months ago (edited)

      I’ve even seen human engineers’ code thrown out because no one else could understand it. Back in the day, one webdev took it upon himself to whip up a mobile version of our company’s very complex website. He did it as a side project. It worked. It was complete. It looked good. It was very fast. The code was completely unreadable by anyone else. We didn’t use it.

  • saltesc@lemmy.world · 3 months ago

    In my experience, not at all. But sometimes they help with creativity when you hit a wall or challenge you can’t resolve.

    They have been trained on internet examples where everyone has a different style/method of coding, like writing style. It’s all very messy and very unreliable. It will be years before LLMs code “good”, and it will require a lot of training that isn’t scraping.

  • Emily (she/her)@lemmy.blahaj.zone · 3 months ago

    After a certain point, learning to code (in the context of application development) becomes less about the lines of code themselves and more about structure and design. In my experience, LLMs can spit out well-formatted and reasonably functional short code snippets, with the caveat that they sometimes misunderstand you, or, if you’re writing UI code, make very strange decisions (since they have no spatial/visual reasoning).

    Anyone with a year or two of practice can write mostly clean code like an LLM. But most codebases are longer than 100 lines, and your job is to structure that program and introduce patterns to make it maintainable. LLMs can’t do that; only you can (and you can’t skip learning to code to jump straight to architecture and patterns).

      • Em Adespoton@lemmy.ca · 3 months ago

        The other thing is, an LLM generally knows about all the existing libraries and what they contain. I don’t. So while I could code a pretty good program in a few days from first principles, an LLM is often able to stitch together some elegant glue code using a collection of existing library functions in seconds.
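The kind of glue code meant here, as a sketch (the task is my own example): rather than writing file walking and word counting from first principles, a model tends to reach straight for existing library pieces and stitch them together.

```python
from collections import Counter
from pathlib import Path

def top_words(directory, n=3):
    """Most common words across all .txt files under directory."""
    counts = Counter()
    # rglob does the recursive directory walk; Counter does the tallying.
    for path in Path(directory).rglob("*.txt"):
        counts.update(path.read_text().lower().split())
    return counts.most_common(n)
```

Three library calls replace what would otherwise be a few dozen lines of hand-rolled traversal and dictionary bookkeeping.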

    • jacksilver@lemmy.world · 3 months ago (edited)

      I think this is the best response in this thread.

      Software engineering is a lot more than just writing some lines of code and requires more thought and planning than can be realistically put into a prompt.

  • TranquilTurbulence@lemmy.zip · 3 months ago

    Yes and no. GPT usually gives me clever solutions I wouldn’t have thought of. Very often GPT also screws up, and I need to fine-tune variable names, function parameters and such.

    I think the best thing about GPT is that it knows the documentation of every function, so I can ask technical questions. For example: can this function really handle dataframes, or will it internally convert the variable into a matrix and then spit out a dataframe as if nothing happened? Such conversions tend to screw up the data, which explains some strange errors I bump into. You could read all of the documentation to find out, or you could just ask GPT. Alternatively, you could show it how badly the data got screwed up after a particular function, and GPT would tell you it’s because that function uses matrices internally, even though it looks like it works with dataframes.
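The failure mode is easy to demonstrate even without any dataframe library. Here is a pure-stdlib analogy of my own: converting a labelled, typed table to a homogeneous numeric matrix silently drops the column labels and upcasts the integer values, and the round trip cannot recover either.

```python
def to_matrix(frame):
    # Homogeneous float matrix, mimicking what happens when a function
    # internally converts a dataframe to a numeric array: the column
    # names vanish and every value is upcast to float.
    return [[float(v) for v in col] for col in frame.values()]

frame = {"age": [30, 41], "score": [9, 7]}   # labelled int columns
matrix = to_matrix(frame)                    # [[30.0, 41.0], [9.0, 7.0]]
# Nothing in `matrix` remembers "age"/"score" or that the data was int.
```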

    I think of GPT as an assistant painter some famous artists had. The artist tells the assistant to paint the boring trees in the background and the rough shape of the main subject. Once that’s done, the artist can work on the fine details, sign the painting, send it to the local king and charge a thousand gold coins.

  • ImplyingImplications@lemmy.ca · 3 months ago

    Writing code is probably one of the few things LLMs actually excel at. Few people want to program something nobody has ever done before. Most people are just reimplementing the same things over and over with small modifications for their use case. If imports of generic code someone else wrote make up 90% of your project, what’s the difference in getting an LLM to write that 90% of your code?

    • etchinghillside@reddthat.com · 3 months ago

      Yes, two six-sided dice (2d6) are capable of rolling a sum of 9. Here are the possible combinations that would give a total of 9:

      • 3 + 6
      • 4 + 5
      • 5 + 4
      • 6 + 3

      So, there are four different combinations that result in a roll of 9.

      See? LLMs can do everything!
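(For what it's worth, the bot got this one right; enumerating the ordered 2d6 rolls confirms there are exactly four that sum to 9:)

```python
from itertools import product

# All ordered (die1, die2) pairs whose faces sum to 9.
rolls = [(a, b) for a, b in product(range(1, 7), repeat=2) if a + b == 9]
# → [(3, 6), (4, 5), (5, 4), (6, 3)]
```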

        • Fonzie!@ttrpg.network · 3 months ago (edited)

          I asked four LLM-based chatbots over DuckDuckGo’s anonymised service the following:

          “How many r’s are there in Strawberry?”


          GPT-4o mini

          There are three “r’s” in the word “strawberry.”

          Claude 3 Haiku

          There are 3 r’s in the word “Strawberry”.

          Llama 3.1 70B

          There are 2 r’s in the word “Strawberry”.

          Mixtral 8x7B

          There are 2 “r” letters in the word “Strawberry”. Would you like to know more about the privacy features of this service?


          They got worse at the end, but at least GPT and Claude can count letters.
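The irony being that one line of actual code settles it (LLMs see tokens rather than individual letters, which is why they stumble on exactly this kind of question):

```python
word = "Strawberry"
print(word.lower().count("r"))  # → 3
```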

      • bionicjoey@lemmy.ca · 3 months ago

        Sorry, I wasn’t trying to berate you. Just trying to illustrate the underlying assumption of your question

      • etchinghillside@reddthat.com · 3 months ago

        Wouldn’t exactly take the comment as negative.

        The output of current LLMs is hit or miss sometimes. And when it misses, you might find yourself in a long chain of persuading a sassy robot to write things as you intended.

  • Ookami38@sh.itjust.works · 3 months ago

    Of course it can. It can also spit out trash. AI, as it exists today, isn’t meant to be autonomous; you ask it for something and it spits something out, but it’s meant to work with a human on a task. Assuming you have an understanding of what you’re trying to do, an AI can probably provide you with a pretty decent starting point. It tends to be good at analyzing existing code as well, so pasting your code into GPT and asking why it’s doing a thing usually works pretty well.

    AI is another tool. Professionals will get more use out of it than laymen. Professionals know enough to phrase requests that are within the scope of the AI. They tend to know how the language works, and thus can review what the AI outputs. A layman can use AI to great effect, but will run into problems as they start butting up against their own limited knowledge.

    So yeah, I think AI can make some good code, supervised by a human who understands the code. As it exists now, AI requires human steering to be useful.