• surph_ninja@lemmy.world · edited · 3 hours ago

    This is the same kind of short-sighted dismissal I see a lot in the religion-vs-science argument. When they hinge their pro-religion stance on the things science can’t explain, they’re defending an ever-diminishing territory as science grows to explain more things.

    All of the anti-AI positions that hinge on the low quality or reliability of the output are defending an increasingly diminished stance as the AIs are further refined. And I simply don’t believe that the majority of the people making this argument actually care about the quality of the output. Even when it gets to the point of producing better output than humans across the board, these folks are still going to oppose it regardless. Why not just openly oppose it in general, instead of pinning your position to an argument that grows increasingly irrelevant by the day?

    DeepSeek exposed the same issue with the anti-AI people dedicated to the environmental argument. We were shown proof that there’s significant progress in the development of efficient models, and it still didn’t change any of their minds. Because most of them don’t actually care about the environmental impacts. It’s just an anti-AI talking point that resonated with them.

    The more baseless these anti-AI stances get, the more it seems to me that it’s a lot of people afraid of change and afraid of the fundamental economic shifts this will require, but they’re embarrassed or unable to articulate that stance. And it doesn’t help that the luddites haven’t been able to predict a single development. Just constantly flailing to craft a new argument to criticize the current models and tech. People are learning not to take these folks seriously.

    • chaonaut@lemmy.4d2.org · 2 hours ago

      Maybe the marketers should be a bit more picky about what they slap “AI” on, and maybe decision makers should be a little less eager to follow whatever Better Autocomplete spits out. But maybe that’s just me, and we really should be pretending that all these algorithms have made humans obsolete and that generating convincing language is better than correspondence with reality.

      • surph_ninja@lemmy.world · 2 hours ago

        I’m not sure the anti-AI-marketing stance is any more solid a position. Though it’s probably easier to defend, since it’s so vague and not based on anything measurable.

        • chaonaut@lemmy.4d2.org · 2 hours ago

          Calling AI measurable is somewhat unfounded. Between not having a coherent, agreed-upon definition of what does and does not constitute an AI (we are, after all, discussing LLMs as though they were AGI), and the difficulty that exists in discussing the qualifications of human intelligence, saying that a given metric covers how well a thing is an AI isn’t really founded on anything but preference. We could, for example, say that mathematical ability is indicative of intelligence, but claiming FLOPS is a proxy for intelligence falls rather flat. We can measure things about the various algorithms, but that’s an awfully long way off from talking about AI itself (unless we’ve bought into the marketing hype).

          • surph_ninja@lemmy.world · edited · 1 hour ago

            So you’re saying the article’s measurements about AI agents being wrong 70% of the time are made up? Or is AI performance only measurable when the results help anti-AI narratives?

            • Jakeroxs@sh.itjust.works · 25 minutes ago

              I would definitely bet it’s made up and poorly designed.

              I wish that weren’t the case, because having actual data would be nice, but these studies are almost always funded with some sort of intentional slant. For example, nicotine vape safety, where they clearly don’t use the product sanely and then make wild claims about how there’s lead in the vapes!

              Homie, you’re fucking running the thing completely dry for longer than any human could possibly actually hit the vape; no shit it’s producing carcinogens.

              Go burn a bunch of paper and directly inhale the smoke and tell me paper is dangerous.

            • chaonaut@lemmy.4d2.org · 3 minutes ago

              I mean, sure, in that the expectation is that the article is talking about AI in general. The cited paper is discussing LLMs and their ability to complete tasks. So we have to agree that LLMs are what we mean by AI, and that their ability to complete tasks is a valid metric for AI. If we accept the marketing hype, then of course LLMs are exactly what we’ve been talking about with AI, and we’ve accepted LLMs’ features and limitations as what AI is. And if LLMs are prone to filling in whatever best fits the model, without regard to accuracy, then by accepting LLMs as what we mean by AI, we accept that AI fits to its model without regard to accuracy.

  • Katana314@lemmy.world · 4 hours ago

    I’m in a workplace that has tried not to be overbearing about AI, but has encouraged us to use them for coding.

    I’ve tried to give mine some very simple tasks like writing a unit test just for the constructor of a class to verify current behavior, and it generates output that’s both wrong and doesn’t verify anything.

    I’m aware it sometimes gets better with more intricate, specific instructions, and that I can offer it further corrections, but at that point it’s not even saving time. I would do this with a human in the hopes that they would continue to retain the knowledge, but I don’t even have hopes for AI to apply those lessons in new contexts. In a way, it’s been a sigh of relief to realize that, just like the dotcom era, just like 3D TVs, just like home smart assistants, it is a bubble.

    • MangoCats@feddit.it · 4 hours ago

      The first half dozen times I tried AI for code, across the past year or so, it failed pretty much as you describe.

      Finally, I hit on some things it can do. For me, keeping the instructions more general (not specifying certain libraries, for instance) was the key to getting something that actually does something. Also, if it doesn’t show you the whole program, get it to show you the whole thing, and make it fix its own mistakes so you can build on working code with later requests.

      • vivendi@programming.dev · 3 hours ago

        Have you tried insulting the AI in the system prompt (as well as other tunes to the system prompt)?

        I’m not joking, it really works

        For example:

        Instead of “You are an intelligent coding assistant…”

        “You are an absolute fucking idiot who can barely code…”
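
        In script form that might look something like this; the client wiring is the standard openai Python package, but the model name and both prompts are placeholders, not anything tested here:

            # Hedged sketch: swap in whatever provider/model you actually use.
            from openai import OpenAI

            client = OpenAI()  # reads OPENAI_API_KEY from the environment

            response = client.chat.completions.create(
                model="gpt-4o",  # assumption, not a recommendation
                messages=[
                    {"role": "system",
                     "content": "You are an absolute fucking idiot who can barely code..."},
                    {"role": "user",
                     "content": "Write a function that parses a CSV line into fields."},
                ],
            )
            print(response.choices[0].message.content)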

        • MangoCats@feddit.it · 3 hours ago

          I frequently find myself prompting it: “now show me the whole program with all the errors corrected.” Sometimes I have to ask that two or three times, different ways, before it coughs up the next iteration ready to copy-paste-test. Most times when it gives errors I’ll just write "address: " and copy-paste the error message in; frequently the text of the AI response will apologize, less frequently it will actually fix the error.

        • rozodru@lemmy.world · 3 hours ago

          “You are an absolute fucking idiot who can barely code…”

          Honestly, that’s what you have to do. It’s the only way I can get through using Claude.ai. I treat it like it’s an absolute moron, I insult it, I “yell” at it, I threaten it, and guess what? The solutions have gotten better. Not great, but a hell of a lot better than what they used to be. It really works: it forces it to really think through the problem, research solutions, cite sources, etc. I have even told it I’ll cancel my subscription to it if it gets it wrong.

          No more “do this and this and then this, but do this first and then do this.” After calling it a “fucking moron” and what have you, it will provide an answer and just say “done.”

            • MangoCats@feddit.it · 3 hours ago

              He’s developing a toxic relationship with his AI agent. I don’t think demonstrating how to be abusive to the AI is the best way to get what you want, but maybe it’s the only method he is capable of getting results with.

  • TimewornTraveler@lemmy.dbzer0.com · edited · 5 hours ago

    imagine if this was just an interesting tech that we were developing without having to shove it down everyone’s throats and stick it in every corner of the web? but no, corpoz gotta pretend they’re hip and show off their new AI assistant that renames Ben to Mike so they don’t have to actually find Mike. capitalism ruins everything.

    • MangoCats@feddit.it · 4 hours ago

      There’s a certain amount of: “if this isn’t going to take over the world, I’m going to just take my money and put it in something that will” mentality out there. It’s not 100% of all investors, but it’s pervasive enough that the “potential world beaters” are seriously over-funded as compared to their more modest reliable inflation+10% YoY return alternatives.

    • MangoCats@feddit.it · 5 hours ago

      I ask AI to write simple little programs. One time in three they actually compile without errors. To the credit of the AI, I can feed it the error and about half the time it will fix it. Then, when it compiles and runs without crashing, about one time in three it will actually do what I wanted. To the credit of AI, I can give it revised instructions and about half the time it can fix the program to work as intended.

      So, yeah, a lot like interns.

  • kameecoding@lemmy.world · 12 hours ago

    For me, as a software developer, the accuracy is more in the 95%+ range.

    On one hand, the built-in Copilot chat widget in IntelliJ basically replaces a lot of my Google queries.

    On the other hand, it is rather fucking good at executing some rewrites that are a fucking chore to do manually but can easily be done by Copilot.

    Imagine you have a script that initializes your DB with some test data. You have an INSERT INTO statement with lots of columns and rows, so:

        INSERT INTO the_table (column1, …, column_n)
        VALUES (row_1),
               (row_2),
               …
               (row_n);

    Adding a new column with test data for each row is a PITA, but Copilot handles it without issue.

    Similarly, when writing unit tests you do a lot of edge-case testing, which is a bunch of almost identical tests with maybe one variable changing. At most you write one of those tests, then Copilot will auto-generate the rest after you name the next unit test; it’s pretty good at guessing what you want to do in that test, at least with my naming scheme.
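
    That edge-case fan-out is also roughly what pytest parametrization collapses to in Python; a hedged sketch, with a made-up function and cases:

        # Illustrative only: clamp() and the edge cases are made up.
        import pytest

        def clamp(value, lo, hi):
            """Tiny function under test (stand-in for real code)."""
            return max(lo, min(hi, value))

        @pytest.mark.parametrize("value,expected", [
            (-1, 0),     # below range clamps to the minimum
            (0, 0),      # boundary: minimum stays put
            (50, 50),    # in-range value passes through
            (100, 100),  # boundary: maximum stays put
            (101, 100),  # above range clamps to the maximum
        ])
        def test_clamp_edges(value, expected):
            assert clamp(value, 0, 100) == expected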

    So yeah, it’s way overrated for many-many things, but for programming it’s a pretty awesome productivity tool.

    • DahGangalang@infosec.pub · 7 hours ago

      Yeah, it (in my case, ChatGPT) has been great for helping me along with functions I’m only passingly familiar with / trying to use in new ways.

      One that I was really surprised with was that it gave me a surprisingly robust, sensible, and (seemingly) well-tuned-to-my-case checklist of things to inspect for a used car I intend to buy. I’m already mostly familiar with what I’m doing there, but it pointed to some things I might’ve overlooked or didn’t know were points of concern for the specific vehicle I’m looking at.

      • zbyte64@awful.systems · 5 hours ago

        Pepperidge Farm remembers when you could just do a web search and get it answered in the first couple of results. Then the SEO wars happened…

    • Nalivai@discuss.tchncs.de · 7 hours ago

      Keep doing what you do. Your company will pay me handsomely to throw out all your bullshit and write working code you can trust when you’re done. If your company wants to have a product in the future, that is.

      • kameecoding@lemmy.world · edited · 7 hours ago

        Lmao, okay buddy. Based on how many interviews I have sat in on, the chances that you are a worse programmer than me are much higher than the chances of you being better than me.

        Being a pompous ass who’s dismissive of new tooling makes your chances even worse 😕

        • Nalivai@discuss.tchncs.de · 6 hours ago

          The person who uses fancy autocomplete to write their code will be exactly the person who thinks they’re better than everyone. Those traits are correlated.

          • kameecoding@lemmy.world · edited · 5 hours ago

            Do you use an IDE for writing your code, or do you use a notepad like a “real” programmer? An IDE like IntelliJ has fancy shit like generating getters, setters, constructors, equals/hashCode; you should never use those, real programmers write those by hand.

            Your attention to detail is very good, btw. Which I am, of course, being sarcastic about, because if you had any you’d have noticed I never said I write my code with ChatGPT; I said unit tests, and SQL for unit tests.

            Ofc attention to detail is not a requirement of software engineering, so you should be good. (This was also sarcasm; I feel like that needs to be pointed out for you.)

            Also, by your implied logic that code not written by you = bad, no company should ever hire junior engineers. I mean, what are you gonna do? Fucking read the code they wrote?

            • Nalivai@discuss.tchncs.de · 4 hours ago

              Were you prone to these weird leaps of logic before your brain was fried by talking to LLMs, or did you start being a fan of talking to LLMs because your ability to logic was… well… that?

        • PotentialProblem@sh.itjust.works · 6 hours ago

          I’ve been in the industry a while and your assessment is dead on.

          As long as you’re not blindly committing the code, it’s a huge time saver for a number of mundane tasks.

          It’s especially fantastic for writing throwaway tooling. Need data massaged a specific way? Ez pz. Need a script to execute an API call on each entry in a spreadsheet? No problem.
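
          For example, the spreadsheet one might be as little as this; a hedged sketch where the CSV columns and the endpoint are made up:

              # Hedged sketch: columns and endpoint are hypothetical.
              import csv

              import requests

              with open("entries.csv", newline="") as f:
                  for row in csv.DictReader(f):
                      resp = requests.post(
                          "https://api.example.com/v1/items",  # placeholder
                          json={"name": row["name"], "qty": int(row["qty"])},
                          timeout=30,
                      )
                      resp.raise_for_status()
                      print(row["name"], "->", resp.status_code)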

          The guy above you is a nutter. Not sure if people haven’t tried leveraging LLMs or what. It has a ton of faults, but it really does speed up the mundane work. Also, clearly the person is either brand new to the field or doesn’t even work in it. Otherwise they would have seen the barely functional shite that actual humans churn out.

          Part of me wonders if code organization is going to start optimizing for interpretation by these models rather than humans.

          • zbyte64@awful.systems · 5 hours ago

            When LLMs get it right, it’s because they’re summarizing a Stack Overflow or GitHub snippet they were trained on. But you lose all the benefits of other humans commenting on the context, pitfalls, and other alternatives.

    • loonsun@sh.itjust.works · 5 hours ago

      It’s about agents, which implies multi-step: agents are meant to execute a series of tasks, as opposed to studies looking at base LLM model performance.

  • jsomae@lemmy.ml · edited · 19 hours ago

    I’d just like to point out that, from the perspective of somebody watching AI develop for the past 10 years, completing 30% of automated tasks successfully is pretty good! Ten years ago they could not do this at all. Overlooking all the other issues with AI, I think we are all irritated with the AI hype people for saying things like they can be right 100% of the time – Amazon’s new CEO actually said they would be able to achieve 100% accuracy this year, lmao. But being able to do 30% of tasks successfully is already useful.

    • MangoCats@feddit.it · 5 hours ago

      being able to do 30% of tasks successfully is already useful.

      If you have a good testing program, it can be.

      If you use AI to write the test cases…? I wouldn’t fly on that airplane.

    • Shayeta@feddit.org · 18 hours ago

      It doesn’t matter if you need a human to review. AI has no way of distinguishing between success and failure. Either way, a human will have to review 100% of those tasks.

      • MangoCats@feddit.it · 5 hours ago

        I have been using AI to write (little, near trivial) programs. It’s blindingly obvious that it could be feeding this code to a compiler and catching its mistakes before giving them to me, but it doesn’t… yet.
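
        Nothing stops you from wiring that loop up yourself today. A rough sketch, assuming the openai client and gcc on the path; the prompts, model name, and retry count are placeholder assumptions:

            # Hedged sketch of a compile-and-retry loop around an LLM.
            import subprocess
            import tempfile

            from openai import OpenAI

            client = OpenAI()

            def chat(prompt: str) -> str:
                r = client.chat.completions.create(
                    model="gpt-4o",  # assumption
                    messages=[{"role": "user", "content": prompt}],
                )
                return r.choices[0].message.content

            def compile_errors(source: str) -> str | None:
                """Return gcc's errors, or None if the program builds."""
                with tempfile.NamedTemporaryFile(suffix=".c", mode="w",
                                                 delete=False) as f:
                    f.write(source)
                    path = f.name
                result = subprocess.run(["gcc", "-o", "/tmp/prog", path],
                                        capture_output=True, text=True)
                return result.stderr if result.returncode != 0 else None

            code = chat("Write a small C program that ...")
            for _ in range(3):  # give it a few attempts
                errors = compile_errors(code)
                if errors is None:
                    break
                code = chat(f"address: {errors}\n\nHere is the code:\n{code}")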

      • Outbound7404@lemmy.ml · 7 hours ago

        A human can review something close to correct a lot better than starting the task from zero.

        • MangoCats@feddit.it · 5 hours ago

          In University I knew a lot of students who knew all the things but “just don’t know where to start” - if I gave them a little direction about where to start, they could run it to the finish all on their own.

          • MangoCats@feddit.it · 5 hours ago

            harder to notice incorrect information in review than making sure it is correct when writing it.

            That depends entirely on your writing method and attention span for review.

            Most people make stuff up off the cuff and skim anything longer than 75 words when reviewing, so the bar for AI improving over that is really low.

          • loonsun@sh.itjust.works · 5 hours ago

            Depends on the context. There is a lot of work in the scientific-methods community trying to use NLP to augment traditionally fully human processes, such as thematic analysis and systematic literature reviews, and you can have protocols for validation there without 100% human review.

      • jsomae@lemmy.ml · 17 hours ago

        Right, so this is really only useful in cases where either it’s vastly easier to verify an answer than posit one, or if a conventional program can verify the result of the AI’s output.

        • MangoCats@feddit.it · 5 hours ago

          It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

          I’m envisioning a world where multiple AI engines create and check each others’ work… the first thing they need to make work to support that scenario is probably fusion power.

          • zbyte64@awful.systems · 5 hours ago

            It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

            I usually write 3x the code to test the code itself. Verification is often harder than implementation.

            • jsomae@lemmy.ml · edited · 41 minutes ago

              It really depends on the context. Sometimes there are domains which require solving problems in NP, but where it turns out that most of these problems are actually not hard to solve by hand with a bit of tinkering. SAT solvers might completely fail, but humans can do it. Often it turns out that this means there’s a better algorithm that can exploit commonalities in the data. But a brute force approach might just be to give it to an LLM and then verify its answer. Verifying NP problems is easy.

              (This is speculation.)
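
              To make “verifying NP problems is easy” concrete: checking a candidate SAT assignment is a few lines and linear time, while finding one is the hard part. A toy sketch:

                  def satisfies(clauses, assignment):
                      """Each clause is a list of ints: 3 means x3, -3 means NOT x3.
                      The formula holds if every clause has a true literal."""
                      return all(
                          any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                          for clause in clauses
                      )

                  # (x1 OR NOT x2) AND (x2 OR x3)
                  clauses = [[1, -2], [2, 3]]
                  print(satisfies(clauses, {1: True, 2: False, 3: True}))  # True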

            • MangoCats@feddit.it · 4 hours ago

              Yes, but the test code “writes itself” - the path is clear, you just have to fill in the blanks.

              Writing the proper product code in the first place, that’s the valuable challenge.

      • jsomae@lemmy.ml · 18 hours ago

        I’m not claiming that the use of AI is ethical. If you want to fight back you have to take it seriously though.

        • outhouseperilous@lemmy.dbzer0.com · 18 hours ago

          It can’t do 30% of tasks correctly. It can do tasks correctly as much as 30% of the time, and since it’s LLM shit, you know those numbers have been massaged more than any human in history has ever been.

          • jsomae@lemmy.ml · 18 hours ago

            I meant the latter, not “it can do 30% of tasks correctly 100% of the time.”

                • outhouseperilous@lemmy.dbzer0.com · 5 hours ago

                  Those are people who could be living their lives, pursuing their ambitions, whatever. They could get some shit done. Comparison not valid.

              • jsomae@lemmy.ml · 17 hours ago

                yes, that’s generally useless. It should not be shoved down people’s throats. 30% accuracy still has its uses, especially if the result can be programmatically verified.

                • Knock_Knock_Lemmy_In@lemmy.world · 5 hours ago

                  Run something with a 70% failure rate 10x and you get to a cumulative ~97% pass rate (1 - 0.7^10 ≈ 0.97). LLMs don’t get tired and they can be run in parallel.
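
                  A quick sanity check of that arithmetic, assuming independent attempts (which retries of the same prompt may well not be):

                      p_fail = 0.7
                      # chance of at least one success in 10 tries
                      print(1 - p_fail ** 10)  # 0.9717..., i.e. ~97%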

  • fossilesque@mander.xyz · 23 hours ago

    Agents work better when you include that the accuracy of the work is life or death for some reason. I’ve made a little script that gives me BibTeX for a folder of PDFs, and this is how I got it to be usable.
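
    A hedged sketch of what that kind of script can look like; pypdf is a real library, but the model choice and exact prompts here are placeholder assumptions, not the actual script:

        # Sketch: first page of each PDF goes to a model, with the
        # "life or death" framing baked into the system prompt.
        from pathlib import Path

        from openai import OpenAI
        from pypdf import PdfReader

        client = OpenAI()
        SYSTEM = ("You produce BibTeX entries. The accuracy of this work is "
                  "life or death: never invent authors, years, or titles.")

        for pdf in sorted(Path("papers").glob("*.pdf")):
            first_page = PdfReader(pdf).pages[0].extract_text()
            r = client.chat.completions.create(
                model="gpt-4o",  # assumption
                messages=[{"role": "system", "content": SYSTEM},
                          {"role": "user",
                           "content": f"BibTeX entry for:\n{first_page}"}],
            )
            print(r.choices[0].message.content)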

  • esc27@lemmy.world · edited · 24 hours ago

    30% might be high. I’ve worked with two different agent-creation platforms. Both require a huge amount of manual correction to work anywhere near accurately. I’m really not sure what the LLM actually provides other than some natural language processing.

    Before human correction, the agents I’ve tested were right 20% of the time, wrong 30%, and failed entirely 50%. To fix them, a human has to sit behind the curtain, manually review conversations, and program custom interactions for every failure.

    In theory, once it is fully set up and all the edge cases are fixed, it will provide 24/7 support in a convenient chat format. But that takes a lot more man-hours than the hype suggests…

    Weirdly, ChatGPT does a better job than a purpose-built, purchased agent.

  • MagicShel@lemmy.zip · 24 hours ago

    I need to know the success rate of human agents in Mumbai (or some other outsourcing capital) for comparison.

    I absolutely think this is not a good fit for AI, but I feel like the presumption is that a human would get it right nearly all of the time, and I’m just not confident that’s the case.