• pelespirit@sh.itjust.works
    7 days ago

    Not to help the AI companies, but why don’t they program them to look up math programs and outsource chess to other programs when they’re asked for that stuff? It’s obvious they’re shit at it, so why do they answer anyway? It’s because they’re programmed by know-it-all programmers, isn’t it?

    • CileTheSane@lemmy.ca
      6 days ago

      why don’t they program them to look up math programs and outsource chess to other programs when they’re asked for that stuff?

      Because the AI doesn’t know what it’s being asked; it’s just an algorithm guessing what the next word in a reply is. It has no understanding of what the words mean.

      “Why doesn’t the man in the Chinese room just use a calculator for math questions?”
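      The “guessing the next word” point can be sketched with a toy bigram model — this is a drastic simplification of a real LLM, but the principle is the same: pure statistics over which word tends to follow which, with no understanding involved.

      ```python
      from collections import Counter, defaultdict

      # Toy sketch of "just guessing the next word": count which word
      # followed which in the training text, then predict the most
      # frequent follower. No meaning, just statistics.
      text = "the cat sat on the mat the cat ate the fish".split()

      follows = defaultdict(Counter)
      for prev, nxt in zip(text, text[1:]):
          follows[prev][nxt] += 1

      def next_word(prev):
          # Pick the most common word seen after `prev`
          return follows[prev].most_common(1)[0][0]

      print(next_word("the"))  # "cat" — it followed "the" most often
      ```

      Ask this model a math question and it will still happily emit whatever word most often followed your words — which is exactly the failure mode being described.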

    • fmstrat@lemmy.nowsci.com
      7 days ago

      This is where MCP (Model Context Protocol) comes in. It’s a protocol that lets LLMs call standard tools. Basically, the LLM figures out which tool to use from the context, fills in the parameters from those the MCP server says are available, sends the JSON, and parses the response.
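      The JSON side of that exchange can be sketched roughly like this — MCP uses JSON-RPC 2.0 under the hood, though the `calculator` tool name and its `expression` argument here are made up for illustration:

      ```python
      import json

      # Rough sketch of an MCP-style tool call. The model picks a tool
      # from the list the server advertises and fills in the arguments;
      # the client wraps it in a JSON-RPC request and sends it off.
      def build_tool_call(tool_name, arguments, call_id=1):
          """Build a JSON-RPC 2.0 request in the shape of MCP's tools/call."""
          return json.dumps({
              "jsonrpc": "2.0",
              "id": call_id,
              "method": "tools/call",
              "params": {"name": tool_name, "arguments": arguments},
          })

      # e.g. the model decides to outsource arithmetic to a calculator tool
      request = build_tool_call("calculator", {"expression": "2 + 2"})
      print(request)
      ```

      The server’s JSON response then gets fed back into the model’s context, so the final answer comes from the tool rather than from next-word guessing.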

    • veroxii@aussie.zone
      7 days ago

      They are starting to do this. Most new models support function calling and can generate code to come up with math answers, etc.
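      The idea behind “generate code for math answers” is simple: instead of predicting digits token by token, the model emits an arithmetic expression and a deterministic evaluator computes it. A minimal sketch of such an evaluator (restricted so it can’t run arbitrary code):

      ```python
      import ast
      import operator

      # Only plain arithmetic operators are allowed.
      OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
             ast.Mult: operator.mul, ast.Div: operator.truediv,
             ast.Pow: operator.pow}

      def safe_eval(expr):
          """Evaluate an arithmetic expression without executing arbitrary code."""
          def walk(node):
              if isinstance(node, ast.Expression):
                  return walk(node.body)
              if isinstance(node, ast.BinOp):
                  return OPS[type(node.op)](walk(node.left), walk(node.right))
              if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                  return node.value
              raise ValueError("unsupported expression")
          return walk(ast.parse(expr, mode="eval"))

      # Exact every time, unlike next-word guessing
      print(safe_eval("12345 * 6789"))  # 83810205
      ```

      The model only has to get the expression right; the arithmetic itself is outsourced.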

    • ImplyingImplications@lemmy.ca
      7 days ago

      why don’t they program them

      AI models aren’t programmed traditionally. They’re generated by machine learning. Essentially the model is given test prompts and then given a rating on its answer. The model’s calculations will be adjusted so that its answer to the test prompt will be closer to the expected answer. You repeat this a few billion times with a few billion prompts and you will have generated a model that scores very high on all test prompts.
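      That rate-and-adjust loop can be sketched in miniature — here a “model” with a single adjustable weight is nudged so its answers move closer to the expected ones, which is the same principle real training applies to billions of parameters:

      ```python
      # Toy sketch of the training loop described above: gradient descent
      # on a one-weight model. Each pass, the prediction is compared to
      # the expected answer and the weight is adjusted to reduce the gap.
      def train(prompts, targets, lr=0.01, steps=1000):
          w = 0.0  # the model's single adjustable parameter
          for _ in range(steps):
              for x, y in zip(prompts, targets):
                  pred = w * x            # model's current answer
                  error = pred - y        # how far off it was (the "rating")
                  w -= lr * error * x     # adjust toward the expected answer
          return w

      # Learns y = 2x from three examples
      w = train([1, 2, 3], [2, 4, 6])
      print(round(w, 3))  # 2.0
      ```

      The catch, as the comment notes, is that the model only gets good at whatever was in the test prompts — anything outside them is not covered.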

      Then someone asks it how many R’s are in strawberry and it gets the wrong answer. The only way to fix this is to add that as a test prompt and redo the machine learning process, which takes an enormous amount of time and computational power each time it’s done, only for people to once again quickly find some kind of prompt it doesn’t answer well.

      There are already AI models that play chess incredibly well. Using machine learning to solve a complex problem isn’t the issue. It’s trying to get one model to be good at absolutely everything.

    • NoiseColor@lemmy.world
      7 days ago

      …or a simple counter to count the r’s in strawberry. Because that’s more difficult than one might think, and they are starting to do this now.
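      The counter itself is a one-liner in ordinary code — it’s only hard for an LLM because the model sees tokens, not individual letters:

      ```python
      # Counting letters is trivial once you operate on characters
      # instead of tokens.
      def count_letter(word, letter):
          return word.lower().count(letter.lower())

      print(count_letter("strawberry", "r"))  # 3
      ```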

    • four@lemmy.zip
      7 days ago

      I think they’re trying to do that. But AI can still fail at that lol

    • rebelsimile@sh.itjust.works
      7 days ago

      Because they’re fucking terrible at designing tools to solve problems, and they’re obviously less and less good at pretending this is an omnitool that can do everything with perfect coherency (and if it isn’t working right, it’s because you’re not believing or paying hard enough).

      • MrJgyFly@lemmy.world
        7 days ago

        Or they keep telling you that you just have to wait it out. It’s going to get better and better!