• TheDannysaur@lemmy.world
    7 months ago

    People don’t read. And before you downvote, it’s still bad.

    It was not a human system that was posing as AI. It was a shitty AI that needed a lot of human intervention.

    Yes, it’s still shit. Yes it’s still a problem with how they implemented it and how they pitched it.

    But there needs to be a higher level of criticism. Saying “it was just human labor the whole time” is flatly incorrect. The better criticism is the truth… They made AI so shitty that it needed a bunch of human interaction, and their product was really really bad.

    I’ve heard so many people state this as “there wasn’t any AI, it was just humans watching cameras.” And the false narrative distracts from the real story.

    People pretend the truth doesn’t matter, and will retreat to “well even if it was AI it was so bad so I was still basically right.” and that’s a problem.

  • queermunist she/her@lemmy.ml
    7 months ago

    So-called “AI” is usually just a scam to hide human labor. The reality is this tech is not usable without human curation, often requiring even more human labor than just doing things the old-fashioned way.

    When is this bubble going to pop?

    • jaybone@lemmy.world
      7 months ago

      I can’t stand seeing all these mainstream news stories about it all the time either, with tech-ignorant news anchors talking about it. It just keeps pumping up the bubble. I worry that rather than pop, it will just become a new buzzword that’s here to stay. (AI was always a thing, but what we have now, these LLMs, is not really what we traditionally referred to as AI in sci-fi and traditional media.)

    • SparrowRanjitScaur@lemmy.world
      7 months ago

      This is not true at all. Transformer models like ChatGPT have already proven to be immensely useful and helpful in the professional world. They’re not capable of doing jobs entirely on their own yet, but as tools that help humans do their jobs they’re great.

      • queermunist she/her@lemmy.ml
        7 months ago

        Reread what I said. I said it’s not usable without human curation, i.e. what humans do when they use it as a tool to do their job.

        • Ookami38@sh.itjust.works
          7 months ago

          You also said “often requiring even more human labor than doing things the old fashioned way” — I dare say that’s the part they were countering.

          • queermunist she/her@lemmy.ml
            7 months ago

            I didn’t say that it always requires more human labor! Stable diffusion, specifically, seems like it can really reduce the amount of human labor needed to generate art. It can’t eliminate it, but it can definitely turn art from a skill that requires 10,000 hours to master into a skill that maybe requires 10 hours. Industrial de-skilling, in other words.

            But that’s the best case scenario. In many cases AI doesn’t help at all and just requires human workers to fix it as it constantly fucks up, and it doesn’t seem to get any better.

    • justdoitlater@lemmy.world
      7 months ago

      I think it’s a bit more complex than that: you’re right, but only in the beginning… after the AI is trained, you don’t need the cheap labor anymore. Which imho makes it even worse.

      • queermunist she/her@lemmy.ml
        7 months ago

        Marketing hype.

        No amount of training can ever eliminate the need for human curation. This is not AI, it’s a jumped-up pattern recognition engine. False positives and false negatives are inevitable without a consciousness to evaluate them. Hallucinations are an intractable problem that cannot be solved, regardless of training, and so all these AIs can ever be is tools for human workers.

        It’ll take something totally different and new.

        • jsomae@lemmy.ml
          7 months ago

          LLMs may fabricate things now and then, but so do humans. I am not convinced the problem is intractable.

          • queermunist she/her@lemmy.ml
            7 months ago

            You have no reason to believe the problem can be solved.

            It’s almost religious. You just have faith in technology you don’t understand.

            Keep praying to your machine spirits, maybe the Omnissiah will deliver the answer!

            • jsomae@lemmy.ml
              7 months ago

              I have no reason to believe the problem can’t be solved, except insofar as it hasn’t been solved yet (but LLMs only recently took off). So without a good reason to believe it’s intractable, I’m at worst 50/50 on whether it can be solved. Faith in the machine spirit would be if I had an unreasonably high expectation that LLMs can be made not to hallucinate, like 100%.

              My expectation is around 70% that it’s solvable.

              • queermunist she/her@lemmy.ml
                7 months ago

                You have no reason to think it can be solved. You’re just blindly putting your faith in something you don’t understand and making up percentages to make yourself sound less like a religious nut.

                • jsomae@lemmy.ml
                  7 months ago

                  If I have no reason to believe X and no reason not to believe X, then the probability of X would be 50%, no?

        • justdoitlater@lemmy.world
          7 months ago

          I understand what you are saying, but I don’t agree. Look at the examples we already have: I use ChatGPT at work to code; it has limitations, but it works without any human curation. Check Midjourney as well: it has great accuracy, and if you ask for a picture of dogs, it will create one without any human intervention. Yes, it took a long time and human effort to train them, but in the end that isn’t needed anymore for the majority of cases. The hallucinations and inaccurate results you mention do happen, yes, but they’re becoming fringe cases, fewer and fewer. It’s true that it’s not the miracle tool that marketing says it is — that’s marketing — but it’s much more dangerous than it looks and will definitely substitute a lot of workers. It already does.

  • Jesus@lemmy.world
    7 months ago

    I was under the impression that mechanical Turks were powered by Turkish people, not Indians.