• sugar_in_your_tea@sh.itjust.works · 6 months ago

    Warning, here’s the cynic in me coming out.

    The NY Times has a vested interest in discrediting AI, specifically LLMs (which is what they seem to be referring to): journalism is a prime target, since it’s fairly easy to get LLMs to generate believable articles. So here’s how I break down this article:

    1. Lean on Betteridge’s law of headlines to cast doubt on the long-term prospects of LLMs
    2. Further the doubt by pointing out people don’t trust them
    3. Present them as a credible threat later in the article
    4. Juxtapose LLMs and cryptocurrencies while technically dismissing such a link (then why bring it up?)
    5. Leave the conclusion up to the reader

    I learned nothing new about current or long term LLM viability other than a vague “they took our jerbs!” emotional jab.

    Or maybe my tinfoil hat is on too tight.

  • sudo42@lemmy.world · 6 months ago

      The NY Times has a vested interest in discrediting AI, specifically LLMs (which is what they seem to be referring to): journalism is a prime target, since it’s fairly easy to get LLMs to generate believable articles.

      The writers and editors may be against AI, but I’m betting the owners of the NYT would LOVE to have an AI that would simply re-phrase “news” (ahem) “borrowed” from other sources. The second upper management thinks this is possible, the humans will be out on their collective ears.

    • abhibeckert@lemmy.world · 6 months ago

        I’m betting the owners of the NYT would LOVE to have an AI that would simply re-phrase “news” (ahem) “borrowed” from other sources

        No way. The NYT depends on its ability to produce high-quality exclusive content that you can’t access anywhere else.

        In your hypothetical future, NYT’s content would be mediocre and no better than a million other news services. There’s no profit in that future.

  • QuadratureSurfer@lemmy.world · 6 months ago

      This would actually explain a lot of the negative AI sentiment I’ve seen that’s suddenly going around.

      Some YouTubers have hopped on the bandwagon as well. There was a video posted the other day where a guy attempted to discredit AI companies overall by saying their technology is faked. A lot of users were agreeing with him.

      He then proceeded to point out stories about how Copilot/ChatGPT output information that was very similar to a particular travel website. He also pointed out how Amazon Fresh stores required a large number of outsourced workers to verify shopping cart totals (implying that there was no AI model at all and not understanding that you need workers like this to actually retrain/fine-tune a model).

    • BakerBagel@midwest.social · 6 months ago

        I would say that 90% of AI companies are fake. They are just running API calls to GPT-3 and calling themselves “AI” to get investors. Amazon even has an entire business to help companies pretend their AI works by crowdsourcing cheap labor to review data.

      • QuadratureSurfer@lemmy.world · 6 months ago

          I don’t think that “fake” is the correct term here. I agree a very large portion of companies are just running API calls to ChatGPT and then patting themselves on the back for being “powered by AI” or some other nonsense.

          Amazon even has an entire business to help companies pretend their AI works by crowdsourcing cheap labor to review data.

          This is exactly the point I was referring to before. Just because Amazon is crowdsourcing cheap labor to back up their AI doesn’t mean the AI is “fake”. Getting an AI model to work well takes a lot of man-hours to continually train and improve it, as well as to verify that it is performing well.

          Amazon was doing something new (with their shopping cart AI) that no model had been trained on before. Training off of demo/test data doesn’t get you the kind of data that you get when you actually put it into a real world environment.
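          The retraining loop described above can be sketched roughly like this (all names, thresholds, and data here are hypothetical, not Amazon’s actual pipeline): low-confidence predictions get routed to human reviewers, and their corrections become fresh training examples.

```python
# Illustrative human-in-the-loop routing: predictions below a confidence
# threshold go to a reviewer, and corrections become new training data.
REVIEW_THRESHOLD = 0.90  # hypothetical cut-off

def route_prediction(item_id, label, confidence, accepted, review_queue):
    """Accept confident predictions; queue the rest for human review."""
    if confidence >= REVIEW_THRESHOLD:
        accepted.append((item_id, label))
    else:
        review_queue.append((item_id, label, confidence))

def apply_correction(item_id, corrected_label, training_examples):
    """A reviewer's correction becomes a new labelled training example."""
    training_examples.append((item_id, corrected_label))

# Made-up cart predictions: one confident, one sent to review
accepted, queue, examples = [], [], []
route_prediction("cart-17", "2x milk", 0.97, accepted, queue)
route_prediction("cart-18", "1x bread", 0.55, accepted, queue)
apply_correction("cart-18", "2x bread", examples)
```

The point is that the human labor isn’t hiding the absence of a model; it’s the mechanism that makes the next version of the model better.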

          In the end, it looks like additional advancements are needed before a model like this can be reliable. Even then, someone should be asking whether AI is really necessary here when more reliable methods are available.

        • erwan@lemmy.ml · 6 months ago

            It might not be fake but companies built on top of the OpenAI API don’t bring significant value and won’t last.

            If you already have a solid product and want to add some AI capabilities, then the OpenAI API is great. If it’s your only value proposition, not so much.
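            To illustrate how thin such a wrapper can be: the entire “product” is often just a prompt template around someone else’s model. A minimal sketch, with the LLM call injected as a stub function so the example stays self-contained (the product idea and all names are invented):

```python
# A hypothetical "AI product" whose core is just a prompt template.
# complete_fn stands in for a call to a hosted LLM API (e.g. OpenAI's);
# injecting it keeps this sketch self-contained and testable.
def summarize_ticket(ticket_text, complete_fn):
    """Wrap a support ticket in a fixed prompt and ask the model."""
    prompt = (
        "You are a support assistant. Summarize this ticket "
        "in one sentence:\n\n" + ticket_text
    )
    return complete_fn(prompt)

# Stub model for demonstration; a real deployment would pass a function
# that calls the provider's completion endpoint instead.
def stub_llm(prompt):
    return "STUB SUMMARY"

result = summarize_ticket("The printer is on fire again.", stub_llm)
```

Everything of value here lives on the other side of `complete_fn`, which is exactly why such a company has no moat.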

        • sugar_in_your_tea@sh.itjust.works · 6 months ago

            I honestly don’t understand why they didn’t just use RFID for the grocery stores. Or maybe they are, idk, but it’s cheap and doesn’t require much training to apply. That way you can verify the AI without needing much labor at all.
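            The cross-check being suggested could look roughly like this (item names and tag data are made up): compare the vision model’s predicted cart against the set of RFID reads and flag only the disagreements for a human.

```python
def verify_cart(predicted_items, scanned_tags, tag_to_item):
    """Compare an AI-predicted cart against RFID tag reads.

    Returns the set of disagreements; an empty set means the
    prediction is corroborated without any human labour.
    """
    scanned_items = {tag_to_item[t] for t in scanned_tags if t in tag_to_item}
    return set(predicted_items) ^ scanned_items  # symmetric difference

# Hypothetical tag registry and a cart where vision and RFID disagree
tags = {"tag01": "milk", "tag02": "bread"}
mismatch = verify_cart(["milk", "eggs"], ["tag01", "tag02"], tags)
# "eggs" was predicted but not scanned; "bread" was scanned but not predicted
```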

            Then again, I suppose that point wasn’t to make a grocery service, but an optical AI service to sell to others.

            That said, a lot of people don’t seem to understand how AI works, and the natural response to not understanding something is FUD.

          • abhibeckert@lemmy.world · 6 months ago

              Unless you pay for expensive tags (like $20 per tag) or use really short range scanners (e.g. a hotel key), RFID tags don’t work reliably enough.

              Anti-theft RFID tags, for example, won’t catch every single thief who walks out the door with a product. But if a thief comes back again and again, eventually one of the tags will trigger.

              But even unreliable tags are a bit expensive, which is why they are only used on high margin and frequently stolen products (like clothing).

              All the self serve stores in my country just use barcodes. They are dirt cheap and work reliably at longer range than a cheap RFID tag. Those stores use AI to flag potential thieves but never for purchases (for example recently I wasn’t allowed to pay for my groceries until a staff member checked my backpack, which the AI had flagged as suspicious).
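              Part of why barcodes are so cheap and reliable is the built-in check digit: EAN-13, the common retail format, catches any single misread digit. A quick sketch of the standard check-digit computation:

```python
def ean13_check_digit(first12):
    """Compute the EAN-13 check digit from the first 12 digits.

    Digits in odd positions (1st, 3rd, ...) are weighted 1 and even
    positions weighted 3; the check digit brings the weighted total
    up to a multiple of 10.
    """
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# "4006381333931" is a valid EAN-13: its final digit matches the checksum.
assert ean13_check_digit("400638133393") == 1
```

A scanner that reads a code whose check digit doesn’t match simply rejects the read, which is why misscans at the checkout are rare without any AI involved.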

        • BakerBagel@midwest.social · 6 months ago

            Mechanical Turk is a service that Amazon sells to other companies that are trying to pass themselves off as AI companies. The whole market is full of people making wild claims about their product that aren’t true, then desperately searching for the cheapest labor to actually do the work.

            I’m not actually a nuclear fission company if I take millions in R&D investment, pay me and my buddy half of it, and then pay a bunch of crackheads to pour diesel into an electric generator.

          • QuadratureSurfer@lemmy.world · 6 months ago

              After reading through that wiki, that doesn’t sound like the sort of thing that would work well for what AI is actually able to do in real-time today.

              Contrary to your statement, Amazon isn’t selling this as a means to “pretend” to do AI work, and there’s no evidence of this on the page you linked.

              That’s not to say that this couldn’t be used to fake an AI, it’s just not sold this way, and in many applications it wouldn’t be able to compete with the already existing ML models.

              Can you link to any examples of companies making wild claims about their product where it’s suspected that they are using this service? (I couldn’t find any after a quick Google search… but I didn’t spend too much time on it).

              I’m wondering if the misunderstanding here is based on the sections related to AI work? The kind of work you would do with Turkers is the kind needed to prepare data for training a machine learning model: labelling images, transcribing words from images, or (to put it in a way most of us have already experienced) solving captchas that ask you to find the traffic lights (so you can help train a self-driving-car model).
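              Since individual Turker answers are noisy, labelling pipelines typically collect several answers per item and aggregate them before training on the result. A simplified majority-vote sketch (the threshold and the example answers are illustrative):

```python
from collections import Counter

def majority_label(answers, min_agreement=0.6):
    """Aggregate crowd answers for one item by majority vote.

    Returns the winning label, or None if agreement is too low
    (in which case the item would be re-posted or expert-reviewed).
    """
    label, count = Counter(answers).most_common(1)[0]
    return label if count / len(answers) >= min_agreement else None

# Hypothetical answers from three workers labelling one image
assert majority_label(["traffic light", "traffic light", "car"]) == "traffic light"
assert majority_label(["car", "bus", "bike"]) is None
```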