• besselj@lemmy.ca · 5 days ago

    I still don’t understand what Anthropic is trying to achieve with all of these stunts showing that their LLMs go off the rails so easily. Is it for gullible investors? Why would a consumer want to give them money for something so unreliable?

    • LostWanderer@fedia.io · 4 days ago

      A fool is ever eager to give their money to something that doesn’t work as intended, provided the surrounding image has a mystique or resonates with their internal vision of what an ‘AI’ is. It’s pure marketing on Anthropic’s part; they believe any press is good press, and it makes investors drool over a supposedly refined AI. Even Apple has shown through its many technical papers that current AI is mere ‘smoke and mirrors’, yet for some odd reason they are still developing ‘Apple Intelligence’. They are huffing farts just as much as Anthropic is; both have to constantly pull stunts to gaslight their investors into believing that ‘AI’ is going to become a viable product that will make money, or allow them to get rid of human workers so their bottom line looks flush (spoiler alert: they have to rehire people, as AI can’t do many of the things a live person with training can).

      The reason this shit is shoved into everything is that it doesn’t have good general use cases; it’s also about collecting usage data from people. Most people don’t give money to AI companies; only those who have drunk the Kool-Aid do, hope-posting and gaslighting others into believing in the current or future capabilities of ‘AI’. LLMs are genuinely great at specific things, like collating fine-tuned databases and making them highly searchable by specialists in a field. But as always, the techbros want to do too much: they need a ‘wonder tool’ that inevitably fails, and then these lying techbros have to quickly figure out the next scam.

    • audaxdreik@pawb.social · 5 days ago

      The latest We’re In Hell revealed a new piece of the puzzle to me: Symbolic vs. Connectionist AI.

      As a layman I want to be careful about overstepping the bounds of my own understanding, but as someone who has followed this closely for decades, read a lot of sci-fi, and dabbled in computer science, it’s always been kind of clear to me that AI would be more symbolic than connectionist. Of course it’s going to be a bit of both, but there really are a lot of people out there who believe in AI from the movies: that one day it will just “awaken” once a certain number of connections are made.

      Cons of Connectionist AI: Interpretability: Connectionist AI systems are often seen as “black boxes” due to their lack of transparency and interpretability.

      Transparency and accountability are seen as negatives for a large number of the applications AI is currently being pushed into. The opacity is just THE PURPOSE.

      Even taking a step back from the apocalyptic killer AI mentioned in the video, we see the same in healthcare. The system is beyond us, smarter than us, processing larger quantities of data and making connections our feeble human minds can’t comprehend. We don’t have to understand it, we just have to accept its results as infallible and we are being trained to do so. The system has marked you as extraneous and removed your support. This is the purpose.

    • cubism_pitta@lemmy.world · 5 days ago (edited)

      People who don’t understand read these articles and think Skynet. People who know their buzzwords think AGI.

      Fortune isn’t exactly renowned for its technology journalism.

    • Catoblepas@piefed.blahaj.zone · 5 days ago (edited)

      I think part of it is that they want to gaslight people into believing they have actually achieved AI (as in, intelligence that is equivalent to and operates like a human’s) and that these are signs of emergent intelligence, not their product flopping harder than a sack of mayonnaise on asphalt.