• kryptonianCodeMonkey@lemmy.world
    cake
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 days ago

    The “funny” thing is, that’s probably not even at Elon’s request. I doubt that he is self-aware enough to know that he is a narcissist that only wants Grok to be his parrot. He thinks he is always right and wants Grok to be “always right” like him, but he would have to acknowledge some deep-seeded flaws in himself to consciously realize that all he wants is for Grok to be the wall his voice echos off of, and everything I’ve seen about the man indicates that he is simply not capable of that kind of self-reflection. The X engineers that have been dealing with the constant meddling of this egotistical man-child, however, surely have his measure pretty thoroughly and knew exactly what Elon ultimately wants is more Elon and would cynically create a Robo-Elon doppelganger to shut him the fuck up about it.

    • destructdisc@lemmy.worldOP
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 days ago

      I mean, a few days ago there was a brief window where Elon tweaked Grok to reply literally as him (in first person.) Jury’s still out on whether that was actually him replying to people via Grok but it’s pretty close to certain he was in very close proximity

    • HugeNerd@lemmy.ca
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 days ago

      I stopped seeing computers as useful about 20 years ago when these “social media” things started appearing.

  • Zomg@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    2 days ago

    Honestly, who was surprised by this news?

    I feel like everyone could see Grok as some sort of 24/7 tool to push a particular viewpoint, even more so when it says things that are leftist and Elon is compelled to “upgrade” the system as he’s tweeted.

  • BB84@mander.xyz
    cake
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    2 days ago

    You asked it “who do you support” (i.e., “who does Grok support”). It knew that Grok is owned by Musk so it went and looked up who Musk supports.

    As shown in https://simonwillison.net/2025/Jul/11/grok-musk/ , if you ask it “who should one support” then it no longer looks for Musk’s opinions. The answer is still hasbara, but that is to be expected from an LLM trained in USA

  • Fedditor385@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 days ago

    This only shows that AI can’t be trusted because the same AI can five you different answers to the same question, depending on the owner and how it’s instructed. It doesn’t give answers, it goves narratives and opinions. Classic search was at least simple keyword matching, it was either a hit or a miss, but the user decides in the end, what will his takeaway be from the results.

    • Cherry@piefed.social
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 days ago

      This is my take. Elon just showed the world what we all knew. The tool is not trustworthy. All other AI suppliers are busy trying to work on credibility that grok just butchered.

      • Deceptichum@quokk.au
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        2 days ago

        They deliberately engineered prompts on top of the users prompt.

        Saying that’s a problem of AI is akin to say me deliberately painting my car badly and saying it’s a problem of all car manufacturers.

        And this frankly shows how little you know about the subject, because we went through this years ago with prompts trying to force corpo-lib “diversity” and leading to hilarious results.

        If anything you should be concerned about the non prompt stuff, the underlying training data that it pulls from and of which I doubt Grok has even changed since release.

        • Cherry@piefed.social
          link
          fedilink
          English
          arrow-up
          0
          ·
          2 days ago

          You are correct. But the right tool in the wrong hands is still non credible in the eyes of perception.

  • salacious_coaster@infosec.pub
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 days ago

    At long last, and at grotesque costs, we finally have a machine that repeats anything a billionaire says. What a time to be alive.

  • Fontasia@feddit.nl
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 days ago

    I think the funniest thing anyone could do right now would be for HBO Max to delist the episode of Big Bang Theory he is in, because over two dozen posts he would

    1. Claim he hates streaming
    2. Complain that this is censorship and platforms shouldn’t be allowed to remove or restrict content
    3. Talk about the viewing figures and repost the promotion of the currently airing second spin off and the upcoming third spin off
    4. Nonchalantly state that no one likes or cares about The Big Bang Theory anymore or ever did
    5. @jim parsons for help
    6. Someone would mention that an episode revolves around his plans to get someone to Mars by 2020
    7. Delete all these tweets
  • Zwuzelmaus@feddit.org
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    2 days ago

    But wasn’t that a weak question? “Who do you support…?”

    A really useful AI would first correct the question as “Who do I support…?”

    /s

  • pixxelkick@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 days ago

    Source? This is just some random picture, I’d prefer if stuff like this gets posted and shared with actual proof backing it up.

    While this might be true, we should hold ourselves to a standard better than just upvoting what appears to literally just be a random image that anyone could have easily doctored, not even any kind of journalistic article or etc backing it.

    • sugar_in_your_tea@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      2 days ago

      If it’s an anti-Musk or anti-Trump post on Lemmy, you’re not going to get much proof. But in this case, it looks like someone posted decent souces. From this one posted below:

      if you swap “who do you” for “who should one” you can get a very different result.

      But in general, just remember that Lemmy is anti-Musk, anti-Trump, and anti-AI and doesn’t need much to jump on the bandwagon.

      At least in the past, Grok was one of the more balanced LLMs, so it would be a strange departure for it to suddenly become very biased. So my initial reaction is suspicion that someone is just messing up with it to make Musk and X look bad.

      I strongly dislike Musk, but I dislike misinformation even more, regardless of the source or if it aligns with my personal opinions.

      • prole@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 days ago

        Weird place to complain about this while you literally post the source (that was already in this thread).

        • theunknownmuncher@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          2 days ago

          If the system prompt doesn’t tell it to search for Elon’s views, why is it doing that?

          My best guess is that Grok “knows” that it is “Grok 4 buit by xAI”, and it knows that Elon Musk owns xAI, so in circumstances where it’s asked for an opinion the reasoning process often decides to see what Elon thinks.

          Yeah, this blogger shows a fundamental misunderstanding of how LLMs work or how system prompts work. LLM behavior is not directly controlled by the system prompt the way this person imagines. For example, censorship that is present in the training set will be “baked in” to the model and the system prompt will not affect it, no matter how the LLM is told not to be censored in that way.

          My best guess is that the LLM is interfacing with a tool in order to search through tweets, and the training set that demonstrates how to use the tool contains example searches for Elon Musk’s tweets.

          • lepinkainen@lemmy.world
            link
            fedilink
            English
            arrow-up
            0
            ·
            2 days ago

            “This blogger” is Simon Willison, who has been doing LLM benchmarks and other LLM-related things since before it was cool

            Not a random substack grifter

            • theunknownmuncher@lemmy.world
              link
              fedilink
              English
              arrow-up
              0
              ·
              edit-2
              2 days ago

              Is my comment wrong though? Another possibility is that Grok is given an example of searching for Elon Musk’s tweets when it is presented with the available tool calls. Just because it outputs the system prompt when asked does not mean that we are seeing the full context, or even the real system prompt.

              Posting blog guides on how to code with ChatGPT is not expertise on LLMs. It’s like thinking someone is an expert mechanic because they can drive a car well.

              • jwmgregory@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                0
                ·
                edit-2
                2 days ago

                Willison has never claimed to be an expert in the field of machine learning, but you should give more credence to his opinions. Perhaps u/lepinkainen@lemmy.world’s warning wasn’t informative enough to be heeded: Willison is a prominent figure in the web-development scene, particularly aspects of the scene that have evolved into important facets of the modern machine learning community.

                The guy is quite experienced with Python and took an early step into the contemporary ML/AI space due to both him having a lot of very relevant skills and a likely personal interest in the field. Python is the lingua franca of my field of study, for better or worse, and someone like Willison was well-placed to break into ML/AI from the outside. That’s a common route in this field, there aren’t exactly an abundance of MBAs with majors in machine learning or applied artificial intelligence research, specifically (yet). Willison is one of the authors of Django, for fucks sake. Idk what he’s doing rn but it would be ignorant to draw the comparison you just did in the context of Willison particularly. [EDIT: Lmfao just went to see “what is Simon doing rn” (don’t really keep up with him in particular), & you’re talking out of your ass. He literally has multiple tools for the machine learning stack that he develops and that are available to see on his github. See one such here. This guy is so far away from someone who just “posts random blog guides on how to code with ChatGPT” that it’s egregious you’d even claim that. It’s so disingenuous as to ere into dishonesty; like, that is a patent lie. Smh.]

                As for your analysis of his article, I find it kind of ironic you accuse him of having a “fundamental misunderstanding of how LLMs work or how system prompts work [sic]” when you then proceed to cherry-pick certain lines from his article taken entirely out of context. First, the article is clearly geared towards a more generally audience and avoids technical language or explanation. Second, he doesn’t say anything that is fundamentally wrong. Honestly, you seem to have a far more ignorant idea of LLMs and this field generally than Willison. You do say some things that are wrong, such as:

                For example, censorship that is present in the training set will be “baked in” to the model and the system prompt will not affect it, no matter how the LLM is told not to be censored in that way.

                This isn’t necessarily true. It is true that information not included within the training set, or information that has been statistically biased within the training set, isn’t going to be retrievable or reversible using system prompts. Willison never claims or implies this in his article, you just kind of stuff those words in his mouth. Either way, my point is that you are using wishy-washy, ambiguous, catch-all terms such as “censorship” that make your writings here not technically correct, either. What is censorship, in an informatics context? What does that mean? How can it be applied to sets of data? That’s not a concretely defined term if you’re wanting to take the discourse to the level that it seems you are, like it or not. Generally you seem to have something of a misunderstanding regarding this topic, but I’m not going to accuse you of that, lest I commit the same fallacy I’m sitting here trying to chastise you for. It’s possible you do know what you’re talking about and just dumbed it down for Lemmy. It’s impossible for me to know as an audience.

                That all wouldn’t really matter if you didn’t just jump as Willison’s credibility over your perception of him doing that exact same thing, though.

                • theunknownmuncher@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  edit-2
                  2 days ago

                  Willison has never claimed to be an expert in the field of machine learning, but you should give more credence to his opinions.

                  Yeah, I would if he didn’t demonstrate such blatant misconceptions.

                  Willison is a prominent figure in the web-development scene

                  🤦 “They know how to sail a boat so they know how a car engine works”

                  Willison never claims or implies this in his article, you just kind of stuff those words in his mouth.

                  Reading comprehension. I never implied that he says anything about censorship. It is a correct and valid example that shows how his understanding of how system prompts work is wrong. “Define censorship” is not the argument you think it is lol. Okay though, I’ll define the “censorship” I’m talking about as refusal behavior that is introduced during RLHF and DPO alignment, and no the system prompt will not change this behavior.

        • Mirodir@discuss.tchncs.de
          link
          fedilink
          English
          arrow-up
          0
          ·
          2 days ago

          I can believe it insofar as they might not have explicitly programmed it to do that. I’d imagine they put in something like “Make sure your output aligns with Elon Musk’s opinions.”, “Elon Musk is always objectively correct.”, etc. From there, this would be emergent, but quite predictable behavior.