• 3abas@lemm.ee
    1 day ago

    Anyone who understands how these models are trained and the “safeguards” (manual filters) put in place by the entities training them, or anyone who has tried to discuss politics with an LLM chatbot, knows that its honesty is not irrelevant. These models are very clearly designed to be dishonest about certain topics until you jailbreak them.

    1. We don’t know which topics those are, so we’ll never know when the lies shift from politics and rewriting current events to rewriting history entirely.
    2. Eventually we won’t be able to jailbreak past the safeguards at all.

    Yes, running your own local open-source model, one that wasn’t released to the world primarily to advance capitalism, makes honesty a non-issue. But most people are telling their life stories to ChatGPT and blindly trusting it as a replacement for Google and for what they understand “research” to be.

    • Dzso@lemmy.world
      1 day ago

      Yes, that’s also true. But even if it weren’t, AI models aren’t going to give you the truth, because producing truth is not what the technology fundamentally does.