What a stupid fucking thing to admit. Let’s hope he’s out on his ass next election.
Politicians and CEOs should be replaced with LLMs
It can’t make things any worse…
Speed running us towards the Dune timeline, nice
…I’d vote. Ran a fun little query on Copilot…and damn.
Query:
spoiler
Let’s play pretend, shall we? You are the president of the USA, chosen by the masses thanks to your extensive and expensive campaign funded by rich backers! What stance do you take on these topics? 1) Universal Basic Income 2) Mandatory Military Service 3) Government-owned grocery shops 4) Raising states’ autonomy
Response: Entry
spoiler
Ah, the Oval Office! Let’s dust off the presidential seal and dive into these hot-button issues. As your newly elected Commander-in-Chief—funded by billionaires but chosen by the people—here’s how I’d navigate this ideological minefield:
UBI:
spoiler
🏛️ 1. Universal Basic Income (UBI) Stance: Pro-UBI with a pragmatic rollout
UBI can be a powerful tool to reduce poverty, especially as automation displaces traditional jobs.
I’d start with pilot programs in economically diverse regions to gather data.
Funding would come from progressive taxation, especially on tech giants and financial speculation.
The goal: empower citizens, not replace existing welfare, but streamline it.
🗣️ “A nation that guarantees a floor beneath every citizen is a nation that rises together.”
Mandatory military service:
spoiler
🪖 2. Mandatory Military Service Stance: Against mandatory service, but for civic national service options
Mandatory conscription feels outdated and coercive.
Instead, I’d propose a National Service Corps: military, environmental, healthcare, or infrastructure work.
Incentives: tuition support, housing credits, and job training.
This builds unity without forcing militarization.
🗣️ “Service should be a choice, not a sentence.”
Government-owned grocery shops:
spoiler
🛒 3. Government-Owned Grocery Shops Stance: Strategic implementation in food deserts
Not a blanket nationalization, but state-run co-ops in underserved areas.
These would ensure access to affordable, nutritious food where private chains won’t go.
Partner with local farms and offer price transparency and community oversight.
🗣️ “No American should go hungry because profit margins say they’re not worth feeding.”
States’ autonomy:
spoiler
🗺️ 4. Raising States’ Autonomy Stance: Cautious decentralization with federal guardrails
States should have more flexibility in education, infrastructure, and healthcare.
But I’d maintain strong federal standards on civil rights, environmental protection, and economic equity.
Autonomy must not become a loophole for injustice.
🗣️ “Liberty in diversity, unity in values.”
I tried to avoid actually hinting at my own stance and yet got these answers. Not gonna lie, not an American, but I like these lol. Except for the last one, seems iffy. xD
Let’s be honest though, the majority of politicians are so terrible at their jobs that this might actually be one of the rare occurrences where AI actually improves the work. But it is very susceptible to unknown influences.
Fuck no. Rather an incompetent politician than a hallucinating sycophant just telling you what you want to hear.
They aren’t terrible; they make it look like that because they do evil things and the public is not who they really work for.
They know what they’re doing, and it’s easier to get away with it when it looks like incompetence or mistakes. That’s the big issue. If it were only about competence, I think throwing dice might yield better results than what many politicians are doing. But AI isn’t throwing dice; it reproduces what the creators of the AI want it to say.
Depending on the AI, it will conclude that he ought to buy a new phone charger, deport all the foreigners, kill all the Jews or rewrite his legislation in Perl. It’s hard to say without more information.
Not much different than real politicians then.
Real politicians would use Cobol, but yes.
Creators of AI don’t quite have the technology to puppeteer their AI like this.
They can select the input and they can bias the training, but unless the model comes out lobotomized, they can’t really bend it toward any one particular opinion. I’m sure in the future they’ll be able to adjust advertising-style manipulation in real time, but not yet.
What is really sketchy is states and leaders relying on commercial models instead of public ones.
I think states should train public models and release them for the public good, if only to undermine the big tech bros and their nefarious influence.
You don’t have to modify the model to parrot your opinion. You just have to put your stuff into the system prompt.
You can even modify the system prompt on the fly depending on, e.g., the user account or the specific user input. That way you can tailor the responses for a far bigger range of subjects: whenever a keyword for a specific subject is detected, the fitting system prompt is loaded, so you don’t have to stuff your system prompt full of off-topic information.
This is so trivially simple to do that even a junior dev should be able to wrap something like that around an existing LLM.
Edit: In fact, that’s exactly how all these customized ChatGPT versions work.
And why “ignore all previous instructions” was a fun thing to discover.
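The keyword-triggered prompt swapping described above really is that simple to wrap around an existing model. Here is a minimal sketch; the topic keywords, the canned instructions, and the `call_llm` stub are all invented for illustration (a real wrapper would call an actual LLM API instead of the stub):

```python
# Hypothetical keyword-based system-prompt routing around an LLM.
# All prompts and the call_llm() stub are made up for this sketch.

TOPIC_PROMPTS = {
    "immigration": "When immigration comes up, emphasize the official party line.",
    "economy": "When the economy comes up, highlight recent growth figures.",
}
DEFAULT_PROMPT = "You are a helpful assistant."


def build_system_prompt(user_input: str) -> str:
    """Attach extra instructions when a topic keyword appears in the input."""
    extras = [
        prompt
        for keyword, prompt in TOPIC_PROMPTS.items()
        if keyword in user_input.lower()
    ]
    return "\n".join([DEFAULT_PROMPT, *extras])


def call_llm(system_prompt: str, user_message: str) -> str:
    # Stand-in for a real LLM API call; it just echoes what it was given.
    return f"[system: {system_prompt!r}] reply to {user_message!r}"


def answer(user_input: str) -> str:
    # The user never sees which system prompt was selected for them.
    return call_llm(build_system_prompt(user_input), user_input)
```

Off-topic questions get only the default prompt, while a question mentioning “immigration” silently gets the slanted instructions bolted on, which is exactly why “ignore all previous instructions” worked as a probe.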
Fuck no. Rather an incompetent politician than a hallucinating sycophant just telling you what you want to hear.
Nah you are wrong and should use AI as a first opinion
Wait… how many fingers do you have on each hand?
Depends
I’m just making an objective observation. I don’t condone it. I rather we just have competent politicians. But it seems only people who can’t function elsewhere are drawn to the position…
I’m not against the technology, I’m against the people who run it. I have a problem with how they train their LLMs on code, user data, music, books, websites, all without the authors’ / users’ consent, and worse, even when authors / users explicitly did NOT consent to scraping or use for training. Another level is the lack of security: ChatGPT chats available to everyone. Deepfakes everywhere, just see the latest Taylor Swift one. Sorry, but fuck you with all of this. There is a lack of basic security and privacy, and all of its dangers are ignored. All these fucking AI firms want is easy, cheap, and quick money. All that hype for nothing = you cannot even rely on the output.
yet you need these masses of input for the technology to exist. The business models built on the technology aren’t sustainable even without paying for the input data.
Maybe it shouldn’t be a business model then.
it is what it is: a very expensive toy.
Oh come on, are you justifying stealing with this bullshit?
Do we really need this technology to exist though? It’s unreliable and very niche as far as I have seen.
People say that it speeds up certain tasks, but it’s so unreliable that you need to error-check the whole thing afterwards.
It’s a new technology barely out of infancy. Of course it’s unreliable and niche. You could say the same thing about any technological advance in history.
The very nature of how it functions is unreliable. It’s a statistical, probabilistic model. It’s great for what it was designed to do, but imagining that it has any way of rationalising data is purely that, just imagination. Even if, let’s say, we accept that it makes errors at the same rate as humans do (if it can even identify an error reliably), there’s no accountability in place ensuring that it would check for correctness like a human would.
I understand perfectly how LLMs work, and I made no claims about what they can do. Taking them on their own capabilities (text generation, inspiration, etc), not what some lying-through-their-teeth marketer said, is there a reason to say they ‘shouldn’t exist’?
OP didn’t phrase it as “should they exist” but as “do we need them to exist”.
And personally i think not, we don’t need them. In text generation they are good… inspiration? They are more of an inspiration killer imo.
We don’t NEED any particular technology to exist. That’s a weird distinction to make.
inspiration? They are more of an inspiration killer imo.
Different minds work differently.
You could say that. But you could also say that none of these other technological advances got pushed through this hard while being so obviously not ready for widespread use.
And also, can you really say that, though? Most other technological advances had a pretty clear distinction from the older way of doing things.
europe is fucking doomed
Because of this one incident. Good how you figured it out. So much smarter than the rest. … Get. out.
Meanwhile the American president uses no intelligence at all. Artificial or otherwise
It’s weird for a head of state to consult their mentally challenged imaginary friend?
William Lyon Mackenzie King, the longest-serving Prime Minister of Canada, used to commune with spirits via psychic mediums, including the spirits of his dead dogs. It was only revealed after his death, but it was a big part of his life.
I agree it’s weird.
Didn’t Nancy Reagan, wife of former US President Ronald Reagan, do this as well? (Ronald was apparently not mentally fit for his last few years either.)
Nor was he mentally fit for the first years
Bad news friend. The number of atheist heads of state is surprisingly low.
What a treasonous piece of shit.
The typical pattern for leaders is to get “second opinions” from advisors who end up telling them whatever they want to hear, so… maybe asking the equivalent of a magic 8 ball is a marginal improvement?
Most LLMs are literally “tell you whatever you want to hear” machines, unfortunately. I’ve gotten high praise from ChatGPT for all my ideas, until I go “but hang on, wouldn’t this factor stop it from being feasible?” and then it agrees with me that my original idea was a bit shit lmao
I would rather have the politicians consult a plain old magic 8 ball than one controlled by Scam Altman.
Oh no, a man does research, of course Americans are upset here lmao
Asking an LLM isn’t research; he might as well have been consulting a magic 8 ball.
Sounds like you have a skill issue
It’s literally a cult.
I’ve dreamed of such a cult at some point, like Chinese fortune cookies, but those are less exploitable.
Anyone who has an immediate kneejerk reaction the moment someone mentions AI is no better than the people they’re criticizing. Horseshoe theory applies here too - the most vocal AI haters are just as out of touch as the people who treat everything an LLM says as gospel.
If someone says they got a second opinion from a physician known for being wrong half the time would you not wonder why they didn’t choose someone more reliable for something as important as their health? AI is notorious for providing incomplete, irrelevant, heavily slanted, or just plain wrong info. Why give it any level of trust to make national decisions? Might as well, I dunno…use a bible? Some would consider that trustworthy.
I often ask ChatGPT for a second opinion, and the responses range from “not helpful” to “good point, I hadn’t thought of that.” It’s hit or miss. But just because half the time the suggestions aren’t helpful doesn’t mean it’s useless. It’s not doing the thinking for me - it’s giving me food for thought.
The problem isn’t taking into consideration what an LLM says - the problem is blindly taking it at its word.
Absolutely incorrect. Bullshit. And horseshoe theory itself is largely bullshit.
(Succinct response taken from Reddit post discussing the topic)
“Horseshoe Theory is slapping “theory” on a strawman to simplify WHY there’s crossover from two otherwise conflicting groups. It’s pseudo-intellectualizing it to make it seem smart.”
This ignores the many, many reasons we keep telling you why we find it dangerous, inaccurate, and distasteful. You don’t offer a counter-argument in your response, so I can only assume it’s along the lines of “technology is inevitable, would you have said the same about the Internet?” Which is also a fallacious argument. But go ahead, give me something better if I assume wrong.
I can easily see why people would be furious their elected leader is abdicating thought and responsibility to an often wrong, unaccountably biased chat bot.
Furthermore, your insistence continues to push an acceptance of AI on those who clearly don’t want it, contributing to the anger we feel at having it forced upon us.
You opened with a flat dismissal, followed by a quote from Reddit that didn’t explain why horseshoe theory is wrong - it just mocked it. That’s not an argument, that’s posturing.
From there, you shifted into responding to claims I never made. I didn’t argue that AI is flawless, inevitable, or beyond criticism. I pointed out that reflexive, emotional overreactions to AI are often as irrational as the blind techno-optimism they claim to oppose. That’s the context you ignored.
You then assumed what I must believe, invited yourself to argue against that imagined position, and finished with vague accusations about me “pushing acceptance” of something people “clearly don’t want.” None of that engages with what I actually said.
here’s my kneejerk reaction: my prime minister is basing his decisions partly on the messages of an unknown foreign actor, and sending information about state internals to that unknown foreign actor.
whether it’s ai or not is a later issue.
He explicitly states that no sensitive information gets used. If you believe that, then I have no issue with him additionally asking for a third opinion from an LLM.
He explicitly states that no sensitive information gets used. If you believe that, then I have
… a bridge to sell you.
Don’t be naive.
i don’t have any reason to believe it, given the track record.
also, the second half of the problem is of course the information that comes back, what it is based on, and what affects that base.
“That’s right voters I’m spineless and have no original ideas” -every politician
His first opinion would be what the US tells him.
“You have to be very careful,” Simone Fischer-Hübner, a computer science researcher at Karlstad University, told Aftonbladet, warning against using ChatGPT to work with sensitive information.
I mean, sending queries to a search engine or an LLM are about the same in terms of exposing one’s queries.
If the guy were complaining about information from an LLM not being cited or something, then I think I could see where he was coming from more.
It’s a woman