I find this very offensive, wait until my chatgpt hears about this! It will have a witty comeback for you just you watch!
Critical thinking skills are what hold me back from relying on ai
Microsoft said it so I guess it must be true then 🤷‍♂️
Good. Maybe the dumbest people will forget how to breathe, and global society can move forward.
Microsoft will just make a subscription AI for that, BaaS.
Which we will rebrand “Bullshit as a service”!
I thought that’s what it means?
No, he said Breath as a service, which is funny!
Oh you can guarantee they won’t forget how to vote 😃
Sounds a bit bogus to call this causation. Much more likely that people who are more gullible in general also believe whatever the AI says.
Seriously, ask AI about anything you have expert knowledge in. It’s laughable sometimes… However, you need to know the subject to know it’s wrong. At face value, if you have no expertise, it sounds entirely plausible, yet the details can be shockingly incorrect. Do not trust it implicitly about anything.
This isn’t a profound extrapolation. It’s akin to saying “Kids who cheat on the exam do worse in practical skills tests than those that read the material and did the homework.” Or “kids who watch TV lack the reading skills of kids who read books”.
Asking something else to do your mental labor for you means never developing your brain muscle to do the work on its own. By contrast, regularly exercising the brain muscle yields better long term mental fitness and intuitive skills.
This isn’t predicated on the gullibility of the practitioner. The lack of mental exercise produces gullibility.
It’s just not something particular to AI. If you use any kind of 3rd party analysis in lieu of personal interrogation, you’re going to suffer in your capacity for future inquiry.
All tools can be abused, tbh. Before ChatGPT was a thing, we called those programmers the StackOverflow kids: the “copy the first answer and hope for the best” memes.
After searching for a solution for a bit and not finding jack shit, asking an LLM about some specific API thing or a simple implementation example, so you can extrapolate it into your complex code and confirm what it does by reading the docs, both enriches the mind and teaches you new techniques for the future.
Good programmers do what I described (something like the sketch below); bad programmers copy and run without reading. It’s just like the SO kids.
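To make that concrete, here is a minimal hypothetical sketch in Python; the pathlib scenario and every name in it are my own illustration, not something from the thread. The LLM suggests Path.glob("*.txt"), you pull up the docs with inspect.getdoc, notice the pattern only matches the top-level directory, and switch to the recursive rglob before wiring it into your own code.

```python
import inspect
from pathlib import Path

# Hypothetical LLM suggestion: "use Path('.').glob('*.txt') to find all the text files."
# Before dropping that into a larger codebase, read the docs for the call it picked.
print(inspect.getdoc(Path.glob))

# glob('*.txt') only matches the top-level directory; the recursive options are
# glob('**/*.txt') or rglob('*.txt'), so use one of those if nesting matters.
def find_text_files(root: str) -> list[Path]:
    """Return every .txt file under root, including nested directories."""
    return sorted(Path(root).rglob("*.txt"))

if __name__ == "__main__":
    for path in find_text_files("."):
        print(path)
```

The particular API doesn’t matter; the habit of checking what the suggested call actually does before trusting it is the point.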
Quickly, ask AI how to improve or practice critical thinking skills!
Improving your critical thinking skills is a process that involves learning new techniques, practicing them regularly, and reflecting on your thought processes. Here’s a comprehensive approach:
1. Build a Foundation in Logic and Reasoning
• Study basic logic: Familiarize yourself with formal and informal logic (e.g., learning about common fallacies, syllogisms, and deductive vs. inductive reasoning). This forms the groundwork for assessing arguments objectively.
• Learn structured methods: Books and online courses on critical thinking (such as Lewis Vaughn’s texts) provide a systematic introduction to these concepts.
2. Practice Socratic Questioning
• Ask open-ended questions: Challenge assumptions by repeatedly asking “why” and “how” to uncover underlying beliefs and evidence.
• Reflect on responses: This method helps you clarify your own reasoning and discover alternative viewpoints.
3. Engage in Reflective Practice
• Keep a journal: Write about decisions, problems, or debates you’ve had. Reflect on what went well, where you might have been biased, and what could be improved.
• Use structured reflection models: Approaches like Gibbs’ reflective cycle guide you through describing an experience, analyzing it, and planning improvements.
4. Use Structured Frameworks
• Follow multi-step processes: For example, the Asana article “How to build your critical thinking skills in 7 steps” suggests: identify the problem, gather information, analyze data, consider alternatives, draw conclusions, communicate solutions, and then reflect on the process.
• Experiment with frameworks like Six Thinking Hats: This method helps you view issues from different angles (facts, emotions, positives, negatives, creativity, and process control) by “wearing” a different metaphorical hat for each perspective.
5. Read Widely and Critically
• Expose yourself to diverse perspectives: Reading quality journalism (e.g., The Economist, FT) or academic articles forces you to analyze arguments, recognize biases, and evaluate evidence.
• Practice lateral reading: Verify information by consulting multiple sources and questioning the credibility of each.
6. Participate in Discussions and Debates
• Engage with peers: Whether through formal debates, classroom discussions, or online forums, articulating your views and defending them against criticism deepens your reasoning.
• Embrace feedback: Learn to view criticism as an opportunity to refine your thought process rather than a personal attack.
7. Apply Critical Thinking to Real-World Problems
• Experiment in everyday scenarios: Use critical thinking when making decisions—such as planning your day, solving work problems, or evaluating news stories.
• Practice with “what-if” scenarios: This helps build your ability to foresee consequences and assess risks (as noted by Harvard Business’s discussion on avoiding the urgency trap).
8. Develop a Habit of Continuous Learning
• Set aside regular “mental workout†time: Like scheduled exercise, devote time to tackling complex questions without distractions.
• Reflect on your biases and update your beliefs: Over time, becoming aware of and adjusting for your cognitive biases will improve your judgment.
By integrating these strategies into your daily routine, you can gradually sharpen your critical thinking abilities. Remember, the key is consistency and the willingness to challenge your own assumptions continually.
Happy thinking!
ChatGPT et al.: “To improve your critical thinking skills you should rely completely on AI.”
That sounds right. Lemme ask Gemini and DeepSink just in case.
“Deepsink” lmao sounds like some sink cleaner brand
Just try using AI for a complicated mechanical repair. For instance, draining the radiator fluid in your specific model of car: chances are Google’s AI model will throw in steps that are either wrong or unnecessary. If you turn off your brain while using AI, you’re likely to make mistakes that will go unnoticed until the thing you did becomes business critical. AI should be a tool like a straight edge: it has its purpose, and it’s up to you, the operator, to make sure you got the edges squared (so to speak).
I think this is only an issue in the beginning; people will sooner or later realise that they can’t blindly trust an LLM’s output, and will learn how to write prompts to verify other prompts (or, better said, to prove that not enough relevant data was analysed and that the output is hallucination).
How many phone numbers do you know off of the top of your head?
In the 90s, my mother could rattle off 20 or more.
But they’re all in her phone now. Are luddites going to start abandoning phones because they’re losing the ability to remember phone numbers? No, of course not.
Either way, these fancy prediction engines have better critical thinking skills than most of the flesh and bone people I meet every day to begin with. The world might actually be smarter on average if they didn’t open their mouths.
Mostly just this one:
0118 999 881 999 119 725 3
But even back when we only had landlines, I could barely remember my own phone number. I don’t think it’s a good measure.
Memorization is not the same thing as critical thinking.
A well designed test will freely give you an equation sheet or even allow a cheat sheet.
> Memorization is not the same thing as critical thinking.
A library of internalized axioms is necessary for efficient critical thinking. You can’t just turn yourself into a Chinese Room of analysis.
> A well designed test will freely give you an equation sheet or even allow a cheat sheet.
Certain questions are phrased to force the reader to pluck out and categorize bits of information, to implement complex iterations of simple formulae, and to perform long-form calculations accurately without regard to the formulae themselves.
But for elementary skills, you’re often challenging the individual to retain basic facts and figures. Internalizing your multiplication tables can serve as a heuristic that’s quicker than doing simple sums in your head. Knowing the basic physics formulae - your F = ma, ρ = m/V, f = V/λ, etc. - can give you a broader understanding of the physical world (a quick worked example follows below).
If all you know how to do is search for answers to basic questions, you’re slowing down your ability to process new information and recognize patterns or predictive signals in a timely manner.
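As a tiny worked illustration of that kind of internalized formula (the numbers, 440 Hz concert-pitch A and roughly 343 m/s for sound in room-temperature air, are standard textbook values rather than anything from this thread), rearranging f = V/λ gives the wavelength directly:

```latex
% Back-of-the-envelope estimate using f = v/lambda, rearranged for the wavelength.
% 343 m/s (speed of sound near 20 C) and 440 Hz (concert-pitch A) are assumed textbook values.
\[
  \lambda = \frac{v}{f} = \frac{343\ \text{m/s}}{440\ \text{Hz}} \approx 0.78\ \text{m}
\]
```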
I agree with all of this. My comment is meant to refute the implication that not needing to memorize phone numbers is somehow analogous to critical thinking. And yes, internalized axioms are necessary, but largely the core element is memorizing how these axioms are used, not necessarily their rote text.
You’re right it’s not the same thing as critical thinking, but it is a skill we’ve lost. How many skills have we lost throughout history due to machines and manufacturing?
This is the same tale over and over again - these people weren’t using critical thinking to begin with if they were trusting a prediction engine with their tasks.
I think “deliberately suppressed” is different than lost.
When was the last time you did math without a calculator?
Calculators also don’t think critically.
Something something… Only phone number I remember is your mother’s phone number (Implying that is for when I’m calling her to arrange a session of sexual intercourse, that she willingly and enthusiastically participates in).
Tinfoil hat me goes straight to: make the population dumber and they’re easier to manipulate.
It’s insane how people take LLM output as gospel. It’s a TOOL just like every other piece of technology.
I mostly use it for wordy things like filling out the review forms HR makes us do and writing templates for messages to customers.
Exactly. It’s great for that, as long as you know what you want it to say and can verify it.
The issue is people who don’t critically think about the data they get from it, who I assume are the same type to forward Facebook memes as fact.
It’s a larger problem, where convenience takes priority over actually learning and understanding something yourself.
As you mentioned tho, not really specific to LLMs at all
Yeah it’s just escalating the issue due to its universal availability. It’s being used in lieu of Google by many people, who blindly trust whatever it spits out.
If it had a high technological floor of entry, it wouldn’t be as influential to the general public as it is.
It’s such a double-edged sword though. Google is a good example: I became a netizen at a very young age and learned how to properly search for information over time.
Unfortunately the vast majority of the population over the last two decades have not put in that effort, and it shows lol.
Fundamentally, I do not believe in arbitrarily deciding who can and can not have access to information though.
I completely agree - I personally love that there’s so many Open Source AI tools out there.
The scary part is (similar to what we experienced with DeepSeek’s web interface) that it’s extremely easy for these corporations to manipulate or censor information.
I should have clarified my concern - I believe we need to revisit critical thinking as a society (whole other topic) and especially so when it comes to tools like this.
Ensuring everyone using it is aware of what it does, its flaws, how to process its output, and its potential for abuse. Similar to the internet safety training for kids in the mid-2000s.
Well thank goodness that Microsoft isn’t pushing AI on us as hard as it can, via every channel that it can.
Learning how to evade and disable AI is becoming a critical thinking skill unto itself. Feels a bit like how I’ve had to learn to navigate around advertisements and other intrusive 3rd party interruptions while using online services.
Well, at least they communicate such findings openly and don’t try to hide them. Unlike ExxonMobil, which saw global warming coming thanks to internal studies since the 1970s and tried to hide or dispute it because it was bad for business.
Garbage in, garbage out. Ingesting all that internet blather didn’t make the AI much smarter, if at all.
Duh?
Buh?
Is that it?
One of the things I like most about AI is that it explains in detail each command it outputs for you. Granted, I’m aware it can hallucinate, so if I have the slightest doubt about it I usually look on the web too (I use it a lot for basic Linux stuff and Docker).
Some people won’t give a fuck about what it says and will just copy & paste unknowingly? Sure, that happened in my teenage days too, when all the info was spread across many blogs and wikis…
As usual, it is not the AI tool that could fuck up our critical thinking, but we ourselves.
I see it exactly the same way; I bet you can find similar articles about calculators, PCs, the internet, smartphones, smartwatches, etc.
Society will handle it sooner or later.
I love how they chose the term “hallucinate” instead of saying it fails or screws up.
Because the term fits way better…
A hallucination is a false perception of sensory experiences (sights, sounds, etc).
LLMs don’t have any senses; they have input, algorithms, and output. They also have desired output and undesired output.
So, no, ‘hallucination’ fits far worse than failure or error or bad output. However, assigning the term ‘hallucination’ does serve the billionaires in marketing their LLMs as actually sentient.
The definition of critical thinking is not relying on only one source. Next: rain will make you wet. Stay tuned.
Also your ability to search for information on the web. Most people I’ve seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck up that ability completely.
To be fair, the web has become flooded with AI slop. Search engines have never been more useless. I’ve started using Kagi and I’m trying to be more intentional about it, but after a bit of searching it’s often easier to just ask Claude.
Gen Zs are TERRIBLE at searching things online, in my experience. I’m a sweet-spot millennial, born close to the middle in 1987. Man oh man, watching the 22-year-olds who work for me try to Google things hurts my brain.