LLMs are acing the MCAT, the bar exam, the SAT, etc. like they're nothing. At this point their performance is superhuman. However, they'll often trip on super simple common-sense questions, and they'll struggle with creative thinking.
Is this literally proof that standard tests are not a good measure of intelligence?
Talked about this a few times over the last few weeks but here we go again…
I am teaching myself to write and had been using ChatGPT for super basic grammar assistance. Seemed like an ideal use case: toss a sentence I was iffy about into it and ask what it thought. After all, I wasn't going to be asking it some college-level shit. A few days ago I asked it about something I was unsure of. I honestly can't remember the details, but it completely ignored the part of the sentence I was questioning and told me something else was wrong. What it said was wrong was just... not wrong. The 'correction' it gave me was some shit a third grader would look at and say, 'uhhhhh... I'm gonna ask someone else now...'
That's because LLMs aren't intelligent. They're just parrots that repeat what they've heard before. Selling this stuff as "AI" with any "intelligence" is extremely misleading and leads people to think it's going to be able to do things it can't.
Case in point: you were using it and trusting it until it became very obviously wrong. How many people never get to that point? How much did it get wrong before then? Etc.