LLMs are acing the MCAT, the bar exam, the SAT, etc. like they're nothing. At this point their performance is superhuman. However, they'll often trip over super simple common-sense questions and struggle with creative thinking.

Is this proof that standardized tests are not a good measure of intelligence?

  • cynar@lemmy.world · 8 months ago

    The key difference is that your thinking feeds into your word choice. You also know when to stop talking and allow your brain to actually process.

    LLMs are (very crudely) a lobotomised speech center. They can chatter and use words, but there is no support structure behind them. The only "knowledge" they have access to is embedded in their training data. Once training is done, they have no ability to "think" about it further. It's a practical example of a "Chinese Room", and many of the same philosophical arguments apply.

    I fully agree that this is an important step toward a true AI. It's just a fragment, however, just like 4 wheels and 2 axles don't make a car.