Archived link: https://archive.ph/Vjl1M

Here’s a nice little distraction from your workday: Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.

This is genuinely fun, and you can find lots of examples on social media. In the world of AI Overviews, “a loose dog won’t surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom that means “someone’s behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer’s function is determined by its physical connections.”

It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with a biblical derivation, it’s also a tidy encapsulation of where generative AI still falls short.

    • 5too@lemmy.world · 14 days ago

      That is a fascinating take on the general reaction to LLMs. Thanks for posting this!

  • atrielienz@lemmy.world · 14 days ago

    I for one will not be putting any gibberish into Google’s AI for any reason. I don’t find it fun. I find it annoying, and I’ve deliberately taken steps to avoid it completely. I don’t understand these articles that want to throw shade at LLMs by telling their readers to go use them, which only helps the companies that own the LLMs.

    Like. Yes. We have established that LLMs will give misinformation and create slop because all their data sets are tainted. Do we need to keep furthering this nonsense?

  • ParadoxSeahorse@lemmy.world · 14 days ago

    The saying “you can’t cross over a duck’s river” is a play on words, suggesting that it’s difficult to cross a river that is already filled with ducks. It’s not a literal statement about rivers and ducks, but rather an idiom or idiom-like phrase used to express the idea that something is difficult or impossible to achieve due to the presence of obstacles or challenges.

    I used the word “origin” instead of “meaning”, which didn’t seem to work.

  • webadict@lemmy.world · 14 days ago

    The saying “you can’t butter a fly” is an idiom expressing that someone or something is too difficult to influence or manipulate. It’s rooted in the idea that butterflies, with their delicate nature, are virtually impossible to convince to do anything against their will, let alone “butter” them in a literal sense.

    • futatorius@lemm.ee · 11 days ago

      That’s only a partial saying. The whole is “you can’t butter a fly, but you can make a bi flutter.”

    • Deebster@infosec.pub (OP) · 14 days ago

      This is a great example: it kinda makes sense if you skim-read it, but butterflies have nothing to do with butter, just like hotdogs have nothing to do with dogs.

      • futatorius@lemm.ee · 11 days ago

        The Germans have it right, their word for butterfly means something like “little crusher.”

    • surewhynotlem@lemmy.world · 14 days ago

      No, that phrase means “this situation is hopeless because the person is incapable of change”. You can’t turn a fly into a butterfly, no matter how hard you try.

      • futatorius@lemm.ee · 11 days ago (edited)

        An actual idiom where I live is to say “I feel rougher than a badger’s arse” when hung over.

        That might be a motivation not to lick one a second time.

    • tal@lemmy.today · 14 days ago

      http://www.newforestexplorersguide.co.uk/wildlife/mammals/badgers/grooming.html

      Mutual grooming between a mixture of adults and cubs serves the same function, but additionally is surely a sign of affection that strengthens the bond between the animals.

      A variety of grooming postures are adopted by badgers but to onlookers, the one that is most likely to raise a smile involves the badger sitting or lying back on its haunches and, with seemingly not a care in the world (and with all hints of modesty forgotten), enjoying prolonged scratches and nibbles at its under-parts and nether regions.

      That being said, that’s the European badger. Apparently the American badger isn’t very social:

      https://a-z-animals.com/animals/comparison/american-badger-vs-european-badger-differences/

      American badger: Nocturnal unless in remote areas; powerful digger and generally more solitary than other species. Frequently hunts with coyotes.

      European badger: Digs complicated dens and burrows with their familial group; one of the most social badger species. Depending on location, hibernation may occur.

  • GooberEar@lemmy.wtf · 14 days ago (edited)

    I live in a part of the USA where, decades later, I still hear brand-new and seemingly made-up idioms on a fairly regular basis. This skill set, making sense of otherwise fake-sounding idioms from limited context, is practically a necessity 'round these parts. After all, you can’t feed a cow a carrot and expect it to shit you out a cake.

  • Grandwolf319@sh.itjust.works · 14 days ago

    The saying “better a donkey than an ass” plays on the dual meaning of the word “ass.” It suggests that being called a donkey is less offensive than being called an ass, which can be used as an insult meaning stupid or foolish. The phrase highlights the contrast between the animal donkey, often seen as a hardworking and steady companion, and the derogatory use of “ass” in everyday language.

    Yep, it does work

  • MyTurtleSwimsUpsideDown@fedia.io · 14 days ago

    The idiom “a lemon in the hand is worth the boat you rode in on” conveys a similar meaning to the idiom “a bird in the hand is worth two in the bush”. It emphasizes that it’s better to appreciate what you have and avoid unnecessary risks or changes, as a loss of a current advantage may not be compensated by a potential future gain. The “lemon” represents something undesirable or less valuable, but the “boat” represents something that could potentially be better but is not guaranteed.

  • NOT_RICK@lemmy.world · 14 days ago

    I just tested it on Bing too, for shits and giggles

    > you can’t butter the whole world’s bread meaning

    > The phrase “you can’t butter the whole world’s bread” means that one cannot have everything

  • Ganbat@lemmy.dbzer0.com · 14 days ago

    Tried it. Afraid this didn’t happen, and the AI was very clear the phrase is unknown. Maybe I did it wrong or something?

  • Nurse_Robot@lemmy.world · 14 days ago

    Didn’t work for me. A lot of these ‘gotcha’ AI moments seem to only work for a small percentage of users, before being noticed and fixed. Not including the more frequent examples that are just outright lies, but get upvoted anyway because ‘AI bad’

    • Deebster@infosec.pub (OP) · 14 days ago

      I found that trying “some-nonsense-phrase meaning” won’t always trigger the idiom interpretation, but you can often change it to something more saying-like.

      I also found that trying in incognito mode gave better results, so perhaps it’s also affected by your settings. Maybe it’s regional as well, or based on your search history. And, since AI is non-deterministic, you can’t expect it to work every time.

    • Ironfacebuster@lemmy.world · 14 days ago

      It looks like incognito and adding “meaning AI” really gets it to work just about every time for me

      However, “the lost dog can’t lay shingles meaning” didn’t work with or without “AI”, and “the lost dog can’t lay tiles meaning” only worked when adding “AI” to the end

      So I guess it’s a gamble on how much gibberish you can get away with

    • GraniteM@lemmy.world · 14 days ago

      Now I’ll never know what people mean when they say “those cupcakes won’t fill a sauna”!

  • masterspace@lemmy.ca · 14 days ago (edited)

    Try this on your friends: make up an idiom, walk up to them, say it without context, then ask “meaning?” and see how they respond.

    Pretty sure most of mine will just make up a bullshit response and go along with what I’m saying unless I give them more context.

    There are genuinely interesting limitations to LLMs and the newer reasoning models, and I find it interesting to see what we can learn from them, but this is just ham-fisted robo-gotcha journalism.

    • Deebster@infosec.pub (OP) · 14 days ago

      My friends would probably say something like “I’ve never heard that one, but I guess it means something like …”

      The problem is, these LLMs don’t give any indication when they’re making stuff up versus when repeating an incontrovertible truth. Lots of people don’t understand the limitations of things like Google’s AI summary* so they will trust these false answers. Harmless here, but often not.

      * I’m not counting the little disclaimer, because we’ve been taught to ignore small print from being faced with so much of it

      • masterspace@lemmy.ca · 14 days ago (edited)

        > My friends would probably say something like “I’ve never heard that one, but I guess it means something like …”

        Ok, but the point is that lots of people would just say something and then figure out if it’s right later.

        > The problem is, these LLMs don’t give any indication when they’re making stuff up versus when repeating an incontrovertible truth. Lots of people don’t understand the limitations of things like Google’s AI summary* so they will trust these false answers. Harmless here, but often not.

        Quite frankly, you sound like middle school teachers being hysterical about Wikipedia being wrong sometimes.

        • Deebster@infosec.pub (OP) · 14 days ago

          LLMs are already being used for policy making, business decisions, software creation and the like. The issue is bigger than summarisers, and “hallucinations” are a real problem when they lead to real decisions and real consequences.

          If you can’t imagine why this is bad, maybe read some Kafka or watch some Black Mirror.

          • masterspace@lemmy.ca · 14 days ago (edited)

            > If you can’t imagine why this is bad, maybe read some Kafka or watch some Black Mirror.

            Lmfao. Yeah, ok, let’s get my predictions from the depressing show dedicated to being relentlessly pessimistic at every single decision point.

            And yeah, like I said, you sound like my hysterical middle school teacher claiming that Wikipedia will be society’s downfall.

            Guess what? It wasn’t. People learn that tools are error prone and came up with strategies to use them while correcting for potential errors.

            Like, at a fundamental, technical level, components of a system can be error-prone but still be useful overall. Quantum calculations have inherent probabilities and errors in them, but they can still solve some types of calculations so much faster than normal computers that you can run the same calculation 100 times on a quantum computer, average out the results to remove the outlying errors, and still reach the right answer far faster than a classical computer would.

            Computer chips in satellites and on the space station constantly have random bits of memory flipped by cosmic rays, but they still work fine because their RAM is special error-correcting RAM that can use similar methods to verify and check for errors.

            Designing for error correction is a thing, and people are perfectly capable of doing so in their personal lives.
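
            The error-correction point above can be sketched with a toy Hamming(7,4) code, the classic single-error-correcting scheme that hardware ECC memory builds on (a minimal Python illustration of the idea, not how real ECC RAM is implemented):

```python
# Toy Hamming(7,4) code: 4 data bits protected by 3 parity bits,
# able to locate and fix any single flipped bit.

def encode(data):
    """Encode 4 data bits into a 7-bit codeword.
    Positions are 1..7; parity bits sit at positions 1, 2, and 4."""
    c = [0] * 8  # index 0 unused so indices match positions
    c[3], c[5], c[6], c[7] = data
    c[1] = c[3] ^ c[5] ^ c[7]  # covers positions whose bit 1 is set
    c[2] = c[3] ^ c[6] ^ c[7]  # covers positions whose bit 2 is set
    c[4] = c[5] ^ c[6] ^ c[7]  # covers positions whose bit 4 is set
    return c[1:]

def decode(codeword):
    """Recompute the parity checks; the failing ones spell out the
    position of a corrupted bit, which is then flipped back."""
    c = [0] + list(codeword)
    syndrome = 0
    for p in (1, 2, 4):
        parity = 0
        for i in range(1, 8):
            if i & p:
                parity ^= c[i]
        if parity:
            syndrome += p
    if syndrome:
        c[syndrome] ^= 1  # correct the single-bit error
    return [c[3], c[5], c[6], c[7]]

data = [1, 0, 1, 1]
sent = encode(data)
sent[5] ^= 1                 # a "cosmic ray" flips position 6
assert decode(sent) == data  # the data survives the bit flip
```

Encode, corrupt one bit, decode: the parity syndrome points straight at the flipped position, so the original data survives a single random bit flip, which is the same design principle at work in ECC memory.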

          • desktop_user@lemmy.blahaj.zone · 14 days ago

            And this is why humans are the problem: a tool is neither good nor bad. Sure, a tool can consume a huge amount of resources to develop only to be completely obsolete in a year, but only humans (so far) have the ability (and stupidity) to be in charge of millions of lives while trusting a bunch of lithographed rocks to create tariff rates for uninhabited islands (and the rest of the world).

    • TimewornTraveler@lemm.ee · 14 days ago

      It highlights the fact that these LLMs refuse to say “I don’t know”, which essentially means we cannot rely on them for any factual reporting.

      • masterspace@lemmy.ca · 14 days ago

        But a) they don’t refuse; most will tell you if you prompt them well, and b) you can’t rely on them as the sole source of truth, but an information machine can still be useful if it’s right most of the time.