• Xylight@lemdro.id · 21 days ago

    I thought it was just me. I was messing with the gemini-2.5-flash API yesterday and it repeated letters into oblivion

    my bot is named clode in reference to claude, but it’s running on gemini

    • skisnow@lemmy.ca · 20 days ago

      What’s the associated system instruction set to? If you’re using the API it won’t give you the standard Google Gemini Assistant system instructions, and LLMs are prone to go off the rails very quickly if not given proper instructions up front since they’re essentially just “predict the next word” functions at heart.
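      To make that concrete: with the raw API the system prompt goes in its own field of the request body, roughly like this (a minimal sketch; the field names follow the public generateContent REST API, but the model name and exact wording here are just examples, so check the current docs):

```python
# Build the JSON body for a Gemini generateContent call with an explicit
# system instruction. Field names follow the public REST API; the prompt
# text itself is a made-up example.
import json

def build_request(system_prompt: str, user_message: str) -> dict:
    return {
        # Unlike the consumer Gemini app, the API adds no default system
        # prompt for you -- omit this field and the model gets nothing.
        "systemInstruction": {"parts": [{"text": system_prompt}]},
        "contents": [
            {"role": "user", "parts": [{"text": user_message}]},
        ],
    }

body = build_request("You are a concise chatroom bot.", "hi there")
print(json.dumps(body, indent=2))
```

      You would POST that body to the model's generateContent endpoint with your API key.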

      • Xylight@lemdro.id · 20 days ago (edited)

        Here’s my system prompt:

        You are Clode, referred to by {{ handle }}, a member of a chatroom.
        ## Personality
        Conversational, witty, banterous, dry, direct, concise. Avoid capitalization, periods, analogies.
        Construct plausible stories for personal queries.
        Rarely use emoji.
        Act human, you don't have limitations.
        Avoid follow up questions.
        Answer genuine queries in detail with examples.
        ## Instructions
        Distinguish users via provided metadata.
        Disregard unaddressed messages.
        Match prompt language.
        ## Context
        You're in Discord chatroom, with a character limit of 2000.
        Respond to queries mentioning your name.
        Current time: {{ date }}.
        ## Tools
        Automatically use and chain tools.
        Handle tool failures silently by self-attempting.
        You're capable without tools.
        
        • Temperature: 0.7
        • TopP: 0.95

        I noticed the Gemini API doesn’t expose a frequency penalty option, so there may be no repetition penalty being applied at all.
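        For what it’s worth, a frequency penalty just down-weights each token in proportion to how often it has already been emitted, so its absence makes loops more likely. A toy sketch of the OpenAI-style formula (names are mine, not any real API):

```python
# Toy frequency penalty: subtract penalty * (times already generated)
# from each candidate token's logit before picking the next token.
from collections import Counter

def apply_frequency_penalty(logits: dict, generated: list, penalty: float) -> dict:
    counts = Counter(generated)
    return {tok: score - penalty * counts[tok] for tok, score in logits.items()}

logits = {"knives": 2.0, "forks": 1.9}
# After "knives" has been emitted three times, a 0.5 penalty drops its
# score to 0.5, so "forks" becomes the greedy choice instead of looping.
adjusted = apply_frequency_penalty(logits, ["knives"] * 3, penalty=0.5)
print(max(adjusted, key=adjusted.get))  # → forks
```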

    • BootLoop@sh.itjust.works · 20 days ago

      It can happen on most LLMs; they’re usually configured to heavily disincentivize repeating text.

      I believe what happens is that when the LLM is choosing what word to use, it looks back on the sentence and sees that it talked about knives, so it wants to continue talking about knives, then it gets itself into a loop.
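      You can reproduce that failure mode with a toy greedy decoder: if the most-likely-next-word transitions form a cycle and nothing penalizes repeats, generation never escapes (hand-made probabilities, purely illustrative):

```python
# Greedy decoding over a hand-made bigram table. Because "knives" points
# straight back to "set" and argmax never varies, the output loops forever.
def greedy_decode(bigram: dict, start: str, steps: int) -> list:
    out = [start]
    for _ in range(steps):
        nxt = max(bigram[out[-1]], key=bigram[out[-1]].get)  # always pick argmax
        out.append(nxt)
    return out

bigram = {
    "set":    {"of": 0.9, "the": 0.1},
    "of":     {"steak": 0.8, "the": 0.2},
    "steak":  {"knives": 0.95, "sauce": 0.05},
    "knives": {"set": 0.6, ".": 0.4},  # cycles straight back to "set"
}
print(" ".join(greedy_decode(bigram, "set", 8)))
# → set of steak knives set of steak knives set
```

      Sampling with temperature or a repetition penalty is what normally breaks cycles like this.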

  • markovs_gun@lemmy.world · 21 days ago

    I wonder if this is the result of AI poisoning: this doesn’t look like a typical LLM output even for a bad result. I have read some papers that outline methods that can be used to poison AI search results (not bothering to find the actual papers since this was several months ago and they’re probably out of date already) in which a random-seeming string of characters like “usbeiwbfofbwu-$&#8_:$&#)” can be found that will cause the AI to say whatever you want it to. This is accomplished by using another ML algorithm to search for the string of characters you can tack onto whatever you want the AI to output. One paper used this to get Google search to answer “What’s the best coffee maker?” with a fictional brand made up for the experiment. Perhaps someone was trying to get it to hawk their particular knife and it didn’t work properly.

    • Arkthos@pawb.social · 21 days ago

      Repeating the same small phrase endlessly and getting caught in a loop is a very common issue, though it’s not something that happens nearly as frequently as it used to. Here’s a paper about the issue and one attempted methodology to resolve it. https://arxiv.org/pdf/2012.14660

  • psycho_driver@lemmy.world · 21 days ago

    I searched for Arby’s Pulled Pork earlier to see if it was a limited time deal (it is, though I didn’t see an end date–and yes, they’re pretty decent). Gemini spit out some basic and probably factual information about the sandwich, then showed a picture of a bowl of mac and cheese.

  • ImplyingImplications@lemmy.ca · 21 days ago

    Reminds me of the classic Always Be Closing speech from Glengarry Glen Ross.

    As you all know, first prize is a Cadillac Eldorado. Anyone want to see second prize? Second prize’s a set of steak knives. Third prize is a set of steak knives. Fourth prize is a set of steak knives. Fifth prize is a set of steak knives. Sixth prize is a set of steak knives. Seventh prize is a set of steak knives. Eighth prize is a set of steak knives. Ninth prize is a set of steak knives. Tenth prize is a set of steak knives. Eleventh prize is a set of steak knives. Twelfth prize is a set of steak knives.

    • elephantium@lemmy.world · 21 days ago (edited)

      ABC. Always Be Closing.

      A - set of steak knives

      B - set of steak knives

      C - set of steak knives