I typed all this up for someone who posted a… very strangely written question about something they noticed with AI, but their post appears to have been deleted/removed… and, well, I wanna know if I managed to rephrase their question in a less… difficult-to-understand format. And then the answer to said question, because I find it interesting as well.

What I typed in response:

After parsing the insanity that is your writing style and… English as a second language? Allow me to confirm and summarize, because I find this question fascinating.

You’ve come across an LLM trend where said LLM is given instructions to describe/pretend to be a human named Delilah. LLMs have gone viral at times for being instructed to formulate their output to sound like famous people with what appears to be reasonable accuracy. But what goes into that ability is previously written human text associated with that person (or rather, their full name/titles/etc.), as well as purposeful restrictions given to the LLM directly (like: don’t output the N word).
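(As a rough sketch of what that instruction layer looks like in practice: the chat-message format below just mirrors common chat-style LLM APIs and isn't any specific vendor's, and the persona name and rules are made up for illustration.)

```python
# Illustrative sketch: how a persona prompt is commonly assembled for a
# chat-style LLM. The "Delilah" voice itself comes from training-data
# associations with that name; the explicit rules below are the separate,
# hand-written restriction layer mentioned above.

def build_persona_prompt(persona_name, restrictions, user_message):
    """Assemble chat messages: a system prompt naming the persona plus
    explicit output restrictions, followed by the user's message."""
    rules = "\n".join(f"- {r}" for r in restrictions)
    system = (
        f"You are {persona_name}. Respond in {persona_name}'s voice.\n"
        f"Restrictions:\n{rules}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

messages = build_persona_prompt(
    "Delilah",
    ["No slurs or hateful language.", "Stay in character."],
    "Tell me about your day.",
)
```

Everything the model "knows" about how Delilah should sound has to come from its training data; the prompt only names the persona and bolts on restrictions.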

Another lesser/totally unquantifiable factor in the output’s “tone” is the result of errors in the black-box algorithm that associates the “words” (not truly words, I know, but essentially) in ways you wouldn’t expect.
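(A toy illustration of what I mean by unexpected associations: models represent tokens as vectors, and names that showed up in similar contexts during training end up close together. Every number below is made up purely to show the mechanism, not taken from any real model.)

```python
import math

# Toy demo of "associated token relationships": embeddings are vectors,
# and similarity is often measured with cosine similarity. All vectors
# here are invented 3-dimensional stand-ins; real models use hundreds
# or thousands of dimensions.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

toy_embeddings = {
    "delilah": [0.9, 0.1, 0.4],  # hypothetical vector
    "fursona": [0.8, 0.2, 0.5],  # hypothetical vector
    "samson":  [0.1, 0.9, 0.2],  # hypothetical vector
}

# Rank the other names by closeness to "delilah".
query = toy_embeddings["delilah"]
ranked = sorted(
    (name for name in toy_embeddings if name != "delilah"),
    key=lambda name: cosine(query, toy_embeddings[name]),
    reverse=True,
)
```

With these made-up numbers, “fursona” ranks closer to “delilah” than “samson” does, which is exactly the kind of unexplained proximity that can color a model’s output without any obvious cultural reason.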

(Here’s where my slight confusion mostly is) Each of these “factors” associated with the tone of the output… you’ve given names to? Or maybe my entirely self-researched knowledge has missed an agreed-upon naming system for these “characters”? I’m not quite sure.

And now your question and qualifiers: Is there a pop culture/historic person or character named Delilah who is associated with furry stuff? Because you have been looking at some of the interesting mistaken/inaccurate tones adopted by an LLM, and you’ve noticed that asking the LLM to output as if it were Delilah produces furry-related results. Typically this sort of issue is mostly due to overlapping/similar names in the model’s training (as well as much stranger links without any explanation as to how they formed), and your research on “Delilah” hasn’t turned up anything that would explain the LLM’s furry-related output.

… is that more or less what you are saying?

  • j4k3@lemmy.world · 1 month ago

    I have nothing to say. I can’t read past the insults to start. I think I’m in the wrong place here ATM. This has been a depressing experience I do not wish to continue.

    • kora@lemmy.blahaj.zone (OP) · 1 month ago (edited)

      That’s unfortunate. I’m sorry that has happened to you, and I totally understand you no longer wanting to continue the discussion.

      Would you like me to take down my post? I am interested myself, but lack the context/cause to adequately ask the question, and will totally be OK taking it down if you’d like.

      Edit: and in case I’ve misunderstood, and it’s my post’s content you’ve found insulting… all I can say is that it wasn’t my intent, and I’m sorry if that is the case. I did find your post difficult, but I believe anything I said that might be taken as negative was in reference to the post itself, and not you in any way.

    • Rayquetzalcoatl@lemmy.world · 1 month ago

      I don’t think anybody insulted you mate - you wrote something that was quite hard to understand, and we were curious about what you meant. OP has clearly put quite a lot of effort into rephrasing and trying to understand your post - that’s as far from an insult as I can imagine.

      • kora@lemmy.blahaj.zone (OP) · 1 month ago

        Thanks for recognizing the effort.

        But in the brief look I gave the replies, they did appear rather unkind, and certainly missing any reply of use. I wouldn’t be surprised if it only got worse in the time it took me to make my comment (I’m a slow typist), and I can’t say I blame OP for being upset.

  • hendrik@palaver.p3x.de · 1 month ago (edited)

    If you post strangely written questions on social media, you probably also type strangely written text into AI. And in turn the AI will be confused and generate some random text. For example about furries or some other random topic. If it’s an AI service made for erotic roleplay, that’s more likely than if you tried the same thing with ChatGPT.

    You should ask this question in one of the AI communities, though, and not on No Stupid Questions.

    And it’s better not to use derogatory language.

    • kora@lemmy.blahaj.zone (OP) · 1 month ago

      The user gave no reason to assume any of that, nor did my description of the post, and they may find the suggestion upsetting. Not going to go all PC 5-0 on you, but I did want to distance myself from said assumption.

      • hendrik@palaver.p3x.de · 1 month ago (edited)

        Sorry, I’m also not a native speaker. I don’t know what PC 5-0 means (political correctness police??). But if we want to know what happened, we need to know the circumstances. It makes a big difference which exact LLM model got used. We need to know the exact prompt and text that went in. Then we can start discussing why something happened. I’d say there’s a good chance the LLM has been made to output stories like that, as is the case with LLM models made for ERP. That’s why I said that.

        • kora@lemmy.blahaj.zone (OP) · 1 month ago

          Oh, and PC 5-0

          PC - politically correct (a very… wide term)

          5-0 is a colloquial term meaning Police.

          Idk how much English-language internet you consume, but straight up saying “PC Police”… is just something I’d prefer not to keep using.

          • hendrik@palaver.p3x.de · 1 month ago

            Alright. Thx for the explanation. Yeah, I don’t have a filter. I just say whatever I think. Don’t really care if it’s offensive, just if things are true or not. Which is hard to tell in this case, since we don’t have enough information at hand. And LLMs are complex. Could be a fluke. Or whatever.

        • kora@lemmy.blahaj.zone (OP) · 1 month ago

          Oh, hmmm, that’s a rather interesting route I didn’t think to go down. Most of my interest in and consumed content on AI has been through videos/explanations by people much smarter than I am, and not really through use of any LLMs in any sort of manner, except a few exchanges with a few of OpenAI’s models over the last few years. I didn’t even consider that those sorts of things were a common thing.

          My limited LLM knowledge does lead me to believe that both interpretations of the question would more or less boil down to the same thing, though. A little search engine hunting of my own has also come up empty, and I’m curious if this is one of those super interesting and crazy associated token relationships, or if there is just a crapload of content I can’t find.

          • hendrik@palaver.p3x.de · 1 month ago (edited)

            I don’t think it’s necessary to distance oneself from doing said roleplay. I bet society is looking down on individuals doing it. But I think it’s perfectly fine. As long as it stays somewhat healthy and no one gets harmed.

            There is a considerable group of people who do roleplay with AI, or have “virtual girlfriends” or companions. It all started with Replika AI. Nowadays there are other services for that, and these LLMs are made to be lewd and suggestive, including all kinds of niche interests. You’ll find several articles about it if you google virtual girlfriends or AI companions. It’s mostly discussed in some niche areas of the internet, since there is a stigma to it.

            • kora@lemmy.blahaj.zone (OP) · 1 month ago

              Oh, I’d have no shame using that kind of thing, FFS I think having a fursona seems fun and liberating if not for the horrible amount of sweat that has gotta be involved.

              I was just trying to say that I made my attempt at rephrasing without knowing those were really a thing, and that additional possibility/context might have changed how I read what I remember of the post.

              I refuse to ick anyone’s consensual yum, even the really far out there stuff that isn’t for me, and I hate when others do. Being a trans woman, I’m no stranger to being reduced to a fetish to be icked.

              Fuck that shit, and do it in a furry suit if you want lol