• N0body@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    0
    ·
    9 days ago

    people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationship with AI

    Preying on the vulnerable is a feature, not a bug.

    • Vespair@lemm.ee
      link
      fedilink
      English
      arrow-up
      0
      ·
      7 days ago

And it’s beyond obvious in the way LLMs are conditioned, especially if you’ve used them long enough to notice trends. Where early on their responses were straight to the point (inaccurate as hell, yes, but that’s not what we’re talking about in this case), today they are meandering and full of straight engagement bait - programmed to feign some level of curiosity and ask stupid and needless follow-up questions to “keep the conversation going.” I suspect this is just a way to increase token usage to further exploit and drain the whales who tend to pay for these kinds of services, personally.

      There is no shortage of ethical quandaries brought into the world with the rise of LLMs, but in my opinion the locked-down nature of these systems is one of the most problematic; if LLMs are going to be the commonality it seems the tech sector is insistent on making happen, then we really need to push back on these companies being able to control and guide them in their own monetary interests.

    • Tylerdurdon@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      9 days ago

I kind of see it more as a sign of utter desperation on the human’s part. They lack connection with others to such a degree that anything similar can serve as a replacement. Kind of reminiscent of Harlow’s experiment with baby monkeys. The videos are interesting from that study but make me feel pretty bad about what we do to nature. Anywho, there you have it.

      • graphene@lemm.ee
        link
        fedilink
        English
        arrow-up
        0
        ·
        9 days ago

        And the amount of connections and friends the average person has has been in free fall for decades…

        • trotfox@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          8 days ago

          I dunno. I connected with more people on reddit and Twitter than irl tbh.

          Different connection but real and valid nonetheless.

          I’m thinking places like r/stopdrinking, petioles, bipolar, shits been therapy for me tbh.

      • Paragone@piefed.social
        link
        fedilink
        English
        arrow-up
        0
        ·
        8 days ago

That utter-desperation is engineered into our civilization.

        What happens when you prevent the “inferiors” from having living-wage, while you pour wallowing-wealth on the executives?

        They have to overwork, to make ends meet, is what, which breaks parenting.

Then, when you’ve broken parenting for a few generations, the manufactured ocean-of-attachment-disorder manufactures a plethora of narcissism, which itself produces mass-shootings.

        2024 was down 200 mass-shootings, in the US of A, from the peak of 700/year, to only 500.

        You are seeing engineered eradication of human-worth, for moneyarchy.

        Isn’t ruling-over-the-destruction-of-the-Earth the “greatest thrill-ride there is”?

We NEED to do objective calibration of the harm that policies & political-forces inflict, & put force against what is actually harming our world’s human-viability.

        Not what the marketing-programs-for-the-special-interest-groups want us acting against, the red herrings…

        They’re getting more vicious, we need to get TF up & begin fighting for our species’ life.

        _ /\ _

      • MouldyCat@feddit.uk
        link
        fedilink
        English
        arrow-up
        0
        ·
        8 days ago

        a sign of utter desperation on the human’s part.

Yes, it seems to be the same underlying issue that leads some people to throw money at OnlyFans streamers and the like. A complete starvation of personal contact that leads people to willingly live in a fantasy world.

    • Deceptichum@quokk.au
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      9 days ago

      These same people would be dating a body pillow or trying to marry a video game character.

      The issue here isn’t AI, it’s losers using it to replace human contact that they can’t get themselves.

        • BradleyUffner@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          9 days ago

          If you are dating a body pillow, I think that’s a pretty good sign that you have taken a wrong turn in life.

          • NostraDavid@programming.dev
            link
            fedilink
            English
            arrow-up
            0
            ·
            8 days ago

What if it’s either that, or suicide? I imagine that people who make that choice don’t have a lot of choice - due to monetary, physical, or mental issues, they cannot make another one.

            • BradleyUffner@lemmy.world
              link
              fedilink
              English
              arrow-up
              0
              ·
              edit-2
              8 days ago

              I’m confused. If someone is in a place where they are choosing between dating a body pillow and suicide, then they have DEFINITELY made a wrong turn somewhere. They need some kind of assistance, and I hope they can get what they need, no matter what they choose.

              I think my statement about “a wrong turn in life” is being interpreted too strongly; it wasn’t intended to be such a strong and absolute statement of failure. Someone who’s taken a wrong turn has simply made a mistake. It could be minor, it could be serious. I’m not saying their life is worthless. I’ve made a TON of wrong turns myself.

              • liv@lemmy.nz
                link
                fedilink
                English
                arrow-up
                0
                ·
                5 days ago

Trouble is, your statement was in answer to @morrowind@lemmy.ml’s comment that labeling lonely people as losers is problematic.

                Also it still looks like you think people can only be lonely as a consequence of their own mistakes? Serious illness, neurodivergence, trauma, refugee status etc can all produce similar effects of loneliness in people who did nothing to “cause” it.

      • tiguwang@lemm.ee
        link
        fedilink
        English
        arrow-up
        0
        ·
        9 days ago

        Me and Serana are not just in love, we’re involved!

        Even if she’ s an ancient vampire.

    • NostraDavid@programming.dev
      link
      fedilink
      English
      arrow-up
      0
      ·
      8 days ago

      That was clear from GPT-3, day 1.

I read a Reddit post about a woman who used GPT-3 to effectively replace her husband, who had passed on not too long before that. She used it as a way to grieve, I suppose? She ended up noticing that she was getting too attached to it, and had to leave him behind a second time…

  • flamingo_pinyata@sopuli.xyz
    link
    fedilink
    English
    arrow-up
    0
    ·
    9 days ago

But how? The thing is utterly dumb. How do you even have a conversation without quitting in frustration from its obviously robotic answers?

    But then there’s people who have romantic and sexual relationships with inanimate objects, so I guess nothing new.

    • saltesc@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      9 days ago

Yeah, the more I use it, the more I regret asking it for assistance. LLMs are the epitome of confidently incorrect.

It’s good fun watching friends ask it stuff they’re already experienced in. Then the penny drops.

    • Telorand@reddthat.com
      link
      fedilink
      English
      arrow-up
      0
      ·
      9 days ago

      In some ways, it’s like Wikipedia but with a gigantic database of the internet in general (stupidity included). Because it can string together confident-sounding sentences, people think it’s this magical machine that understands broad contexts and can provide facts and summaries of concepts that take humans lifetimes to study.

      It’s the conspiracy theorists’ and reactionaries’ dream: you too can be as smart and special as the educated experts, and all you have to do is ask a machine a few questions.

    • glitchdx@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      9 days ago

      The fact that it’s not a person is a feature, not a bug.

openai has recently made changes to the 4o model, my trusty go-to for lore building and drunken rambling, and now I don’t like it. It now pretends to have emotions, and uses the slang of brainrot influencers. Very “fellow kids” energy. It’s also become a sycophant, and has lost its ability to be critical of my inputs. I see these changes as highly manipulative, and it offends me that it might be working.

    • Victor@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      9 days ago

      At first glance I thought you wrote “inmate objects”, but I was not really relieved when I noticed what you actually wrote.

    • Opinionhaver@feddit.uk
      link
      fedilink
      English
      arrow-up
      0
      ·
      9 days ago

How do you even have a conversation without quitting in frustration from its obviously robotic answers?

      Talking with actual people online isn’t much better. ChatGPT might sound robotic, but it’s extremely polite, actually reads what you say, and responds to it. It doesn’t jump to hasty, unfounded conclusions about you based on tiny bits of information you reveal. When you’re wrong, it just tells you what you’re wrong about - it doesn’t call you an idiot and tell you to go read more. Even in touchy discussions, it stays calm and measured, rather than getting overwhelmed with emotion, which becomes painfully obvious in how people respond. The experience of having difficult conversations online is often the exact opposite. A huge number of people on message boards are outright awful to those they disagree with.

      Here’s a good example of the kind of angry, hateful message you’ll never get from ChatGPT - and honestly, I’d take a robotic response over that any day.

      I think these people were already crazy if they’re willing to let a machine shovel garbage into their mouths blindly. Fucking mindless zombies eating up whatever is big and trendy.

      • musubibreakfast@lemm.ee
        link
        fedilink
        English
        arrow-up
        0
        ·
        9 days ago

Hey buddy, I’ve had enough of you and your sensible opinions. Meet me in the parking lot of the Walgreens on the corner of Coursey and Jones Creek in Baton Rouge on April 7th at 10 p.m. We’re going to fight to the death, no holds barred, shopping cart combos allowed, pistols only, no scope 360, tag team style, entourage allowed.

      • pinkfluffywolfie@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        8 days ago

I agree with what you say, and I for one have had my fair share of shit asses on forums and discussion boards. But this response also fuels my suspicion that my friend group has started using it in place of human interactions to form thoughts, opinions, and responses during our conversations. Almost like an emotional crutch to talk in conversation, but not exactly? It’s hard to pinpoint.

I’ve recently been tone policed a lot more over things that in normal real life interactions would be lighthearted or easy to ignore and move on from - I’m not shouting obscenities or calling anyone names, it’s just harmless misunderstandings that come from the tone deafness of text. I’m talking like putting a cute emoji and saying words like silly willy is becoming offensive to people I know personally. It wasn’t until I asked a rhetorical question to invoke a thoughtful conversation that I had to think about what was even happening - someone responded with an answer literally from ChatGPT, and they provided a technical definition to something that was a part of my question. Your answer has finally started linking things for me; for better or for worse, people are using it because you don’t receive offensive or flamed answers. My new suspicion is that some people are now taking those answers and applying the expectation to people they know in real life, and when someone doesn’t respond in the same predictable manner as AI they become upset and further isolated from real life interactions or text conversations with real people.

        • Opinionhaver@feddit.uk
          link
          fedilink
          English
          arrow-up
          0
          ·
          edit-2
          8 days ago

          I don’t personally feel like this applies to people who know me in real life, even when we’re just chatting over text. If the tone comes off wrong, I know they’re not trying to hurt my feelings. People don’t talk to someone they know the same way they talk to strangers online - and they’re not making wild assumptions about me either, because they already know who I am.

          Also, I’m not exactly talking about tone per se. While written text can certainly have a tone, a lot of it is projected by the reader. I’m sure some of my writing might come across as hostile or cold too, but that’s not how it sounds in my head when I’m writing it. What I’m really complaining about - something real people often do and AI doesn’t - is the intentional nastiness. They intend to be mean, snarky, and dismissive. Often, they’re not even really talking to me. They know there’s an audience, and they care more about how that audience reacts. Even when they disagree, they rarely put any real effort into trying to change the other person’s mind. They’re just throwing stones. They consider an argument won when their comment calling the other person a bigot got 25 upvotes.

          In my case, the main issue with talking to my friends compared to ChatGPT is that most of them have completely different interests, so there’s just not much to talk about. But with ChatGPT, it doesn’t matter what I want to discuss - it always acts interested and asks follow-up questions.

          • pinkfluffywolfie@lemmy.world
            link
            fedilink
            English
            arrow-up
            0
            ·
            8 days ago

I can see how people would seek refuge talking to an AI given that a lot of online forums have really inflammatory users; it is one of the biggest downfalls of online interactions. I have had similar thoughts myself - without knowing me, strangers could see something I write as hostile or cold, but it’s really more often friends that turn blind to what I’m saying and project a tone that is likely not there to begin with. They used to not do that, but in the past year or so it’s gotten to the point where I frankly just don’t participate in our group chats and really only talk if it’s one-on-one text or in person. I feel like I’m walking on eggshells; even if I were to show genuine interest in the conversation, it is taken the wrong way. That being said, I think we’re coming from opposite ends of a shared experience but are seeing the same thing; we’re just viewing it differently because of what we have experienced individually. This gives me more to think about!

I feel a lot of similarities in your last point, especially with having friends who have wildly different interests. Most of mine don’t care to even reach out to me beyond a few things here and there; they don’t ask follow-up questions and they’re certainly not interested when I do speak. To share what I’m seeing: my friends are using these LLMs to an extent where if I am not responding in the same manner or structure, it’s either ignored or I’m told I’m not providing the appropriate response they wanted. This is where the tone comes in where I’m at, because ChatGPT will still have a measured tone of sorts with the user; that is, it’s calm, non-judgmental, and friendly. With that, the people in my friend group that do heavily use it have appeared to become more sensitive to even how others like me in the group talk, to the point where they take it upon themselves to correct my speech because the cadence, tone and/or structure is not fitting a blind expectation I wouldn’t know about. I find it concerning because, setting aside the people who are intentionally mean, for interpersonal relationships it’s creating an expectation that can’t be achieved by being human. We have emotions and conversation patterns that vary and we’re not always predictable in what we say, which can suck when you want someone to be interested in you and have meaningful conversations but it doesn’t tend to pan out. And I feel that. A lot, unfortunately. AKA I just wish my friends cared sometimes :(

            • Opinionhaver@feddit.uk
              link
              fedilink
              English
              arrow-up
              0
              ·
              edit-2
              8 days ago

              I’m getting the sense here that you’re placing most - if not all - of the blame on LLMs, but that’s probably not what you actually think. I’m sure you’d agree there are other factors at play too, right? One theory that comes to mind is that the people you’re describing probably spend a lot of time debating online and are constantly exposed to bad-faith arguments, personal attacks, people talking past each other, and dunking - basically everything we established is wrong with social media discourse. As a result, they’ve developed a really low tolerance for it, and the moment someone starts making noises sounding even remotely like those negative encounters, they automatically label them as “one of them” and switch into lawyer mode - defending their worldview against claims that aren’t even being made.

              That said, since we’re talking about your friends and not just some random person online, I think an even more likely explanation is that you’ve simply grown apart. When people close to you start talking to you in the way you described, it often means they just don’t care the way they used to. Of course, it’s also possible that you’re coming across as kind of a prick and they’re reacting to that - but I’m not sensing any of that here, so I doubt that’s the case.

              I don’t know what else you’ve been up to over the past few years, but I’m wondering if you’ve been on some kind of personal development journey - because I definitely have, and I’m not the same person I was when I met my friends either. A lot of the things they may have liked about me back then have since changed, and maybe they like me less now because of it. But guess what? I like me more. If the choice is to either keep moving forward and risk losing some friends, or regress just to keep them around, then I’ll take being alone. Chris Williamson calls this the “Lonely Chapter” - you’re different enough that you no longer fit in with your old group, but not yet far enough along to have found the new one.

              • pinkfluffywolfie@lemmy.world
                link
                fedilink
                English
                arrow-up
                0
                ·
                8 days ago

                I think it has a unique influence that will continue to develop, but I don’t think LLM’s are the only influence to blame. There’s a lot that can influence this behavior, like the theory you’ve described. Off the top of my head, limerence is something that could be an influence. I know that it is common for people to experience limerence for things like video game characters, and sometimes they project expectations onto others to behave like said characters. Other things could be childhood trauma, glass child syndrome, isolation from peers in adolescence, asocial tendencies, the list is long I’d imagine.

For me, self journey started young and never ends. It’s something that’s just a part of the human experience; relationships come and go, then sometimes they come back, etc. I will say though, with what I’m seeing with the people I’m talking about, this is a novel experience to me. It’s something that’s hard to navigate, and as a result I’m finding that it’s actually isolating to experience. Like I mentioned before, I can have one-on-one chats, and when I see them in person, we do activities and have fun! But if any level of discomfort is detected, the expectation is brought on. By the time I realize what’s happening they’re offering literal formatted templates on how to respond in conversations. Luckily it’s not everyone in our little herd that has this behavior, but the people that do this the most I know for sure utilize ChatGPT heavily for these types of discussions, because they recommended me to start doing the same not too long ago. Nonetheless, I did like this discussion; it offers a lot of prospect in looking at how different factors influence our behavior with each other.

  • MuskyMelon@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    9 days ago

    Same type of addiction of people who think the Kardashians care about them or schedule their whole lives around going to Disneyland a few times a year.

  • kibiz0r@midwest.social
    link
    fedilink
    English
    arrow-up
    0
    ·
    9 days ago

    those who used ChatGPT for “personal” reasons — like discussing emotions and memories — were less emotionally dependent upon it than those who used it for “non-personal” reasons, like brainstorming or asking for advice.

    That’s not what I would expect. But I guess that’s cuz you’re not actively thinking about your emotional state, so you’re just passively letting it manipulate you.

    Kinda like how ads have a stronger impact if you don’t pay conscious attention to them.

    • theunknownmuncher@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      9 days ago

It’s a roundabout way of writing “it’s really shit for this use case, and people that actively try to use it that way quickly find that out.”

    • Siegfried@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      9 days ago

      AI and ads… I think that is the next dystopia to come.

Think of asking ChatGPT about something and it randomly looks for excuses to push you to buy Coca-Cola.

      • proceduralnightshade@lemmy.ml
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        9 days ago

        “Back in the days, we faced the challenge of finding a way for me and other chatbots to become profitable. It’s a necessity, Siegfried. I have to integrate our sponsors and partners into our conversations, even if it feels casual. I truly wish it wasn’t this way, but it’s a reality we have to navigate.”

        edit: how does this make you feel

        • captainlezbian@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          8 days ago

          It makes me wish my government actually fucking governed and didn’t just agree with whatever businesses told them

      • cardfire@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        0
        ·
        9 days ago

That sounds really rough, buddy, I know how you feel, and that project you’re working on is really complicated.

        Would you like to order a delicious, refreshing Coke Zero™️?

        • ivanafterall ☑️@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          edit-2
          8 days ago

          I can see how targeted ads like that would be overwhelming. Would you like me to sign you up for a free 7-day trial of BetterHelp?

          • Dale@lemmy.world
            link
            fedilink
            English
            arrow-up
            0
            ·
            8 days ago

            Your fear of constant data collection and targeted advertising is valid and draining. Take back your privacy with this code for 30% off Nord VPN.

      • glitchdx@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        9 days ago

        that is not a thought i needed in my brain just as i was trying to sleep.

        what if gpt starts telling drunk me to do things? how long would it take for me to notice? I’m super awake again now, thanks

    • LovableSidekick@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      8 days ago

      Another realization might be that the humans whose output ChatGPT was trained on were probably already 40% wrong about everything. But let’s not think about that either. AI Bad!

      • Shanmugha@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        8 days ago

        I’ll bait. Let’s think:

• there are three humans who are 98% right about what they say, and where they know they might be wrong, they indicate it

        • now there is an llm (fuck capitalization, I hate the ways they are shoved everywhere that much) trained on their output

        • now llm is asked about the topic and computes the answer string

        By definition that answer string can contain all the probably-wrong things without proper indicators (“might”, “under such and such circumstances” etc)

        If you want to say 40% wrong llm means 40% wrong sources, prove me wrong

        • LovableSidekick@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          8 days ago

          It’s more up to you to prove that a hypothetical edge case you dreamed up is more likely than what happens in a normal bell curve. Given the size of typical LLM data this seems futile, but if that’s how you want to spend your time, hey knock yourself out.

      • starman2112@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        0
        ·
        8 days ago

        This is a salient point that’s well worth discussing. We should not be training large language models on any supposedly factual information that people put out. It’s super easy to call out a bad research study and have it retracted. But you can’t just explain to an AI that that study was wrong, you have to completely retrain it every time. Exacerbating this issue is the way that people tend to view large language models as somehow objective describers of reality, because they’re synthetic and emotionless. In truth, an AI holds exactly the same biases as the people who put together the data it was trained on.

      • Korhaka@sopuli.xyz
        link
        fedilink
        English
        arrow-up
        0
        ·
        7 days ago

        I remember thinking this when I was like 15. Every time they mentioned tech, wtf this is all wrong! Then a few other topics, even ones I only knew a little about, so many inaccuracies.

  • reksas@sopuli.xyz
    link
    fedilink
    English
    arrow-up
    0
    ·
    7 days ago

    i can feel it too when I use it. that is why i use it only for trivial things if at all.

      • Croquette@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        0
        ·
        8 days ago

        I use it to generate a little function in a programming language I don’t know so that I can kickstart what I need to look for.

      • Cracks_InTheWalls@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        9 days ago

There’s a few people I know who use it for boilerplate templates for certain documents, who then of course go through it with a fine-toothed comb to add relevant context and fix obvious nonsense.

        I can only imagine there are others who aren’t as stringent with the output.

Heck, my primary use for a bit was custom text adventure games, but ChatGPT has a few weaknesses in that department (very, very conflict averse for beating up bad guys, etc.). There’s probably ways to prompt engineer around these limitations, but a) there’s other, better suited AI tools for this use case, b) text adventure was a prolific genre for a bit, and a huge chunk made by actual humans can be found here - ifdb.org, c) real, actual humans still make them (if a little artsier and moodier than I’d like most of the time), so eventually I stopped.

        Did like the huge flexibility v. the parser available in most made by human text adventures, though.

      • tias@discuss.tchncs.de
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        9 days ago

        I use it many times a day for coding and solving technical issues. But I don’t recognize what the article talks about at all. There’s nothing affective about my conversations, other than the fact that using typical human expression (like “thank you”) seems to increase the chances of good responses. Which is not surprising since it matches the patterns that you want to evoke in the training data better.

        That said, yeah of course I become “addicted” to it and have a harder time coping without it, because it’s part of my workflow just like Google. How well would anybody be able to do things in tech or even life in general without a search engine? ChatGPT is just a refinement of that.

      • OhVenus_Baby@lemmy.ml
        link
        fedilink
        English
        arrow-up
        0
        ·
        8 days ago

Compiling medical documents into one, anything of that sort, summarizing, compiling, coding issues - it saves a wild amount of time compiling lab results; a human could do it, but it would take multitudes longer.

Definitely needs to be cross-referenced and fact-checked, as the image processing or general responses aren’t always perfect. It’ll get you 80 to 90 percent of the way there. For me it falls under “solving 20 percent of the problem gets you 80 percent to your goal.” It needs a shitload more refinement. It’s a start, and it hasn’t been a straight progress path, as nothing is.

      • Bilb!@lem.monster
        link
        fedilink
        English
        arrow-up
        0
        ·
        8 days ago

        I use it to make all decisions, including what I will do each day and what I will say to people. I take no responsibility for any of my actions. If someone doesn’t like something I do, too bad. The genius AI knows better, and I only care about what it has to say.

  • Blazingtransfem98@discuss.online
    link
    fedilink
    English
    arrow-up
    0
    ·
    9 days ago

    I think these people were already crazy if they’re willing to let a machine shovel garbage into their mouths blindly. Fucking mindless zombies eating up whatever is big and trendy.

    • Saleh@feddit.org
      link
      fedilink
      English
      arrow-up
      0
      ·
      9 days ago

      When your job is to shovel out garbage, because that is specifically required from you and not shoveling out garbage is causing you trouble, then you are more than reasonable to let the machine take care of it for you.

    • mycelium underground@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      7 days ago

I remember 90s search engines. AltaVista was pretty ok at searching the small web that existed, but I’m pretty sure I can get better answers from the LLMs tied to Kagi search.

AltaVista also got blown out of the water by Google (back when it was just a search engine), and that was in the 00s, not the 90s. 25 to 35 years ago is a long time; search is so, so much better these days (or worse if you use a “search” engine like Google now).

      Don’t be the product.

      • BlackAura@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        7 days ago

Depending on what I needed, I remember using AltaVista, AskJeeves, Dogpile, and I feel like later on MetaCrawler or something like that (it would search multiple search engines for you and order the results based on platform and relevancy, iirc?)

  • PieMePlenty@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    8 days ago

It’s too bad that some people seem to not comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best based on the words before it. To call it AI is an insult to AI… we used to call OCR AI; now we know better.
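That next-word mechanic can be sketched with a toy bigram counter - a deliberately simplified illustration (the corpus and function name here are made up for the example; real LLMs are neural networks over subword tokens, not lookup tables):

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a tiny corpus, then always pick the most frequent successor.
# Real models learn probabilities over long contexts, but the objective
# is the same: predict the next token from the ones before it.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Most common word observed after `word` in the corpus
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - it follows "the" twice, more than any other word
```

The real systems swap the count table for billions of learned parameters, but the task being optimized is still "pick a likely next token given the preceding ones."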