• Catoblepas@lemmy.blahaj.zone

    This is why Melon and the AI chud brigade are so obsessed with having a chatbot (sorry, “AI”) that always agrees with them: a stupid number of people think LLMs are search engines, or worse, search engines but better, some diviner of truth.

    • SkyNTP@lemmy.ml

      Information is not truth. A do-or-die slogan for the 21st century.

      • _cryptagion [he/him]@lemmy.dbzer0.com

        Yes, however, using a public SearXNG instance makes your searches effectively private, since it’s the server doing them, not you. It also does not use generative AI to produce the results, and won’t until or unless the ability for normal searches is removed.

        And at that point, you can just disable that engine for searching.
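
        For anyone curious what “the server does the search for you” looks like from the client side, here’s a minimal Python sketch against a public instance’s search API. It’s only an illustration: the instance URL and engine names are placeholders, and it assumes the instance admin has enabled JSON output.

        ```python
        # Minimal sketch, not an official SearXNG client. Assumes a public instance
        # whose admin has enabled JSON output; the URL and engine names below are
        # placeholders, not recommendations.
        import requests

        INSTANCE = "https://searx.example.org"  # hypothetical public instance

        def searx_search(query: str, engines: str = "duckduckgo,brave") -> list[dict]:
            """The instance runs the query, so upstream engines see its IP, not yours."""
            resp = requests.get(
                f"{INSTANCE}/search",
                params={"q": query, "format": "json", "engines": engines},
                timeout=10,
            )
            resp.raise_for_status()
            return resp.json().get("results", [])

        if __name__ == "__main__":
            for hit in searx_search("metasearch engines")[:5]:
                print(hit.get("title"), "->", hit.get("url"))
        ```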

        • leanleft@lemmy.ml

          From a privacy perspective…
          you might as well use a VPN or Tor. Same thing.

          • _cryptagion [he/him]@lemmy.dbzer0.com

            Yes, but that’s not the only benefit to it. It’s a metasearch engine, meaning it searches all the individual sites you ask for, and combines the results into one page. This makes it more akin to DDG, but it doesn’t just use one search provider.
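
            If it helps to picture what a metasearch engine does, here’s a toy Python sketch of the fan-out-and-merge idea. The two “engines” are stubs standing in for real backends, not anyone’s actual API.

            ```python
            # Toy illustration of the metasearch idea: fan one query out to several
            # engines, then merge and de-duplicate the results.
            from concurrent.futures import ThreadPoolExecutor

            def engine_a(query: str) -> list[dict]:
                return [{"url": "https://example.com/a",
                         "title": f"A: {query}", "score": 0.9}]

            def engine_b(query: str) -> list[dict]:
                return [{"url": "https://example.com/a",
                         "title": f"B: {query}", "score": 0.7},
                        {"url": "https://example.com/b",
                         "title": f"B: {query} (2)", "score": 0.8}]

            def metasearch(query: str) -> list[dict]:
                # Query every engine in parallel.
                with ThreadPoolExecutor() as pool:
                    batches = list(pool.map(lambda eng: eng(query), (engine_a, engine_b)))
                best: dict[str, dict] = {}
                for result in (r for batch in batches for r in batch):
                    seen = best.get(result["url"])
                    if seen is None or result["score"] > seen["score"]:
                        best[result["url"]] = result  # keep the best-scoring copy per URL
                return sorted(best.values(), key=lambda r: r["score"], reverse=True)

            print(metasearch("tri-tip rub"))
            ```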

            • leanleft@lemmy.ml

              It’s a fantastic metasearch engine, but people frequently don’t configure it to its full potential IMO. One common mishap is leaving the default setting that sends queries to Google. 💩

  • BananaTrifleViolin@lemmy.world

    The Internet was a great resource for sharing and pooling human knowledge.

    Now generative AI has come along to dilute knowledge in a great sea of excrement. Humans have to hunt through the shit to find knowledge.

    • criss_cross@lemmy.world

      I mean, Google was already like this before GenAI.

      It’s a nightmare to find anything you’re actually looking for and not SEO spam.

      GenAI cuts out some of that noise, but it has its own problems too.

      • JeremyHuntQW12@lemmy.world

        You should see what searching was like on AltaVista. You’d have to scroll past dozens of posts of random numbers and letters to find anything legible. Click through and your computer would emit a cacophony of bell sounds and pour out screens of random nonsense and then freeze permanently. You had to rely on links and web-rings to navigate with any degree of success.

        And that in itself was a massive improvement on what was available before.

        • criss_cross@lemmy.world

          Oh yeah, I remember the AltaVista, Lycos, Ask Jeeves, and Dogpile days. I agree search has come a long way. I’m just saying Google used to be better in that old sweet spot.

    • GaiusBaltar@lemmy.world

      To be fair, humans were already diluting it in a great sea of excrement, the robots just came to take our job and do it even faster and better.

    • Roflmasterbigpimp@lemmy.world

      The Internet was a great resource for sharing and pooling human knowledge.

      Bruh, did you ever go to 4chan or Reddit? The Internet turned into a dumpster fire a long time before AI.

        • Roflmasterbigpimp@lemmy.world

          It’s still part of the Internet. If you can just pick and choose which parts we are talking about, then the Internet is still fine 🥸

  • lightnsfw@reddthat.com

    Eh… I got it to find a product on Amazon that met the specs I was looking for when no other search worked. It’s certainly a last resort, but it worked. Idk why, whenever I’m looking to buy anything lately, the only criteria I care about are somehow never documented properly…

      • lightnsfw@reddthat.com

        Yea, it was reading the contents of the item description, I think. In this instance I was looking for an item with specific dimensions, and just searching those didn’t work because Amazon sellers are ass at naming shit and it returned a load of crap. But when I put them in their AI thing, it pulled several matches right away.

      • lightnsfw@reddthat.com

        I mean, it gave me exactly what I asked for. The only further research was to actually read the item description to verify that, but I could have blindly accepted it and received what I was looking for.

  • Irdial@lemmy.sdf.org

    In general I agree with the sentiment of the article, but I think the broader issue is media literacy. When the Internet came about, people had similar reservations about the quality of information, and most of us learned in school how to find quality information online.

    LLMs are a tool, and people need to learn how to use them correctly and responsibly. I’ve been using Perplexity.AI as a search engine for a while now, and I think they’re taking the right approach. It employs LLMs at different stages to parse your query, perform web searches on your behalf, and summarize findings. It provides in-text citations as well, which is an opportunity for a media-literate person to confirm the validity of anything important.
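
    To make that pipeline concrete, here’s a rough Python sketch of the shape described above (search on the user’s behalf, then summarize with numbered citations). It is not Perplexity’s actual implementation; the search and LLM calls are stubbed placeholders so the flow runs on its own.

    ```python
    # Rough sketch of a "search, then summarize with citations" flow.
    # The search and LLM calls are stand-ins, not real APIs.
    def web_search(question: str) -> list[dict]:
        # Stand-in for a real search API call.
        return [{"title": "Example source",
                 "url": "https://example.org/article",
                 "snippet": "An excerpt the model can cite."}]

    def llm(prompt: str) -> str:
        # Stand-in for a real LLM call.
        return "A short answer grounded in the sources [1]."

    def answer(question: str) -> str:
        sources = web_search(question)
        numbered = "\n".join(
            f"[{i + 1}] {s['title']} ({s['url']}): {s['snippet']}"
            for i, s in enumerate(sources)
        )
        prompt = (
            "Answer the question using only the sources below, citing them as [n].\n\n"
            f"Question: {question}\n\nSources:\n{numbered}"
        )
        summary = llm(prompt)
        citations = "\n".join(f"[{i + 1}] {s['url']}" for i, s in enumerate(sources))
        return f"{summary}\n\nSources:\n{citations}"

    print(answer("How do metasearch engines protect privacy?"))
    ```

    The in-text [n] markers are what a media-literate reader would follow back to the listed URLs to verify anything important.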

    • captainlezbian@lemmy.world

      Ok, but may I point you to the reality that internet-spread misinformation is a critically bad problem at the moment?

  • HEXN3T@lemmy.blahaj.zone

    I’ve used it for very, very specific cases. I’m on Kagi, so it’s a built-in feature (that isn’t intrusive), and it typically generates great answers. That is, unless I’m getting into something obscure. I’ve used it less than five times, all in all.

  • curiousaur@reddthat.com

    Who else is going to aggregate those recipes for me without having to scroll past ads and personal blog bs?

    • Knoxvomica@lemmy.ca

      So I rarely splurge on an app, but I did splurge on AntList on Android because they have an import-recipe function. It also allows you to get paywall-blocked recipes if you are fast enough.

        • Nalivai@lemmy.world

          The fuck do you mean without telling? I am very explicitly telling you that I don’t use them, and I’m very openly telling you that you also shouldn’t.

          • curiousaur@reddthat.com

            I use them hundreds of times daily. I’m 3-5x more productive thanks to them. I’m incorporating them into the products I’m building to help make others who use the platform more productive.

            Why the heck should I not use them? They are an excellent tool for so many tasks, and if you don’t stay on top of their use, in many fields you will fall irrecoverably behind.

    • bradd@lemmy.world

      There was a project a few years back that scraped and parsed literally the entire internet for recipes and put them in an Elasticsearch db. I made a bomb-ass rub for a tri-tip and a chimichurri with it that people still talk about today. IIRC I just searched all tri-tip rubs, did a tag cloud of the most common ingredients, and looked at ratios, so in a way it was the most generic or average rub.

      If I find the dataset I’ll update; I haven’t been able to find it yet, but I’m sure I still have it somewhere.
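
      For illustration, here’s a minimal sketch of that “most common ingredients” aggregation with Python’s Counter. The recipe data is invented, not the original scraped dataset.

      ```python
      # Sketch of the "tag cloud of most common ingredients" idea described above.
      # The recipe data is made up for illustration.
      from collections import Counter

      recipes = [
          {"name": "rub 1", "ingredients": ["salt", "black pepper", "garlic powder", "paprika"]},
          {"name": "rub 2", "ingredients": ["salt", "black pepper", "cumin", "paprika"]},
          {"name": "rub 3", "ingredients": ["salt", "brown sugar", "garlic powder", "paprika"]},
      ]

      counts = Counter(ing for r in recipes for ing in r["ingredients"])

      # Ingredients ranked by how many recipes include them: the "most average" rub.
      for ingredient, n in counts.most_common():
          print(f"{ingredient}: in {n}/{len(recipes)} recipes")
      ```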

  • lemmylommy@lemmy.world

    No. Learn to become media literate. Just as looking at the preview of the first Google result is not enough, blindly trusting LLMs is a bad idea. And given how shitty Google has become lately, ChatGPT might be the lesser of two evils.

    • Ech@lemm.ee

      No.

      Yes. Using ChatGPT as a search engine showcases a distinct lack of media literacy. It’s not an information resource. It’s a text generator. That’s it. If it lacks information, it will just make it up. That’s not something anyone should use as any kind of tool for learning or researching.

      • lemmylommy@lemmy.world

        Well, inside that text generator lies useful information, as well as misinformation of course, because it has been trained on exactly that. Does it make shit up? Absolutely. But so do and did a lot of Google or Bing search results, even prior to the AI-slop content-farm era.

        And besides that, it is a fancy text generator that can use tools, such as searching Bing (in the case of ChatGPT) and summarizing search results. While not 100% accurate, the summaries are usually fairly good.

        In my experience, the combination of information in the LLM, web search, asking follow-up questions, and looking at the sources gives better and much faster results than sifting through search results manually.

        As long as you don’t take the first reply as gospel truth (as you should not do with the first Google or Bing result either) and you apply the appropriate amount of scrutiny based on the importance of your questions (as you should always do), ChatGPT is far superior to a classic web search. Which is, of course, where media literacy matters.

      • rumba@lemmy.zip

        Both the paid version of OpenAI and Copilot are able to search the web if they don’t know about something.

        The biggest problem with the current models is that they aren’t very good at knowing when they don’t know something.

        The o1 preview actually solves this pretty well, but your average search takes north of 10 seconds.

        • lurklurk@lemmy.world

          They never know about something, though. They are just text randomisers trained to generate plausible-looking text.

            • lurklurk@lemmy.world

              The problem isn’t that the model doesn’t know when it doesn’t know. The models never know. They’re text predictors. Sometimes the predictive text happens to be right, but the text predictor doesn’t know.

              • rumba@lemmy.zip

                So, let me get this straight. Is it your purpose in life to butt into the conversation, with no helpful information or context to add, any time anyone mentions the word “know” in any form or context, just to point out that AI isn’t alive (which is obvious to everyone) and that it’s just a text predictor (which is misleading at best)? Can someone help me crowdsource this poor soul a hobby?

      • Gladaed@feddit.org

        You are wrong. It is incredibly useful if the thing you are trying to Google has multiple meanings, e.g. how to kill a child. LLMs can help you figure out more specific search terms and where to look.

        • Ech@lemm.ee

          LLMs can help you figure out more specific search terms and where to look.

          Not knowing how to use a search engine properly doesn’t mean these sites are better. It just means you have more to learn.

  • Greg Clarke@lemmy.ca

    Generative AI is a tool; sometimes it’s useful, sometimes it’s not. If you want a recipe for pancakes, you’ll get there a lot quicker using ChatGPT than using Google. It’s also worth noting that you can ask tools like ChatGPT for its references.

    • werefreeatlast@lemmy.world

      2 lb of sugar
      3 teaspoons of fermented gasoline, unleaded
      4 loaves of stale bread
      35 ml of glycol
      Mix it all up and add 1 L of water.

      • Free_Opinions@feddit.uk

        Do you also drive off a bridge when your navigator tells you to? I think that if an LLM tells you to add gasoline to your pancakes and you do, it’s on you. Common sense doesn’t seem very common nowadays.

        • werefreeatlast@lemmy.world

          Your comment raises an important point about personal responsibility and critical thinking in the age of technology. Here’s how I would respond:

          Acknowledging Personal Responsibility

          You’re absolutely right that individuals must exercise judgment when interacting with technology, including language models (LLMs). Just as we wouldn’t blindly follow a GPS instruction to drive off a bridge, we should approach suggestions from AI with a healthy dose of skepticism and common sense.

          The Role of Critical Thinking

          In our increasingly automated world, critical thinking is essential. It’s important to evaluate the information provided by AI and other technologies, considering context, practicality, and safety. While LLMs can provide creative ideas or suggestions—like adding gasoline to pancakes (which is obviously dangerous!)—it’s crucial to discern what is sensible and safe.

          Encouraging Responsible Use of Technology

          Ultimately, it’s about finding a balance between leveraging technology for assistance and maintaining our own decision-making capabilities. Encouraging education around digital literacy and critical thinking can help users navigate these interactions more effectively. Thank you for bringing up this thought-provoking topic! It’s a reminder that while technology can enhance our lives, we must remain vigilant and responsible in how we use it.

          Related

          What are some examples…lol

    • WhyJiffie@sh.itjust.works

      It’s also worth noting that you can ask tools like ChatGPT for its references.

      Last time I tried that, it made up links that didn’t work, and then it admitted that it cannot reference anything because it does not have access to the internet.

      • Greg Clarke@lemmy.ca

        That’s my point: if the model returns a hallucinated source, you can probably disregard its output. But if the model provides an accurate source, you can verify its output. Depending on the information you’re researching, this approach can be much quicker than using Google. Out of interest, have you experienced source hallucinations on ChatGPT recently (last few weeks)? I have not experienced source hallucinations in a long time.

        • 31337@sh.itjust.works

          I use GPT (4o, premium) a lot, and yes, I still sometimes experience source hallucinations. It also will sometimes hallucinate incorrect things not in the source. I get better results when I tell it not to browse. The large context of processing web pages seems to hurt its “performance.” I would never trust gen AI for a recipe. I usually just use Kagi to search for recipes and have it set to promote results from recipe sites I like.