One of Google Search’s oldest and best-known features, cache links, is being retired. Accessed through the “Cached” button, these links showed a snapshot of a web page as it looked the last time Google indexed it. According to Google, however, they’re no longer needed.

“It was meant for helping people access pages when way back, you often couldn’t depend on a page loading,” Google’s Danny Sullivan wrote. “These days, things have greatly improved. So, it was decided to retire it.”

  • Monomate@lemm.ee · 8 months ago

    Ironically, the link to this article is offline for me. “Cached” surely would have solved my problem.

  • EnderMB@lemmy.world · 8 months ago

    How has no one built a new search engine over the last decade or so, while Google’s flagship product has been in clear decline?

    I know of the likes of DDG, and Bing has worked hard to catch up, but I’m genuinely surprised that a startup hasn’t risen to find a novel way of attacking reliable web search. Some will say it’s a “solved problem”; I’d argue it was, but is no longer.

    A web search engine that crawls and searches historic versions of a web page could be an incredibly useful resource. If someone could also find a novel way to rank and crawl web applications, or to “open” the closed web, it could pair with web search to be a genuine Google killer.

    • piecat@lemmy.world · 8 months ago

      The next revolutionary search engine will be an AI that understands you. Like what a librarian does… not just ads being served.

      • spujb@lemmy.cafe · 8 months ago

        i don’t need a search engine that understands me, i need a search engine that finds sites and pages based on a string of text i provide it

        we should be calling the future piss the way it’s going down the toilet

        • piecat@lemmy.world · 8 months ago

          Well, at the least, you need something to filter out the shit trying to game SEO. To me it seems that AI is the easiest approach.

    • OsrsNeedsF2P@lemmy.ml · 8 months ago

      There are a lot of startups trying to make better search engines. Brave, for example, is one of them. There’s even one run by a Lemmy user, but I forget its name.

      But it’s borderline impossible. In the old days, Google used web scrapers and keyword search. When people started uploading the whole dictionary in white text on their pages, Google added some antispam and context logic. When that got beaten, they measured a site’s credibility by the number of “inlinks” from other websites. Then SEO grew up around gaming that, and you know the rest from there.
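
      The “inlinks as credibility” idea is the one PageRank made famous. A toy sketch of it (the link graph and damping factor below are made up purely for illustration):

      ```python
      # Toy PageRank-style ranking: a page is credible if credible pages
      # link to it. The graph and damping factor are illustrative only.
      def pagerank(links, damping=0.85, iterations=50):
          """links maps each page to the list of pages it links out to."""
          pages = set(links) | {p for outs in links.values() for p in outs}
          rank = {p: 1.0 / len(pages) for p in pages}
          for _ in range(iterations):
              new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
              for page, outs in links.items():
                  for target in outs:
                      # each page passes a damped share of its rank to its outlinks
                      new_rank[target] += damping * rank[page] / len(outs)
              rank = new_rank
          return rank

      toy_web = {
          "blog.example": ["docs.example"],
          "forum.example": ["docs.example", "blog.example"],
          "docs.example": ["blog.example"],
      }
      for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
          print(f"{page}: {score:.3f}")
      ```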

      An indexable version of Archive.org is feasible, borderline trivial with Elasticsearch, but the problem is: who wants that? Sure, you and I may, but no one else cares. Also, say you want to search for something specific: each page could be indexed, with slight differences, thousands of times. Which one will you pick? Maybe you’ll want to set your “search date” to a specific year? Well, guess what: Google has that feature as well.
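
      To be fair, the indexing part really is mostly plumbing. A rough sketch with the Elasticsearch Python client; the index name, field names, and localhost URL are all assumptions:

      ```python
      # Rough sketch of indexing dated page snapshots into Elasticsearch.
      # Index name, field names, and the localhost URL are assumptions.
      from elasticsearch import Elasticsearch

      es = Elasticsearch("http://localhost:9200")

      def index_snapshot(url, captured_at, text):
          # One document per (url, capture date); the same page can be
          # indexed, with slight differences, thousands of times.
          es.index(index="snapshots", document={
              "url": url,
              "captured_at": captured_at,  # e.g. "2016-03-01"
              "text": text,
          })

      def search_snapshots(query, year=None):
          # Full-text match, optionally narrowed to a "search date" year --
          # which, as noted above, Google already offers.
          must = [{"match": {"text": query}}]
          if year:
              must.append({"range": {"captured_at": {
                  "gte": f"{year}-01-01", "lte": f"{year}-12-31"}}})
          return es.search(index="snapshots",
                           query={"bool": {"must": must}},
                           sort=[{"captured_at": "desc"}])
      ```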

      • TWeaK@lemm.ee · 8 months ago

        Brave is not a business that should be supported. Also, I’m pretty sure they just use Bing as a backend.

        There are also a few paid search engines that people say are good.

          • TWeaK@lemm.ee · 8 months ago

            They’ve had a history of controversy, ranging from replacing ads with their own affiliate links to bundling an opt-out crypto miner. Every time something like this happened, the CEO went on a marketing campaign across social media, effectively drowning out the controversial story with an influx of new users. The CEO has meanwhile gotten in trouble for his comments on same-sex marriage and COVID-19.

            In general, it’s always seemed like it would take a very small sack of money for Brave to sell out its users. Also, their browser is Chromium-based, so it’s still contributing to Google’s market dominance and dictatorial position over web technologies.

    • sgtgig@lemmy.world · 8 months ago

      Bing’s Copilot is genuinely pretty good: the AI answer is often accurate, and the way it weaves links into its answer is handy. I find it way more useful than Google Search these days, and I’m pretty much using it on principle, since Google keeps pissing me off by killing services, a few of which I’ve used.

      I don’t think Microsoft is some saint but copilot is just a good product.

    • mlg@lemmy.world · 8 months ago

      • Google invents, invests in, or previously invested in some groundbreaking technology
      • They buy out the competition and throw tons of effort into making a superior product
      • Eventually Google becomes the de facto standard
      • A few years pass
      • Google hands the project off to fresh interns to cut cloud usage and decrease cost
      • Any viable alternatives are immediately bought out by Google
      • Anything left over is either struggling FOSS or another crappy corporate attempt (cough cough, Microsoft)
      • Repeat

      My favorite case in point being Google Maps.

    • AAA@feddit.de · 8 months ago

      Yes, that would be a Google killer. If you somehow find the money to provide it for free.

      Finding a novel way of searching is one thing. Finding a novel way of financing the whole endeavor (and not going the exact route Google is) is another.

    • gunslingerfry@lemmy.world · 8 months ago

      I recommend Kagi. I bought a family plan, and it feels like I’ve gone back to 2016, when search engines weren’t a dumpster fire.

  • Toes♀@ani.social · 8 months ago

    That’s BS, it’s one of the best features Google has, and they’ve been ruining it. The Wayback Machine wishes it could be that comprehensive.

  • Chris@feddit.uk · 8 months ago

    I find this very useful to read paywalled articles that Google has managed to index!

    OK, I see why they might want to get rid of it.

  • rhabarba@feddit.de · 8 months ago

    These days, things have greatly improved.

    Websites will never change their URLs today.

    • ares35@kbin.social · 8 months ago

      i maintain redirects for old URLs for which the content still exists at another address. i’ve been doing that since i started working on web sites 20-some years ago. not many take the time to do that, but i do. so there are at least a few web sites out there where, if you have a 20-year-old bookmark, chances are it still works.
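
      the mechanics are nothing fancy: a permanent redirect per old path. a toy version in python (the paths are made up):

      ```python
      # toy version of "keep old URLs alive": answer each retired path
      # with a 301 pointing at the new address. the paths are made up.
      from http.server import BaseHTTPRequestHandler, HTTPServer

      MOVED = {
          "/articles/2004/review.html": "/archive/2004-review/",
          "/cgi-bin/guestbook.pl": "/contact/",
      }

      class RedirectHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              if self.path in MOVED:
                  self.send_response(301)  # permanent: bookmarks and crawlers update
                  self.send_header("Location", MOVED[self.path])
                  self.end_headers()
              else:
                  self.send_response(404)
                  self.end_headers()

      if __name__ == "__main__":
          HTTPServer(("localhost", 8000), RedirectHandler).serve_forever()
      ```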

    • Neato@ttrpg.network · 8 months ago

      I can’t imagine there was even that much lost revenue. Cached pages are good for seeing the basic content of a page, but you can’t click through links or interact with the page in any way. Were so many people using it to avoid ads?

      • bjorney@lemmy.ca · edited 8 months ago

        I feel like 99% of its usage was to avoid ads/paywalls/geo/account restrictions on news and social media sites

      • db2@lemmy.world · 8 months ago

        but you can’t click through links or interact with the page in any way

        Most of the time that’s exactly what I want. I hate hunting through 473 pages of stupid bullshit in some janky forum to try to find the needle in that haystack.

      • Serinus@lemmy.world · 8 months ago

        It’s a feature they maintain that doesn’t bring in money. I’m sure that’s the logic.

      • NoRodent@lemmy.world · edited 8 months ago

        Were so many people using it to avoid ads?

        I doubt that as well. There are much better ways to deal with ads. I only ever used it when the content on the page didn’t exist anymore or couldn’t be accessed for whatever reason.

        But I suspected this was coming; they’ve been hiding this feature deeper and deeper over the last few years.

  • zkfcfbzr@lemmy.world · 8 months ago

    Was it even still around? I can think of a few times in the past few months when I’ve tried to find the cached link for a Google result and failed. Most recently just two days ago, when a site I wanted to use was down for maintenance.

  • Endorkend@kbin.social · 8 months ago

    Cached pages haven’t worked on many sites for several years already.

    And for certain types of sites, the feature is still 100% needed and a great tool.

  • TwilightVulpine@lemmy.world · 8 months ago

    Ironically, just yesterday I needed Google Cache because a page I needed to read was down, and I couldn’t find the option anymore.

    Are we going to have to go back to personal web crawlers to back up the information we need? I hate today’s internet.

      • Kbobabob@lemmy.world · 8 months ago

        No fucking way I’m paying a subscription to search something on the Internet. $5 for 300 searches, lol.

        • BananaOnionJuice@lemmy.dbzer0.com · 8 months ago

          I have been looking at Kagi, but their pricing is clearly designed to push people to the professional $10 package.

          100 or even 300 searches/day would be unusable for me; you quickly spend 10 searches refining a query for something specific, and when developing you do something like 5–10 searches an hour.

          A fair pricing model would be

          • $2/month for 1000 searches/day
          • $5/month for 5000 searches/day
          • $10/month for unlimited everything

        • pivot_root@lemmy.world · 8 months ago

          Paying for the Reddit API would be cheaper. That’s an impressively overpriced search engine.

        • 1984@lemmy.today · 8 months ago

          Oh shit, it’s 5 dollars? That’s like… A cup of coffee. You are right, way too much, so much money.

      • killeronthecorner@lemmy.world · 8 months ago

        Ad based search engines make almost $300 a year off their users

        What disingenuous phrasing.

        I’d be up for using a product like this, but their popcorn pricing and snark are really off-putting, so I’ll never be using this service.

  • NoRodent@lemmy.world · edited 8 months ago

    By the way, I just found out that they removed the button, but typing cache:www.example.com into Google still redirects you to the cached version (if it exists). But who knows for how long. And there’s the question of whether they’ll continue to cache new pages.
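
    For reference, the cache: operator just bounced you to Google’s web-cache host, so you could also build the URL by hand; whether it keeps resolving is anyone’s guess. A quick sketch:

    ```python
    # Build the Google cache URL for a page directly.
    # webcache.googleusercontent.com is the host the cache: operator
    # redirected to; it may stop resolving at any point.
    from urllib.parse import quote

    def google_cache_url(page_url):
        return ("https://webcache.googleusercontent.com/search?q=cache:"
                + quote(page_url, safe=""))

    print(google_cache_url("www.example.com"))
    ```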

    • Appoxo@lemmy.dbzer0.com · 8 months ago

      I hope they only kill the announced feature but keep the cache part.
      Just today I had to use it because some random RSS aggregator website had the search result I wanted but redirected me somewhere completely different…

    • _number8_@lemmy.world · 8 months ago

      they’ve broken / ignored every modifier besides site: in the last few years, god knows how long that’ll work

      • Raiderkev@lemmy.world · 8 months ago

        Quotes are fucking awful now. You have to switch the search tools to verbatim, which takes way fucking longer. Google has enshittified almost everything. I’m just waiting for them to ruin Maps.

        • Flax@feddit.uk · 8 months ago

          Remember when Google Now was intelligently selected data and not an endless scroll of paywalled news articles?

    • AnAngryAlpaca@feddit.de · 8 months ago

      My guess is that the cached page is just a byproduct of the crawler indexing the page. They need a local copy to parse text, links, etc., and to see the difference from the previous version.
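
      Plausible. A crawler doing change detection has to keep the previous copy around anyway, along the lines of this sketch (the in-memory storage is purely illustrative):

      ```python
      # Sketch: the "cached page" falls out of change detection. A crawler
      # keeps the last copy of each page so it can tell what changed on the
      # next visit. The dict storage is purely illustrative.
      import difflib

      snapshots = {}  # url -> text from the previous crawl

      def crawl(url, fetched_text):
          previous = snapshots.get(url)
          if previous == fetched_text:
              return "unchanged"          # nothing to re-index
          if previous is not None:
              # diff against the previous copy to re-index only what changed
              diff = difflib.unified_diff(previous.splitlines(),
                                          fetched_text.splitlines(), lineterm="")
              print("\n".join(diff))
          snapshots[url] = fetched_text   # this stored copy *is* the cache
          return "updated"
      ```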