Is Google signaling the end of the open web? That's one of the concerns raised by its new embrace of AI. While most of the fears about AI may be overblown, this one could be legit. But that doesn't mean we need to accept it.
These days, there is certainly a lot of hype and nonsense about artificial intelligence and the ways it can impact all kinds of industries and businesses. Last week at Google I/O, Google made it clear that it's moving forward with what it calls "AI Overviews," in which Google's own Gemini AI tech will try to generate answers at the top of search pages.
That'll be big.

"Managed Decline"
*Still, as the first day of I/O wound down, it was hard to escape the feeling that the web as we know it is entering a kind of managed decline. Over the past two and a half decades, Google extended itself into so many different parts of the web that it became synonymous with it. And now that LLMs promise to let users understand all that the web contains in real time, Google at last has what it needs to finish the job: replacing the web, in so many of the ways that matter, with itself.*
I had actually read this article the day it came out, but I didn't think too much of that paragraph until a couple of days later, at a dinner full of folks working on decentralization. Someone brought up that quote, though they paraphrased it slightly differently, claiming Casey was saying that Google was actively *putting the web into managed decline*.
Whether or not those two readings are really that different (and maybe they're not), either one should spark people to realize that this is a problem.
Our bots have sucked up everyone's sites, so screw your web, we got it all at the Evil Store.
They want to make it TV.
This is silly. The web is not in decline and Google is not at fault. Most of the web is garbage, and Google helps us find the information buried in a sea of ads and repetitively copied/reworded content. We are moving to a world where putting up a plagiarized page with tons of ads will not be profitable. The sooner that day arrives, the better.
And you know why there's all that copying/rewording, or why all recipes include three paragraphs about how the author first discovered the dish during their trip to Toscania in 1997? Because they're trying to game Google's requirements and get included in its search results.
https://www.theverge.com/c/23998379/google-search-seo-algorithm-webpage-optimization
After DuckDuckGo, Ecosia, StartPage, Qwant, and a bunch of other Bing frontends went down for a few hours today, I also believe this shows how important decentralisation is. They all refused to build their own indices and instead feed off Bing's data.
The only real effort I've seen at decentralised search has been YaCy, and it seems to have just a single maintainer. Nobody seems to care about it enough to contribute. All these new "privacy-respecting search engines" are just Bing proxies trying to fill a demand from the anti-Google market segment by doing the minimum amount of work possible. They could all be contributing to YaCy or an alternative, but this recent incident won't convince them to change course.