I’m a little confused by the LLMs and datasets here.
OpenAI and Reddit have a partnership for training data, so I would have assumed the pizza glue answer came from an OpenAI result, but Google doesn’t use OpenAI. How did Google give an answer that was clearly scraped from Reddit?
But I don’t think it’s just an issue with the dataset. It’s the false promise of these LLMs having a fucking clue what a good search result is and what isn’t. They don’t. They’re just good at generating text that sounds plausible. That’s not what searching for factually correct information is about, though.
Google has a deal with Reddit as well: https://www.reuters.com/technology/reddit-ai-content-licensing-deal-with-google-sources-say-2024-02-22/?utm_source=reddit.com