- cross-posted to:
- technology@beehaw.org
I find it pretty interesting that Kagi is rated as a terrible search engine; even ChatGPT performs better.
Anyone else wonder if Dan Luu’s stuff is ever worth the read? Generally I’m interested in what he talks about and has to say, but every article/post of his gives me serious info-dump vibes. And sure, I like deep dives and long form as much as anyone, even today, but with his content I’m always left feeling like I didn’t need to read all of it and that he just likes writing a lot. Anyone else? Note: I didn’t bother reading this one because it definitely seemed not worth it.
Yet you bothered comprehensively shitting on his paper!
I’m not shitting on it, just sharing my impression, and admittedly my prejudice, about his work, and asking if anyone has a shared or different perspective. I’m very happy with the idea that his work is good and enjoyed by many (by all means, he seems to have a healthily strong Patreon following).
At least what I see with this experiment/article is that it is overly verbose; he takes a long time to get to the point. And when he does, his methodology describes an experiment that cannot be verified. Even when something is “subjective” we can still draw conclusions from it if we set up proper non-subjective ways of evaluating the results (i.e. rubrics). The fact that he doesn’t say in detail what leads him to call a result “terrible / v. bad / bad / good” is a massive red flag in his method.
After seeing that, I no longer read the rest of it. Any conclusions drawn from a flawed methodology are inherently fallacies or hearsay.
If this is explained further on in the article and that somehow refutes what I’ve postulated, then I would have to say the article is poorly written.
All this to say… I agree with you, not worth the read.
The entire post is exact details on why he decided each rating for each query.
No it isn’t. For example, he gives Kagi and Marginalia the same score even though for Kagi you have to read as far down as the 10th result, while Marginalia has no answer at all. How is that the same score? There is no explanation. There is a lot of text, and in the end he has made some subjective choices.