This technology isn’t actually that new; it’s been around for almost a decade. What’s new is the amount of processing power they can throw at it and the scale of data collection, but you’re just buying into marketing hype. It’s classic tech-industry stuff to overpromise and underdeliver to pump up valuations and sales.
Ok, but by that same logic you could say convolutional neural networks have been around since the 1980s. It wasn’t until Geoffrey Hinton’s group put them back on the map around 2012 that anyone cared. GPT-2 is when I started paying attention to LLMs, and that’s only about five years old.
Even a decade is new in this sense: Laplace’s law of succession alone indicates there’s still roughly a 10% chance we’ll solve the problem in the next year.
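To show where that ~10% figure comes from, here’s a rough sketch (treating each of the roughly nine years so far as one failed “solve it this year” trial; that framing is my assumption, not a rigorous model):

```python
def laplace_rule(successes: int, trials: int) -> float:
    """Laplace's rule of succession: estimated probability that the
    next trial succeeds, after observing `successes` in `trials`."""
    return (successes + 1) / (trials + 2)

# ~9 years with zero successes so far:
p_next_year = laplace_rule(successes=0, trials=9)
print(round(p_next_year, 3))  # 1/11 ≈ 0.091, i.e. roughly 10%
```

The exact number depends on how you count trials (years? model generations?), but any reasonable counting lands in the same ballpark.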
Laplace’s law of succession only applies if we know an experiment can result in either success or failure. We don’t know that. That’s just adding new assumptions for your religion. For all we know, this can never result in success and it’s a dead end.
I have to hard disagree here. Laplace’s law of succession does not require that assumption. It’s easy to see why intuitively: if the true probability turns out to be 0 (or 1), then the prediction from Laplace’s law of succession converges to 0 (or 1) as more results come in.
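You can check that convergence directly. A minimal sketch: if the true probability is 0, every trial fails, and the Laplace estimate shrinks toward 0 as failures accumulate.

```python
def laplace_rule(successes: int, trials: int) -> float:
    """Laplace's rule of succession: (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

# True probability 0 means 0 successes in n trials, for every n.
# The estimate never hits 0 exactly, but it gets arbitrarily close:
for n in (1, 10, 100, 1000):
    print(n, laplace_rule(0, n))  # 1/3, 1/12, 1/102, 1/1002
```

So the rule never needs to assume success is possible; it just refuses to be infinitely certain that it isn’t.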
If the probability is 0 then it will never be 1
Therefore, there must be some probability of success.
It may help to distinguish between the “true” probability of an event and the observer’s internal (subjective) probability for that event. If the observer’s probability is 0 or 1, then you’re right: it can never change, no matter what evidence comes in. This is why your prior should never be 0 or 1 for anything.
For anything? Are you sure about that?
Because I say there’s 0 probability that a six-sided die will ever produce a 7.
A better example of this is “how sure are you that 2+2=4?” It makes sense to assign a prior probability of 1 to such mathematical certainties, because they don’t depend on our uncertain world. On the other hand, how sure are you that 8858289582116283904726618947467287383847 isn’t prime?
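(You don’t have to stay uncertain about that one for long, by the way: a standard Miller–Rabin test settles it to astronomical confidence. A sketch, not a production implementation:)

```python
import random

def is_probably_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin primality test: False means definitely composite;
    True means prime with error probability at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is composite
    return True

print(is_probably_prime(8858289582116283904726618947467287383847))
```

The point stands either way: before running the test, an honest observer’s probability for “that number is prime” is strictly between 0 and 1.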
For a die in a thought experiment, sure, it can’t show a 7. But in the physical universe, a die could indeed surprise you with a 7.
More to the point, why do you believe the probability that the hallucination problem will be solved (at least to the point where hallucinations are rare and mild enough not to matter) is literally 0? Do you think the existence of fanatical AI zealots makes it less likely?
Okay, so by your logic the probability of literally everything is 1. That’s absurd and that’s not how Laplace’s law of succession is supposed to be applied. The point I’m trying to make is that some things are literally impossible, you can’t just hand-wave that!
And I’m not saying that solving hallucinations is impossible! What I’m saying is that it could be impossible, and I’m criticizing your blind faith in progress, because you just believe the probability is literally 1. I can’t say for sure that it’s impossible. At the same time, you can’t say for sure that it’s possible. You can’t just assume the problem will inevitably be fixed; otherwise you’ve talked yourself into a cult.