Exactly. As a SW engineer, I don’t know exactly how far we are from AGI, but I’m confident that Altman and OpenAI have no idea where to even start.
That sounds like the general consensus from most SW engineers.
In fairness, he never claims that he will be the one to do these things, at least not in that tweet.
I’m not sure what the line about creating new realities means. I assume he means VR and not that AI is going to give us the ability to access hyperspace or something.
I read that and immediately thought of you working in a Star Wars hangar, fixing rebel ships
As a software engineer who works in AI, the “breakthrough” we’ve made is in proving that LLMs can perform well at scale, and that hallucinations aren’t as big a problem as initially thought. Most tech companies didn’t do what OpenAI did because hallucinations are brand-damaging, whereas OpenAI didn’t give a fuck. In the next few years, most existing AI systems will be built on LLMs, and will probably be as good as ChatGPT.
We might make more progress now that researchers and academics see the value in LLMs, but my weakly held opinion is that it’s mostly hype.
We’re nowhere near what most would call AGI, although to be blunt, I don’t think the average person on here could tell you what that looks like without contradicting AI researchers.
We are so fucked.