Built on unearned hype.
If I had 10k to leverage I would be shorting the FUCK out of this, it’s a bubble and everyone knows it
I’d say still risky. They might perpetuate the bubble for longer, which means high risk of forced covering at loss.
Market can remain irrational, etc
Even if you 1000 x your $10k I don’t think you’d have enough to short something valued at $157B, however inflated that valuation might be.
You can short something without forcing a short squeeze…
You can short something with $1. It’s just not enough to affect the value itself. You also don’t technically need any money at all, the point of a basic short is to profit off selling borrowed stock after all.
You definitely do need money. No broker is going to let you short without collateral, and you’re going to be paying interest for the duration of your short position beside any fees/commission.
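To make that concrete, here's a rough sketch of the economics of a short position. All numbers (entry price, borrow rate, holding period) are made up for illustration, not real quotes:

```python
# Rough sketch of short-selling P&L: sell borrowed shares high, buy them
# back (cover) low. You pay a borrow fee for the duration, and brokers
# require margin collateral on top (often ~150% of position value under Reg T).

def short_pnl(shares, entry_price, exit_price, borrow_rate_annual, days_held, fees=0.0):
    proceeds = shares * entry_price      # cash received selling borrowed shares
    cost_to_cover = shares * exit_price  # cash paid to buy the shares back
    borrow_cost = proceeds * borrow_rate_annual * (days_held / 365)
    return proceeds - cost_to_cover - borrow_cost - fees

# Short 100 shares at $50, cover at $40 after 90 days, 5%/yr borrow fee:
print(round(short_pnl(100, 50.0, 40.0, 0.05, 90), 2))

# Same trade, but the stock rises to $60 instead: the loss exceeds the
# 20% move because the borrow fee still accrues.
print(round(short_pnl(100, 50.0, 60.0, 0.05, 90), 2))
```

Note the asymmetry: a short's maximum gain is capped at the proceeds, but the loss is unbounded if the price keeps climbing, which is why no broker lets you do this without collateral.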
That’s fair, but let’s not pretend there aren’t brokers that just run a credit check below a certain amount and leave it at that.
Maybe if you have a super low cap, high fees, and they automatically close your position at a pretty conservative point. But that’d hardly be worth any broker’s time with that risk/reward, unless they are hosing the borrower with insane fees. Though if that’s the case, putting up collateral would be cheaper (even if you have to borrow it from somewhere).
Fair point. I, perhaps wrongly, assumed OP wanted to short them out of existence and not just profit from shorting them.
How do you short a privately-held company?
Still far too low for a company that will likely accomplish AGI in the next two years.
I guess in two years I’ll eat my words. Not really.
LLMs are unable to create AGI. It’s mathematically impossible.
Will that be before or after the metaverse arrives?
Unless there is a drastic breakthrough in computing from a hardware company, or in energy sources (likely from a nuclear company), this is not possible.
For comparison, human brains only use 25W. We’re pretty damn good.
Even then. LLMs are fundamentally unable to create AGI; we’d need another paradigm shift in the field.
DM me, I got a few bridges to sell you.
Why not just withdraw that money in banknotes, and burn it in a stove?
Hey you know as well as I do that that money belongs in the money hole. 🕳️
Money hole https://youtu.be/JnX-D4kkPOQ
Burst. Buuuurst. Buuuuuurst.
Oh good! I remember when they said they couldn’t afford to pay independent copyright owners. Now they can pay for the work they stole!
This is exactly what will happen.
/s
…said the lawyer getting ready to file a class action lawsuit.
I hope this is the case, but I can’t see the creators getting paid more than a small fraction of the value of their work even so.
I can’t wait for this current “A.I.” craze to go away. The tech is doofy, useless, wasteful, and a massive energy consumer. This is blockchain nonsense all over again, though that still hasn’t fully died yet, unfortunately.
Like blockchain there is some niche usefulness to the technology, but also like blockchain it’s being applied to a myriad of things it is not useful for.
deleted by creator
Drugs (Silk Road), scams & malware (pay 5 Bitcoin to unlock your PC), money laundering & pump-and-dumps (unregulated market), and Nvidia hype (should have bought AMD at $5).
“we ran out of useful things to do with computing at the consumer level and now we are inventing problems” - “just bill’em” gates, 1984.
Also it’s not fucking ai is it. I actually find the blatant misuse of this term incredibly annoying to be honest.
It is, machine learning, neural networks and all the other parts in LLMs and generative algorithms like midjourney etc are all fields of artificial intelligence. The AI Effect just means the goalposts for what people think of as “proper” AI are constantly moving.
This might be the case ‘in the industry’, but I would argue quite strongly that it represents a gross misuse of the word ‘intelligence’. Like a fun new definition of the word, that doesn’t mean anything close to what it usually means.
Words often have multiple meanings in different contexts. “Intelligence” is one of those words.
Another meaning of “Intelligence” is “the collection of information of military or political value.” Would you go up to CIA headquarters and try to argue with them that “the collection of information of military or political value” lacks understanding, and therefore they’re using the wrong word and should take the “I” out of their name?
AI was a computer science term before any industry adopted it.
The colloquial use of “AI” is basically the Hollywood concept of a conscious computer. Nobody knows about AI as it’s used in the computer science industry, nor does it matter in regular discourse. In that sense it’s not AI. It’s a disservice to lead laypeople to believe it’s something it’s not.
The term AI was coined in 1956 at a computer science conference and was used to refer to a broad range of topics that certainly would include machine learning and neural networks as used in large language models.
I don’t get the “it’s not really AI” point that keeps being brought up in discussions like this. Are you thinking of AGI, perhaps? That’s the sci-fi “artificial person” variety, which LLMs aren’t able to manage. But that’s just a subset of AI.
‘Intelligence’ requires understanding. The machine has no understanding, because it is not conscious. You can fiddle around with the definitions of these words until you’re blue in the face but this will be true in rain, sun, hail, puffed wheat, etc.
Did you check the link I posted? The term “Artificial Intelligence” is literally used for the sorts of topics in computer science that LLMs fall under, and has been for almost 70 years now.
You are the one who is insisting that the meaning of the words should now be changed to something else.
Yes, no one seems to raise this anymore. AI to me has always been something akin to computer sentience.
Things like ‘self-healing’ systems are being badged as AI when they’re little more than an application load balancer.
Arguably you are the one misusing the term. Even painfully mundane tasks like the A* pathfinding algorithm fall under the umbrella of artificial intelligence. It’s a big, big (like, stupidly big) field.
You are right that it’s not AGI, but very few people (outside of marketing) claim that it is.
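For anyone who hasn’t seen it, this is roughly what textbook A* looks like, and it’s been filed under “artificial intelligence” in CS curricula for decades. The grid, function name, and Manhattan heuristic here are just illustrative choices:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 2D grid: 0 = open cell, 1 = wall.
    Manhattan-distance heuristic; returns shortest path length in steps,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]  # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale entry; a cheaper route to this node was found
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None

grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
print(astar(grid, (0, 0), (2, 0)))  # routes around the wall
```

No learning, no neural nets, no “understanding” — just a priority queue and a heuristic — yet it sits squarely inside the field called AI.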
I’m going to argue quite strongly that my general, all purpose understanding of the words ‘artificial’ and ‘intelligence’ constitute the ‘correct’ definition for the term, and I don’t really care how ‘ai’ is defined ‘in industry’. It’s not intelligent, therefore it’s not artificial intelligence. You can redefine ‘intelligent’ in this context to mean whatever you like, but unless the general definition of the word changes then it doesn’t mean jack about shit.
So what is intelligence in your general, all-purpose understanding?
Are newborns intelligent? How about dogs? Ants?
You may argue that current AI is still behind an average human adult and therefore not intelligent, but academia is a bit more nuanced.
It has its uses, but it is being massively overhyped.
Having trialled Copilot and a few other AI tools in my workplace, I can confidently say it’s a minor productivity booster.
Whereas I have been finding uses for it to produce things that I simply could not have produced myself without it, making it far more than a mere “productivity boost.”
I think people are mainly seeing what they want to see.
There’s some skill to using it as a tool, just like with any other tool.
Yes, it enables you to create something like an image quickly and without any training.
apparently so far the research disagrees with the productivity claims https://www.cio.com/article/3540579/devs-gaining-little-if-anything-from-ai-coding-assistants.html
The total market cap across all cryptocurrencies is currently about 2.5 trillion dollars, which isn’t far below its all-time high of 3 trillion. If that’s something you’d say “hasn’t fully died yet” then AI’s not going to go away any time soon by that standard.
I didn’t specify cryptocurrencies. They were not the only “good” attached to blockchain hype. Besides, they are primarily money laundering schemes and also used to steal from the financially illiterate. Touting market caps doesn’t change the actual real-world use case.
This work will have lots of applications in the future. I personally stay as far away from it as I can, because I have zero need for it to write soulless birthday card messages for me, but to act like the work is doing nothing is kinda stupid.
At every stage people would say “oh, this can’t even do X,” and then it could, and they’d say “oh, it can’t do Y,” and then it could, and they’d say… do I really need to go on?
The biggest issue with it all right now, for me anyway, is that we’re trying to use it for the absolute dumbest shit imaginable, and investors are throwing tonnes of money into the grinder — money that could solve real problems we don’t need AI for — while poverty and climate change run rampant around us.
Sounds like a bubble.
God I hope they crash and burn
deleted by creator
I say we indict Sam Altman for both securities fraud and 8 billion counts of reckless endangerment. He and other AI boosters are running around shouting that AGI is just around the corner, that OpenAI is creating it, and that there is a very good chance we won’t be able to control it and that it will kill us all. Well, the way I see it, there are only two possibilities:
- He’s right. In which case, OpenAI is literally endangering all of humanity by its very operation. In that case, the logical thing to do would be for the rest of us to arrest everyone at OpenAI, shove them in a deep hole and never let them see the light of day again, and burn all their research and work to ashes. When someone says, “superintelligent AI cannot be stopped!” I say, “you sure about that? Because it’s humans that are making it. And humans aren’t bullet-proof.”
- He’s lying. This is much more likely. In that case, he is guilty of fraud. He’s making claims his company has no ability to fulfill, and he is taking in billions in investor money based on these lies.
He’s either a conman, or a man so dangerous he should literally be thrown in the darkest hole we can find for the rest of his life.
And no, I REALLY don’t buy the argument that if the tech allows it, that superintelligent AI is just some inevitable thing we can’t choose to stop. The proposed methods to create it all rely on giant data centers that consume gigawatts of energy to run. You’re not hiding that kind of infrastructure. If it turns out superintelligence really is possible, we pass a global treaty to ban it, and simply shoot anyone that attempts to create it. I’m sorry, but if you legitimately are threatening the survival of the entire species, I have zero qualms about putting you in the ground. We don’t let people build nuclear reactors in their basement. And if this tech really is that capable and that dangerous, it should be regulated as strongly as nuclear weapons. If OpenAI really is trying to build a super-AGI, they should be treated no differently than a terrorist group attempting to build their own nuclear weapon.
But anyway, I say we just indict him on both charges. Charge Sam Altman with both securities fraud and 8 billion counts of reckless endangerment. Let the courts figure out which one he is guilty of, because it’s definitely one or the other.
The valuation is based on the last buy in… I am guessing they had somebody buy in recently. Otherwise this is just straight up fake news.
Not OpenAI — now they will be ClosedAI:
… complete its planned conversion from a nonprofit (with a for-profit division) to a fully for-profit company.
I listened to The Foundering on Sam Altman at the same time as listening to The Power Broker and they weirdly synced up.
- They only have one solution for everything, believing more of that thing will solve everything (“Just one more scrape, just one more scrape of everything that’s been said or published anywhere and our next model will be perfect.”)
- They don’t care for the destruction left in their wake
- They will walk over everyone, including their own family to remain part of the conversation
- The only difference is Robert Moses was constrained to New York
You are describing creative destruction
Yeah yeah…all that destruction from checks notes looking at every meme and listening to every song on the internet.
Such destruction. Much chaos! 🥱🥱🥱🙄🙄🙄
i hope musk buys it and loses another $100 billion
Great read on the topic: OpenAI Is A Bad Business by Ed Zitron (Where’s Your Ed At)
Ed Zitron is amazing. That dude just says it like it is.
Can’t wait for the AI boom to inevitably pop and all those billions that could have been spent on…literally anything else, go down the drain.
When I copy and paste someone else’s work, I get called a plagiarist and get fired.
When OpenAI creates a robot that does it really really really fast, they make enough money to feed the planet hundreds of times over.
I don’t want to live on this planet any more.
You need to do more fancy thingy between the copy and the pasting.
Good, intellectual property is a mental illness of capitalism
Sure, but I think recognizing someone when they accomplish something of value is important regardless of the economic system in place.
And in this Capitalistic society OpenAI is doing nothing but literally capitalizing on the hard work of thousands of individuals without giving them any form of recognition
Capitalism is incompatible with meritocracy.
It rewards ruthless capture and enclosure of other people’s hard work and surplus value.
Read what Disney did to folk culture. Read what Edison did to his underlings, it is all a repeat of the “enclosures”, the thefts of our commons for private profits.
If you don’t know what the enclosures were, read this macabre story and focus on the people who attacked the fences and paid with their lives https://en.wikipedia.org/wiki/Elizabeth_Sugrue