

The episode is “My Screw Up” (S3E14) if anyone is wondering.
I might actually prefer “My Lunch” from S5 as an episode, but they are both fantastic.
Well, yes and no.
Quantum computers will likely never beat classical computers on classical algorithms, for exactly the reasons you stated: classical hardware just has too much of a head start.
But for certain problems there are quantum algorithms that are exponentially faster than the best known classical algorithms. Quantum computers will beat classical ones on those problems fairly quickly, but we are still working on building reliable QCs. Also, we currently don’t know very many quantum algorithms with that degree of speedup, so as others have said, there aren’t many use cases for QCs yet.
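To make “exponentially faster” concrete, here’s a rough back-of-the-envelope comparison for factoring an n-bit number: the best known classical algorithm (the general number field sieve) scales sub-exponentially in n, while Shor’s quantum algorithm needs only roughly n³ gates. Constants and the o(1) term are omitted, so this is an asymptotic sketch, not a hardware estimate:

```python
import math

def gnfs_ops(bits):
    """Heuristic operation count for the general number field sieve:
    exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)), with N ~ 2^bits."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

def shor_ops(bits):
    """Rough gate-count scaling for Shor's algorithm: O(bits^3)."""
    return bits ** 3

for bits in (512, 1024, 2048):
    print(bits, f"{gnfs_ops(bits):.2e}", f"{shor_ops(bits):.2e}")
```

The gap between the two columns widens dramatically as the key size grows, which is what “exponential speedup” means in practice.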
This isn’t a “comic book” universe, but the parahumans story universe (Worm and Ward) fits this pretty well.
Without spoiling too much of the story, characters all get powers in response to traumatic events. The powers they get also tend to reflect the type of trauma that occurred, so if they lost an arm they might get a healing power, or if they were trapped in a burning building they might get the ability to phase through walls and a resistance to fire. All of the powers in the setting tend to follow this approach, and stay within the rules of the setting.
What is this garbage? If I own a house/gold/collectable/toilet paper during covid/… and the value goes up, am I supposed to pay taxes?
Yes, you are supposed to pay taxes on that (well, on the house specifically). It’s called property tax.
If the value goes up, you pay more taxes the next year, if the value goes down you pay less.
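The arithmetic is just assessed value times the local rate. A minimal sketch, using a hypothetical 1.2% effective rate (actual rates vary by jurisdiction):

```python
def property_tax(assessed_value, rate=0.012):
    """Annual property tax: assessed value times the local effective rate.
    The 1.2% default here is hypothetical, not any real jurisdiction's rate."""
    return assessed_value * rate

# If the assessment rises from $300k to $330k, next year's bill rises with it.
print(property_tax(300_000))  # bill at the old assessment (~$3,600)
print(property_tax(330_000))  # bill at the new assessment (~$3,960)
```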
I’m not sure about the ownership situation of the company, but it is also independently in bankruptcy, so I think that is being dealt with later.
After a few years the orbit will degrade enough that the satellite starts to fall back to Earth. At that point it will either burn up completely on re-entry, or burn up only partially, with the rest falling to Earth.
Either way, each of these satellites will be completely gone from orbit after a few years.
ULA is already a private company. I don’t think the US government has done any of its own work to get to space since the Shuttle.
I believe that is correct.
In the book, they also took pains to point out the steps he took afterward to keep it from happening to the other airlocks: actually balancing out their usage a bit more, instead of always using the same one.
But intelligence is the capacity to solve problems. If you can solve problems quickly, you are by definition intelligent.
To solve any problems? Because when I run a computer simulation from a random initial state, that’s technically the computer solving a problem it’s never seen before, and it is trillions of times faster than me. Does that mean the computer is trillions of times more intelligent than me?
the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests)
If we built a true super-genius AI but never let it leave a small container, is it not intelligent because WE never let it manipulate its environment? And regarding the tests in the Merriam-Webster definition, I suspect it’s talking about “IQ tests”, which are known in practice to be at least partially subjective. Just as an example, it’s known that you can study for an IQ test and improve your score. How does studying for a test increase your “ability to apply knowledge”? I can think of some potential pathways, but we’re basically back to the concept not being clearly defined.
In essence, what I’m trying to say is that even though we can write down some definition for “intelligence”, it’s still not a concept that even humans have a fantastic understanding of, even for other humans. When we try to think of types of non-human intelligence, our current models for intelligence fall apart even more. Not that I think current LLMs are actually “intelligent” by however you would define the term.
If you’re mixing things up in the kitchen, you typically try to be somewhat precise with ratios.
The difference in this case is that because the actual ratio of the blend is unknown, you don’t actually know how it would crystallize. They could even change the ratio week to week based on the price of high-fructose corn syrup, so you wouldn’t even get consistency from it.
If this actually did lead to faster matrix multiplication, then essentially anything that can be done on a GPU would benefit. That definitely could include games, and physics models, along with a bunch of other applications (and yes, also AI stuff).
I’m sure the paper’s authors know all of that, but somewhere along the line the article just became “faster and better AI”.
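For a sense of what “faster matrix multiplication” means here: results in this area (going back to Strassen’s classic algorithm) find ways to multiply matrix blocks with fewer scalar multiplications than the naive method. A sketch of Strassen’s version, which does a 2×2 block product with 7 recursive multiplications instead of 8 (assuming square matrices whose size is a power of two; this is an illustration, not the algorithm from the article):

```python
import numpy as np

def strassen(A, B):
    """Multiply two square matrices (size a power of two) using Strassen's
    7-multiplication recursion instead of the naive 8 block products."""
    n = A.shape[0]
    if n == 1:
        return A * B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Reassemble the four quadrants of the product.
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])

A = np.random.rand(4, 4)
B = np.random.rand(4, 4)
print(np.allclose(strassen(A, B), A @ B))  # True
```

Shaving even one multiplication per block compounds across the recursion, which is why any workload dominated by matrix products (games, physics, and yes, AI) would benefit.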
The above post is referencing/quoting a line from the show “It’s Always Sunny in Philadelphia”, which is why people are upvoting it.
Good point! Obviously the solution here is to stop funding the science!
(/s)