Gee, it's like all modern computers already have massively parallel processing devices built in.
🚨 ⚠ 🚨 Hoax alert! 🚨 ⚠ 🚨
You can download more ram too!
The TechRadar article is terrible, the TechCrunch article is better, and the Flow website has some detail.
But overall I have to say I don’t believe them. You can’t just make threads independent if they logically have dependencies. Or just remove cache coherency latency by removing caches.
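To put it concretely, here's a toy sketch (my own example, nothing from Flow's material): each iteration below needs the previous iteration's result, so no companion chip can legally run them in parallel, recompiled or not.

```c
#include <stdio.h>

int main(void) {
    double x = 1.0;
    /* Loop-carried dependency: iteration i reads the value written
     * by iteration i-1, so the iterations form a serial chain that
     * no scheduler, hardware or software, can parallelize without
     * changing the result. */
    for (int i = 0; i < 1000000; i++) {
        x = x * 1.0000001 + 0.5;
    }
    printf("%f\n", x);
    return 0;
}
```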
Can’t have cache latency if there is no cache!
So THIS is what the communists were talking about when they told me about the benefits of transitioning to a cacheless society!
Hur Hur Hur… PPU
Startup discovers what a northbridge is
10 tricks to speed up your CPU and trim belly fat. Electrical engineers hate them! Invest now! The startup is called 'DefinitelyNotAScam'.
I don’t care. Intel promised 5 nm, 10 GHz single-core processors by this point, and I still want that out of principle.
Cybercriminals are creaming their jorts at the potential exploits this might open up.
Please, hackers wear cargo shorts and toe shoes sir
Oof. But yeah. Fair.
I want to go on record that sometimes I just wear sandals with socks.
Truly! The scum of the earth!
This change is likened to expanding a CPU from a one-lane road to a multi-lane highway
This analogy just pegged the bullshit meter so hard I almost died of eyeroll.
Apparently the percentage of people on the management side of the industry who actually understand what they're doing is now too low to filter out even bullshit like this.
You’ve got to be careful with rolling your eyes, because the parallelism of the two eyes means that the eye roll can be twice as powerful ^1
(1) If measured against the silly baseline of a single eyeroll
Why is this bullshit upvoted?
Already in the first sentence, they walk back the headline's “without recoding” to “with further optimization”.
Then comes the explanation: “a companion chip that optimizes processing tasks in real-time”.
This has already been done at the compiler level, and internally in any modern CPU, for more than a decade. It might be possible to some degree for some specific forms of code, like maybe JIT-compiled languages such as Java. But in general, for the CPU, this is bullshit, and the headline is decidedly dishonest.
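For reference, this is the sort of independence compilers and out-of-order cores already exploit automatically. Toy example of mine, not anything from the article:

```c
/* Two independent multiply-add chains: a compiler at -O2, or the
 * CPU's out-of-order engine, will happily overlap them, because
 * neither chain reads the other's results. This is decades-old,
 * bog-standard instruction-level parallelism. */
double dot_unrolled(const double *a, const double *b, int n) {
    double s0 = 0.0, s1 = 0.0; /* independent accumulators */
    int i;
    for (i = 0; i + 1 < n; i += 2) {
        s0 += a[i] * b[i];         /* chain 0 */
        s1 += a[i + 1] * b[i + 1]; /* chain 1, independent of chain 0 */
    }
    if (i < n)                     /* odd-length tail */
        s0 += a[i] * b[i];
    return s0 + s1;
}
```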
Has anyone been able to find an actual description of what this does? I clicked two layers deep and neither explains the details. It does sound like they’re doing CPU scheduling in the hardware, which is cool and makes some sense, but the descriptions are too vague to explain what the hell this is except “more parallelism goes brrrr” and it’s not clear to me why current GPUs aren’t already that.
Overclockers:
“Give me some liquid nitrogen and I’ll make that 102x.”
Meh, I just spit on it.
I highly doubt that unless they invented magic.
Added it
I meant TechRadar, but thanks
Hmm, so it sounds like they're moving the kernel scheduler down to a hardware layer? Basically just better SMP?
Processors have an execution pipeline, so a single instruction like mov takes some number of internal steps for the CPU to execute. CPU designers already have some magic that lets them execute these out of order, as well as other stuff like pre-computing what they think the next instruction will probably be.
It’s been a decade since my CPU class, so I’m butchering that explanation, but I think that’s what they’re proposing to mess with.
That’s accurate.
It’s done through multiple algorithms, but the general idea is to schedule calculations as soon as possible, while tracking data hazards to make sure the result is still equivalent to plain in-order execution.
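Quick sketch of what a data hazard looks like (my own example):

```c
/* Minimal sketch of the hazards an out-of-order scheduler respects. */
int schedule_example(int a, int b, int c, int d, int e) {
    int t = a * b;  /* long-latency multiply starts here */
    int u = t + c;  /* RAW hazard: reads t, has to wait for the multiply */
    int v = d + e;  /* no hazard: independent, so the core can run this
                     * while the multiply is still in flight */
    return u + v;   /* end result identical to strict in-order execution */
}
```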
There’s also branch prediction, which is kind of the same idea, except the CPU needs a way to check whether the prediction was actually correct, and roll back if it wasn’t.
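The classic demo of that (again my own sketch, not theirs): the branch below is cheap when the data has a pattern the predictor can learn, and expensive when it doesn't, because every miss throws away speculative work.

```c
/* Branch-prediction sketch: this 'if' is nearly free on sorted input
 * (long runs of taken / not-taken that the predictor learns) and much
 * slower on random input, where it guesses wrong about half the time
 * and the pipeline has to flush the mispredicted work. */
long sum_big_values(const int *data, int n) {
    long sum = 0;
    for (int i = 0; i < n; i++) {
        if (data[i] >= 128)   /* the branch the predictor is guessing */
            sum += data[i];
    }
    return sum;
}
```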