Interesting. Samsung making a bold move here, but one that could make sense.
I think these ARM chips are more expensive than we realize! Apple’s egregiously high upgrade pricing on MacBooks sucks, and 8GB of RAM by default on the base model sucks as well, but these chips are likely to raise the average sale price of the devices equipped with them. This has been known for some time, I feel.
I’ll cut Samsung some slack since we don’t know the unit cost of the Snapdragon chips, and they aren’t likely to sell out of these devices right away even with competitive pricing because of the state of Windows on ARM. I’m excited to see how Linux support pans out on the next generation of non-Apple ARM notebooks, though; I think this is a chance for some manufacturers to take Linux more seriously, as Linux on ARM is actually not a terrible experience.
Or it is just corporate greed. Samsung would love to position something that is just okay into a premium price tier and not have to pay Intel. Sure, they’re going to pay Qualcomm instead, but you can bet that Qualcomm is giving some great introductory prices to its early partners.
For example Apple uses HBM instead of DDR5. They also give the CPUs heaps of L1/L2/L3 cache to avoid memory access as much as possible. And some of the stuff they do with flash memory is just as expensive.
That’s the real reason Apple Silicon Macs cost so much and I’m more than willing to pay that price. But it’s also the reason those Macs are so fast.
How does Qualcomm compare? I have no idea.
Unless something changed, I believe Apple is using LPDDR5 since the M2. https://www.tomshardware.com/news/apple-introduces-m2-processor-8-core-cpu-10-core-gpu-up-to-18-more-performance
Samsung uses their competitor’s chips? Kinda weird to see
Not sure what you mean, they’ve always used Snapdragons? The S23 from 2023 uses one, and the S3 from 2012 uses them in some models, and most galaxies between those do as well.
What, how is Qualcomm competing with Samsung?
Apple uses Samsung hardware, btw.
Samsung makes Exynos chips, which are ARM. But Samsung even uses Qualcomm in their phones in other regions, so it’s not unusual
Exynos chips are subpar compared to Qualcomm’s ARM chips, or at least they were until not long ago.
They still are
To surprise of no-one :)
I know Windows does ARM to x64 translation decently, but does the chip also feature special hardware functionality to aid this, like the M chips (TSO for example)?
So this SoC benchmarks on par with AMD’s best integrated GPU? On par with the M3, but not the M3 Pro/Max. If I’m going to switch to Windows, I’m not going to buy a PC that’s less capable than an AMD integrated GPU lmao. Call me when these are on par with a 4080/4090 lol.
I won’t write them off before I’ve owned one. I imagine they could be good for things like battery life, but I’m not sure they’d be an improvement over other chips like Ryzen APUs.
Will be curious to see the advantage and disadvantages.
ARM is great on Linux, where almost everything has an ARM version, and on macOS, where Apple can simply mandate that everyone support it, but where are you going to find Windows programs compiled for ARM?
Any program written for the .net clr ought to just run out of the box. There’s also an x64 to ARM translation layer that works much like Apple’s Rosetta. It runs the binary through a translation and executes that. I have one of the Windows ARM dev units; it works relatively well except on some games, in my limited experience.
Any program written for the .net clr ought to just run out of the box.
Both of them?
There’s also an x64 to ARM translation layer that works much like Apple’s Rosetta.
Except for the performance bit.
ARM processors use a weak memory model, whereas x86 uses a strong memory model. That means x86 guarantees the actual order of writes to memory is the same as the order in which those writes execute, while ARM is allowed to re-order them.
Usually the order in which data is written to RAM doesn’t matter, and allowing writes to be re-ordered can boost performance. When it does matter, a developer can insert a so-called memory barrier, which ensures all writes before the barrier are finished before the code continues.
However, since this is not necessary on x86, where all writes are ordered, x86 code does not include these memory-barrier instructions at the spots where write order actually matters. So when translating x86 code to ARM code, you have to assume write order always matters, because you can’t tell the difference. This means inserting memory barriers after every write in the translated code, which absolutely kills performance.
Apple includes a special mode in their ARM chips, only used by Rosetta, that enables an x86-like strong memory model. This means Rosetta can translate x86 to ARM without inserting those performance-killing memory barriers. Unless Qualcomm added a similar mode (and AFAIK they did not) and Microsoft added support for it in their emulator, performance of translated x86 code is going to be nothing like that of Rosetta.
The biggest advantage Apple has is they’ve been breaking legacy compatibility every couple of years, training devs to write more portable code and setting a consumer expectation of change. I can’t imagine how the emulator will cope with 32-bit software written for the Pentium II.
The only reason Windows is still relevant is a massive volume of legacy x86 applications.
If that laptop won’t support x86 emulation, it’d actually be worse than a Linux ARM laptop.
That’s one thing macOS does well: legacy support, at least for x64.
for now…
They did a good job when moving from OS 9 to OS X. Adobe took a looong time to move to OS X.
I have been running Windows 10 and 11 on ARM for years now, and the next version, Windows Server 2025, already has an ARM preview release. Windows on ARM has had x86 emulation for a long time, and has supported x64 emulation since about the start of COVID.
Is it actually emulation? Macs don’t do that.
They convert the x86 code into native ARM code, then execute it. Recompiling the software takes a moment, and some CPU instructions don’t have a good equivalent, but for the most part it works very well.
macOS uses the term “translation” for its Rosetta layer, while Windows on ARM uses the term “emulation”. I believe the technical difference is that macOS converts x64 code to arm64 on the fly, while part of the reason for emulation on Windows is to support x86 and other architectures. Someone more knowledgeable than me may be able to better compare the two offerings.
macOS converts x86 code to ARM ahead of launching an app, and then caches the translation. It adds a tiny delay to the first time you launch an x86 app on ARM. It also does on-the-fly translation if needed, for applications that do code generation at runtime (such as scripting languages with JIT compilers).
The biggest difference is that Apple has added support for an x86-like strong memory model to their ARM chips. ARM has a weak memory model. Translating code written for a strong memory model to run on a CPU with a weak memory model absolutely kills performance (see my other comment above for details).
Windows is relevant because it’s a better product for the average user. The same goes for OSX. ARM isn’t going to change any of that. Especially with NVIDIA GPUs being broken and a pain in the ass.
Windows is not a ‘better’ product; that would be ChromeOS. Zero configuration means nothing can get broken.
The average user who started with MS Office 95 is now 50 years old. The younger average user at least knows there are alternatives to Windows.
PC gaming is a whole other can of worms. I keep hearing that Valve did some black magic and now most Steam games work on Linux with no issues.
I’ve been gaming on Linux for about two years now through Steam Proton and it’s really good. Some games don’t run because of anti-cheat; some games run even better than on Windows.
Gaming on Linux has come a long way and I always prefer to run it on Linux rather than a dedicated Windows boot, if possible.
But if you rely on VRR or DLSS and have a decent HDR display, Linux unfortunately still isn’t quite there yet. VRR/HDR is mostly unsupported system-wide currently. DLSS sometimes works, and sometimes requires a lot of debugging and ends up actually hurting performance.
If your hardware setup allows you to run your games at a decent framerate without DLSS/VRR, this likely won’t be an issue for you.
Google Docs is the only meaningful competitor to Office. No one I know wants to try Linux desktop and I think it’s hard to convince anyone to give up the convenience of Windows. Proton works but in my experience requires too much experimentation for the average user.
MS has been working on ARM for years. To think otherwise is naive.
Sure - but Apple has been “working on” ARM since 1990, when it co-founded ARM Ltd. Microsoft is definitely on the back foot here.
Oh yeah but has anyone else?
Linux has been fully working on ARM for much longer than Windows, so there’s that
That’s not what I meant. Microsoft has been working on Windows ARM, sure, but has anyone else been working on Windows ARM? As far as I know you can’t even get Firefox on ARM.
I suppose that they have a compatibility layer, but it’s nowhere near the performance of Rosetta 2.
Ah well, Firefox definitely has a Windows arm native build available on their website but yeah most applications certainly won’t
Qualcomm has a pretty fast emulator for the growing pains. Microsoft offers ARM versions of most of their software
But many open source projects could be cross-compiled; it wouldn’t take long if these things start selling.
Qualcomm has a pretty fast emulator for the growing pains.
With how much, 10% or 20% performance loss? Better buy x86 then.
Disagree. I run a MacBook M1 and enjoy it mostly because everything is compiled for ARM. The few apps running through Rosetta are slow to launch, drain the battery, and are less performant. If you were to run x64 on ARM it would just kill the point of ARM: battery life becomes just as bad as on x64, and performance is worse.
Disagree with your disagreement. I also have an M1 and was quite an early adopter (within 3 months of launch). It was really snappy compared to my Intel Air it replaced. From the get-go. Even for apps that were still x86 code.
Things definitely improved over the next 9 months, but I was and am a really happy camper.
Intel Air doesn’t count. Those were dogshit processors
Well, decent processors, just a laughably bad cooling design
I don’t know what these chips are like, but x86 software runs perfectly on my ARM Mac. And not just small apps either, I’m running full x86 Linux servers in virtual machines.
There is a measurable performance penalty but it’s not a noticeable one with any of the software I use… ultimately it just means the CPU sits on 0.2 load instead of 0.1 load and if it spikes to 100% then it’s only for a split second.
I recently bought an M1 Max and I definitely regret migrating data from my Intel MacBook. I’ve had to reinstall nearly all the apps anyway. Less compatible than I was expecting. Overall happy with it.
Samsung: We make fun of Apple until we copy them outright.
See also: removing ports, having a notch
Apple was like the third phone with a notch. That’s Essential’s claim actually.
And Motorola had true wireless earbuds earlier, etc.
Apple is about polish, not novelty, but a ton of people are obsessed with the idea of Apple as being “groundbreakers” everywhere.
Let apple take the flak for moving the market and then quietly copy because of course it’s more lucrative… classic.
This device has a notch?
Also when did Samsung attack apple for using M1/M2/M3?
meanwhile the long gone RISC hype train
Do you mean RISC V
Yeah, I was waiting for that. Did they ever have any plan to do so?
ARM is RISC (or at least a version of it).