I have been running Windows 10 and 11 on ARM for years now, and the next version, Windows Server 2025, already has an ARM preview release. Windows on ARM has had x86 emulation for a long time, and has supported x64 emulation since about the start of COVID.
Is it actually emulation? Macs don’t do that.
They convert the x86 code into native ARM code, then execute it. Recompiling the software takes a moment, and some CPU instructions don’t have a good equivalent, but for the most part it works very well.
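To make that concrete, here’s a toy sketch in C of the “translate, then execute” idea. It is not how Microsoft’s actual translator works internally (that isn’t public); the opcode enum, structs, and the two hand-picked instructions are made up for illustration. It does emit real ARM64 encodings, and the comments point at the flag-semantics problem behind the “no good equivalent” cases.

    /* Toy static binary translation: map decoded x86 ops to ARM64 machine code. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { X86_MOV_IMM, X86_ADD_REG } X86Op;            /* toy decoded opcodes */
    typedef struct { X86Op op; int dst, src; uint32_t imm; } X86Insn;

    /* Encode ARM64 "MOVZ Xd, #imm16". */
    static uint32_t a64_movz(int d, uint16_t imm) {
        return 0xD2800000u | ((uint32_t)imm << 5) | (uint32_t)d;
    }

    /* Encode ARM64 "ADD Xd, Xd, Xm" (shifted-register form). */
    static uint32_t a64_add(int d, int m) {
        return 0x8B000000u | ((uint32_t)m << 16) | ((uint32_t)d << 5) | (uint32_t)d;
    }

    /* Translate one decoded x86 instruction; returns how many ARM64 words were emitted. */
    static int translate(const X86Insn *in, uint32_t *out) {
        switch (in->op) {
        case X86_MOV_IMM:
            out[0] = a64_movz(in->dst, (uint16_t)in->imm);
            return 1;
        case X86_ADD_REG:
            /* x86 ADD also updates EFLAGS; ARM64 ADD does not set flags the same
               way, so a real translator has to emit extra instructions here to
               reproduce them. That's the "no good equivalent" case. */
            out[0] = a64_add(in->dst, in->src);
            return 1;
        }
        return 0;
    }

    int main(void) {
        X86Insn prog[] = { { X86_MOV_IMM, 0, 0, 42 }, { X86_ADD_REG, 0, 1, 0 } };
        uint32_t buf[4];
        for (size_t i = 0; i < sizeof prog / sizeof prog[0]; i++) {
            int n = translate(&prog[i], buf);
            for (int j = 0; j < n; j++)
                printf("%08x\n", buf[j]);                      /* emitted ARM64 words */
        }
        return 0;
    }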
macOS uses the term “translation” for its Rosetta layer, while Windows on ARM uses the term “emulation”. I believe the technical difference is that macOS converts x64 code to ARM64 on the fly, while part of the reason it’s called emulation on Windows is that it also supports x86 and other architectures. Someone more knowledgeable than me may be able to compare the two offerings better.
macOS converts x86 code to ARM ahead of launching an app and then caches the translation, which adds a tiny delay the first time you launch an x86 app on ARM. It also does on-the-fly translation when needed, for applications that generate code at runtime (such as scripting languages with JIT compilers).
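A rough sketch of that launch flow, with made-up function names (my own illustration, not Apple’s actual Rosetta implementation, which isn’t public):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy one-entry "translation cache", keyed by app path. */
    static char cached_app[256];

    static bool cache_lookup(const char *app) { return strcmp(cached_app, app) == 0; }
    static void cache_store(const char *app)  { strncpy(cached_app, app, sizeof cached_app - 1); }

    static void launch_x86_app(const char *app) {
        if (!cache_lookup(app)) {
            /* First launch: translate the whole x86 image up front.
               This one-time work is the tiny delay mentioned above. */
            printf("translating %s ahead of time\n", app);
            cache_store(app);
        }
        /* Later launches reuse the cached translation. Code the app generates
           at runtime (a JIT, say) can't be translated ahead of time, so those
           pages are translated on the fly when they are first executed. */
        printf("running cached native translation of %s\n", app);
    }

    int main(void) {
        launch_x86_app("/Applications/SomeIntelApp.app");  /* slow path, then cached */
        launch_x86_app("/Applications/SomeIntelApp.app");  /* cache hit, no delay */
        return 0;
    }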
The biggest difference is that Apple has added support for an x86-like strong memory model to their ARM chips. ARM has a weak memory model. Translating code written for a strong memory model to run on a CPU with a weak memory model absolutely kills performance (see my other comment above for details).
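To see what that costs, here’s a minimal C11 sketch (my own illustration, not literal translator output). Compiled x86 code contains no barriers because the hardware already keeps these stores and loads in order (TSO). A translator targeting a weakly ordered ARM core can’t know which plain accesses the program relied on for ordering, so it has to conservatively turn them into acquire/release forms (LDAR/STLR) or insert barriers, roughly like the explicit atomics below; with Apple’s per-process TSO mode it can keep emitting plain loads and stores instead.

    /* Classic message-passing example: the release/acquire pair below is what a
       translator has to emit on weakly ordered ARM to preserve x86 ordering. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static int data;                 /* "payload" written before the flag */
    static atomic_int ready;         /* flag the consumer spins on */

    static void *producer(void *arg) {
        (void)arg;
        data = 42;                                            /* plain store */
        /* On x86 a plain store already stays ordered after the one above; on
           weakly ordered ARM this must be a store-release (STLR) or carry a barrier. */
        atomic_store_explicit(&ready, 1, memory_order_release);
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        /* Likewise, this must become a load-acquire (LDAR) on weakly ordered ARM. */
        while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
            ;                                                 /* spin */
        printf("data = %d\n", data);                          /* always 42 */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

The point isn’t this toy program; it’s that a translator has to assume every load and store in the x86 binary might be one of these, which is why hardware TSO support makes such a difference.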
That’s one thing macOS does well: legacy support, at least for x64.
for now…
They did a good job when moving from OS 9 to OS X. Adobe took a looong time to move to OS X.