Microsoft has released the code to MS-DOS 4.00 on GitHub; Dave takes you on a tour of the code, builds it, and runs it on original hardware. For my book on lif...
Wonder what the reason was for so much being in raw assembly when C existed. A basic library/API would be one of the first things I’d tackle in an OS. Move on to a higher level as soon as you’re able.
Because Ryan wrote it like this 10 years ago and nobody bothered to rewrite it in C.
Back then, I’d guess most developers were relatively fluent in assembly, so if there’s only a small change to make, they’d just change the assembly and move on.
Everyone knew assembler back then. I did, and I'm no developer today.
Not everyone knew C.
Lack of trust: what was it doing behind the scenes? What if it just went and … allocated memory all by itself!!
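For context, this is what "doing it yourself" looked like under DOS: the program asks the operating system for memory explicitly through INT 21h (function 48h allocates a block measured in 16-byte paragraphs, 49h frees it), so nothing happens behind the programmer's back. A minimal sketch in NASM syntax, assuming a .COM program; the labels and sizes are purely illustrative, not from the MS-DOS sources:

        org 100h

start:  mov     bx, 1000h       ; keep our full 64 KiB segment...
        mov     ah, 4Ah         ; ...and give the rest back to DOS
        int     21h             ; (a .COM program starts out owning all free memory)

        mov     ah, 48h         ; DOS: allocate memory block
        mov     bx, 64          ; request 64 paragraphs = 1 KiB
        int     21h
        jc      fail            ; carry set -> allocation failed
        mov     es, ax          ; AX = segment of the new block

        ; ... use ES:0000..ES:03FF as a 1 KiB buffer ...

        mov     ah, 49h         ; DOS: free the block at ES
        int     21h

fail:   mov     ax, 4C00h       ; terminate, return code 0
        int     21h

The point is just that allocation is an explicit, visible system call rather than something a language runtime does for you.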
Compiler optimization wasn’t so good back then, and people believed they could write better assembly themselves, for both speed and size.
Memory was tight, and C would pull in big library chunks even if only one function was needed. If a “hello world” came out several KB in size, that added to the suspicion (even though in practice that was a fixed overhead).
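To make the size comparison concrete: the whole "hello world" below, a .COM program talking straight to the DOS API (INT 21h function 09h prints a '$'-terminated string), assembles to under 30 bytes, while a compiled C version of the era also carried the compiler's startup code and library routines. A sketch in NASM syntax, not taken from the MS-DOS sources:

        org 100h

start:  mov     dx, msg         ; DS:DX -> '$'-terminated string
        mov     ah, 09h         ; DOS: write string to standard output
        int     21h
        mov     ax, 4C00h       ; DOS: terminate with return code 0
        int     21h

msg:    db      'Hello, world!', 0Dh, 0Ah, '$'

Assembled as a flat binary (nasm -f bin hello.asm -o hello.com), there is no startup code or library at all.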
Now we know we can’t hand-write assembly that beats the C compiler.
Though my teacher in tech school a few years ago ran an entire OS where everything was written in assembler. What was it called?
Maybe KolibriOS?
It’s an amazing project: it boots from a single floppy disk into a full graphical OS with multiple tools, and does that on PCs with almost no RAM.
I sometimes use it to back up ancient PCs.
Yeah, sounds like this, thanks!
Makes me wonder why the suckless guys don’t hook in there.
Compilers were much less complex back then and didn’t do a great deal of optimisation. Hardware was slow too, so your compiled code, which wasn’t necessarily optimal either before or after compilation, often ended up half as fast as you wanted it to be.
If you wanted speed, you hand-rolled assembly.
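A typical example of that hand-rolling, sketched in NASM syntax: the 8086 string instructions move a whole word per iteration under a single REP prefix, where a non-optimizing compiler of the day would emit a load/store/compare/branch sequence for every byte of the equivalent C loop. The routine name and register convention are illustrative, not from any particular codebase:

; Copy CX bytes from DS:SI to ES:DI.
copy_block:
        cld                     ; string ops count upward
        shr     cx, 1           ; CX = number of whole words; CF = leftover byte
        rep     movsw           ; copy CX words with one instruction
        jnc     .done           ; REP MOVS leaves the flags alone
        movsb                   ; odd length: copy the final byte
.done:  ret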
C compilers (at least on personal computers) weren’t great at optimization back then, and every kilobyte mattered: the user only got 640 of them, and going beyond that required jumping through hoops (see the sketch below).
Similarly for MHz: hand optimization mattered for performance, because there was so little CPU time to go around.
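One of those hoops, on the memory side, was expanded memory (EMS). Everything past 640 KB is invisible to a real-mode program, so you ask the expanded memory manager (INT 67h) for 16 KiB logical pages and map them, a few at a time, into a 64 KiB page frame below 1 MB. A rough sketch in NASM syntax, assuming an EMS driver is already loaded (a real program would first check for the EMMXXXX0 device):

        org 100h

start:  mov     ah, 41h         ; EMS: get page frame segment
        int     67h
        or      ah, ah
        jnz     done            ; AH != 0 -> error / no EMS
        mov     [frame_seg], bx ; segment of the 64 KiB page frame

        mov     ah, 43h         ; EMS: allocate logical pages
        mov     bx, 4           ; 4 pages * 16 KiB = 64 KiB of expanded memory
        int     67h
        or      ah, ah
        jnz     done
        mov     [handle], dx    ; EMM handle for this allocation

        mov     ax, 4400h       ; EMS: map a logical page into physical page 0
        xor     bx, bx          ; logical page 0
        mov     dx, [handle]
        int     67h

        mov     es, [frame_seg] ; ES:0000 now reaches the mapped 16 KiB
        ; ... use the page, remap to walk through the rest ...

        mov     ah, 45h         ; EMS: release the handle when done
        mov     dx, [handle]
        int     67h

done:   mov     ax, 4C00h       ; back to DOS
        int     21h

frame_seg: dw 0
handle:    dw 0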
And also legacy… If something is already written in assembly and you want to add a feature, you’re not going to completely rewrite it.