I’m curious how software can be created and evolve over time. I’m afraid that at some point, we’ll realize there are issues with the software we’re using that can only be remedied by massive changes or a complete rewrite.
Are there any instances of this happening? Where something is designed with a flaw that doesn’t get realized until much later, necessitating scrapping the whole thing and starting from scratch?
GUI toolkits like Qt and Gtk. I can’t tell you how to do it better, but something is definitely wrong with the standard class hierarchy framework model these things adhere to. Someday someone will figure out a better way to write GUIs (or maybe that already exists and I’m unaware) and that new approach will take over eventually, and all the GUI toolkits will have to be scrapped or rewritten completely.
Newer toolkits all seem to be going immediate mode. Which I kind of hate as an idea personally.
Er, do you have an example? This is not a trend I was aware of.
I’ve really fallen in love with the Iced framework lately. It just clicks.
A modified version of it is what System76 is using for the new COSMIC DE
Desktop apps nowadays are mostly written in HTML with Electron anyway.
Which - in my considered opinion - makes them so much worse.
Is it because writing native UI on all current systems I’m aware of is still worse than in the times of NeXTStep with Interface Builder, Objective C, and their class libraries?
And/or is it because it allows (perceived) lower-cost “web developers” to be tasked with “native” client UI?
Probably mainly a matter of saving costs, you get a web interface and a standalone app from one codebase.
and a mobile app sometimes
and all the GUI toolkits will have to be scrapped or rewritten completely
Dillo is the only tool I know still using FLTK.
NUKE
https://www.foundry.com/products/nuke-family/nuke
Also, while a bit of a surprise, FLTK is migrating to Wayland.
Idk man, I’ve used a lot of UI toolkits, and I don’t really see anything wrong with GTK (though they do basically rewrite it from scratch every few years it seems…)
The only thing that comes to mind is the React-ish world of UI systems, where model-view-controller patterns are more obvious to use. I.e. a concept of state where the UI automatically re-renders based on the data backing it
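The idea above, a view that re-renders automatically from the data backing it, can be sketched in a few lines. The `Store` class below is a made-up name for illustration, not any real toolkit’s API:

```python
# Minimal sketch of state-driven re-rendering: the view is a pure
# function of the state, and every state change triggers a re-render.
# "Store" is a hypothetical name, not part of React, GTK, or Iced.

class Store:
    def __init__(self, state):
        self._state = dict(state)
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)
        callback(self._state)  # initial render

    def update(self, **changes):
        self._state.update(changes)
        for callback in self._subscribers:
            callback(self._state)  # re-render on every change

frames = []  # stand-in for actual drawing: one "frame" per render
store = Store({"count": 0})
store.subscribe(lambda s: frames.append(f"count = {s['count']}"))
store.update(count=1)
store.update(count=2)
# frames now holds one rendered line per state change
```

The point is that the UI code never issues an explicit "redraw this widget" call; it only mutates state, and rendering follows.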
But generally, GTK is a joy, and imo the world of HTML has long been trying to catch up to it. It’s only kinda recently that we got flexbox, and that was always how GTK layouts were. The tooling, design guidelines, and visual editors have been great for a long time
The gatekeeping community
Can I keep a gate too and join the community?
Omg nobody has mentioned FHS?!
What’s that?
Amended my post 😉
We haven’t rewritten the firewall code lately, right? checks Oh, it looks like we have. Now it’s nftables.
I learned ipfirewall, then ipchains, then iptables came along, and I was like, oh hell no, not again. At that point I found software to set up the firewall for me.
I was just thinking that iptables lasted a good 20 years. Over twice that of ipchains. Was it good enough or did it just have too much inertia?
nftables is probably a welcome improvement in any case.
UFW → nftables/iptables. Never worry about chains again
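For a taste of why nftables reads more simply than iptables chains, here’s a minimal ruleset sketch. The table/chain names and the port list are placeholder assumptions, not a recommendation:

```
# Minimal nftables ruleset sketch: one declarative block instead of
# juggling iptables -A/-I rule ordering across built-in chains.
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport { 22, 80, 443 } accept
  }
}
```

With UFW the same intent is roughly `ufw default deny incoming` followed by `ufw allow 22/tcp` and friends, which is the "never worry about chains" appeal.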
Damn, you’re old. iptables came out in 1998. That’s what I learned in (and I still don’t fully understand it).
ALSA → PulseAudio → PipeWire
About 20 xdg-open alternatives (which is, btw, just a wrapper around gnome-open, exo-open, etc.)
My session scripts after a deep dive: full rewrite, structured to be Xorg/Wayland/terminal independent, using sx instead of startx and a custom startxfce4 as a base. Seriously, startxfce4 has workarounds from the ’80s, and software rot has already affected the formatting.
Turnstile instead of elogind (which is bound to systemd releases)
mingetty, because who uses a modem nowadays?
Pulseaudio doesn’t replace ALSA. Pulseaudio replaces esd and aRts
those last two are just made up words
All words are made up
Linux could use a rewrite of all things related to audio from kernel to x / Wayland based audio apps.
About 20 xdg-open alternatives (which is, btw, just a wrapper around gnome-open, exo-open, etc.)
I use handlr-regex; is it bad? It was the only thing I found that lets me open certain links in certain web applications (like Android does); with exo-open, all links just opened in the web browser instead.
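For reference, xdg-open and most of its alternatives ultimately resolve handlers through the XDG MIME associations in `~/.config/mimeapps.list`, so per-scheme defaults can be set there directly. The `firefox.desktop` entry below is just an example:

```ini
# ~/.config/mimeapps.list — which .desktop file handles which type.
# Per-scheme handlers (x-scheme-handler/*) are how custom link routing
# is normally wired up.
[Default Applications]
x-scheme-handler/http=firefox.desktop
x-scheme-handler/https=firefox.desktop
text/html=firefox.desktop
```

Tools like handlr are essentially nicer front-ends for editing and querying this file.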
ALSA is based
Be careful what you wish for. I’ve been part of some rewrites that turned out worse than the original in every way. Not even code quality was improved.
In corporations, we call that job security.
Just rewriting the same thing in different ways for little gain except to say we did it
Funnily enough the current one is actually the one where we’ve made the biggest delta and it’s been worthwhile in every way. When I joined the oldest part of the platform was 90s .net and MSSQL. This summer we’re turning the last bits off.
Libxz
One might exist already: `lzip`. I admit I haven’t done a great deal of research, so maybe there are problems, but I’ve found that `lzip` tends to do better at compression than `xz`/`lzma` and, to paraphrase its manual, it’s designed to be a drop-in replacement for `gzip` and `bzip2`. It’s been around since at least 2009, according to the copyright messages.

That said, `xz` is going to receive a lot of scrutiny from now on, so maybe it doesn’t need replacing. Likewise, anything else that allows random binary blobs into the source repository is going to get the same sort of scrutiny. Is that data really random? Can it be generated from non-obfuscated plain-text source code instead? Etc.

Personally I quite like `zstd`; I find it has a pretty decent balance of speed to ratio at each of its levels.
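The size differences are easy to see from Python’s standard library. lzip itself has no stdlib binding, but `lzma` (the algorithm family both xz and lzip build on) stands in for it here; the sample data is arbitrary:

```python
# Rough comparison of stdlib compressors on repetitive sample data.
# Absolute numbers depend entirely on the input; this only shows the
# mechanics, not a benchmark.
import bz2
import lzma
import zlib

data = b"the quick brown fox jumps over the lazy dog\n" * 1000

sizes = {
    "zlib (gzip's algorithm)": len(zlib.compress(data, 9)),
    "bz2": len(bz2.compress(data, 9)),
    "lzma (xz's algorithm)": len(lzma.compress(data)),
}

for name, size in sizes.items():
    print(f"{name}: {size} bytes (from {len(data)})")
```

On real mixed data the ranking and speed trade-offs shift a lot, which is the whole reason zstd’s level knob is attractive.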
Not too relevant for desktop users but NFS.
No way people are actually setting it up with Kerberos Auth
100% this
We need a networked file system with real authentication and network encryption that is trivial to set up, performant, and preserves the Unix-ness of the filesystem (nothing weird like SMB), so you can just use it as you would a local filesystem.
The OpenSSH of network filesystems basically.
So sshfs or sftp?
Performance of those is atrocious.
dmesg
/jk
There are many instances like that: systemd vs. System V init, X vs. Wayland, ed vs. vim, TeX vs. LaTeX vs. LyX vs. ConTeXt, OpenOffice vs. LibreOffice.
Usually someone identifies a problem or a new way of doing things… then a lot of people adapt and some people don’t. Sometimes the new improvement is worse, sometimes it inspires a revival of the old system for the better…
It’s almost never catastrophic for anyone involved.
Some of those are not rewrites but extensions/forks
I’d say only OpenOffice/LibreOffice fits that.
Edit: maybe TeX/LaTeX/LyX too, but ConTeXt is not.
LaTeX and ConTeXt are both macros for TeX. LyX is a graphical editor which outputs LaTeX.
Yes… I’d classify context as a reboot of latex.
In reality this happens all the time. When you develop a codebase it’s based on your understanding of the problem. Over time you gain new insights into the environment in which that problem exists and you reach a point where you are bending over backwards to implement a fix when you decide to start again.
It’s tricky because if you start too early with the rewrite, you don’t have a full understanding, start too late and you don’t have enough arms and legs to satisfy the customers who are wanting bugs fixed in the current system while you are building the next one.
… or you hire a new person who knows everything and wants to rewrite it all in BASIC, or some other random language …
It’s actually a classic programmer move to start over again. I’ve read the book “Clean Code” and it talks about this a bit.
Apparently it would not be the first time that the fresh start turns into the same mess as the old codebase it’s supposed to replace. While starting over can be tempting, refactoring is in my opinion better.
If you refactor a lot, you start thinking the same way about the new code you write. So any new code you write will probably be better and you’ll be cleaning up the old code too. If you know you have to clean up the mess anyways, better do it right the first time …
However it is not hard to imagine that some programming languages simply get too old and the application has to be rewritten in a new language to ensure continuity. So I think that happens sometimes.
Yeah, this was something I recognized about myself in the first few years out of school. My brain always wanted to say “all of this is a mess, let’s just delete it all and start from scratch” as though that was some kind of bold/smart move.
But I now understand that it’s the mark of a talented engineer to see where we are as point A, where we want to be as point B, and be able to navigate from A to B before some deadline (and maybe you have points/deadlines C, D, E, etc.). The person who has that vision is who you want in charge.
Chesterton’s Fence is the relevant analogy: “you should never destroy a fence until you understand why it’s there in the first place.”
I’d counter that with monolithic, legacy apps without any testing trying to refactor can be a real pain.
I much prefer starting from scratch, while trying to avoid past mistakes and still maintaining the old app until new up is ready. Then management starts managing and new app becomes old app. Rinse and repeat.
The difference between the idiot and the expert, is the expert knows why the fences are there, and can do the rewrite without having to relearn lessons. But if you’re supporting a package you didn’t originally write, a rewrite is much harder.
Which is something I always try to explain to juniors: writing code is cool, but for your sake learn how to READ code.
Not just understanding what it does, but what was it all meant to do. Even reading your own code is a skill that needs some focus.
Side note: I hate it to my core when people copy code mindlessly. Sometimes it’s not even a bug, or a performance issue, but something utterly stupid and much harder to read. But because they didn’t understand it, and didn’t even try, they just copy-pasted it and went on. Ugh.
“you should never destroy a fence until you understand why it’s there in the first place.”
I like that; really makes me think about my time in building-games.
I don’t know if this even makes sense, but damn, if only Bluetooth audio could get to a point of “it just works”.
Bluetooth in general is just a mess and it’s sad that there’s no cross-platform sdk written in C for using it.
What’s your latest disfavor?
Mine is the prioritization of devices. If someone turns on the flat-share BT box while I’m listening to death metal over my headphones, suddenly everyone except me is listening to death metal.
Just being… crappy?
Not connecting automatically. Bad quality. Some glitchy artifacts. It gets horrible. The only workaround I’ve found is stupid, but I run

apt reinstall --purge bluez gnome-bluetooth

and it works fine. So annoying, but I have to do this almost every day.

Reinstalling should change nothing. If it’s getting corrupted, check your drive and RAM.
I don’t know why this works, but if I’m having issues, I do this and it fixes all of them across the board. Even just restarting the service is not as effective; that sometimes works, sometimes doesn’t.
I’m confident it’s not a drive or RAM issue. It’s a Bluetooth/audio issue. But I also can’t explain why it is so consistent.
That really sounds like shitty firmware at one end or the other
Have you checked the logs?
Not to mention BlueZ aggressively connects to devices. It would be nice if my laptop in the other room didn’t interrupt my phone’s connection to my earbuds.
Then again, we also have wired for a reason. Hate it all you want, but it works and is predictable.
It’s been a while (few years actually) since I even tried, but bluetooth headsets just won’t play nicely. You either get the audio quality from a bottom of the barrel or somewhat decent quality without microphone. And the different protocol/whatever isn’t selected automatically, headset randomly disconnects and nothing really works like it does with my cellphone/windows-machines.
YMMV, but that’s been my experience with my headsets. I’ve understood that there’s some proprietary stuff going on with audio codecs, but it’s just so frustrating.
It does for me. What issue are you having?
Linux does this all the time.
ALSA -> Pulse -> Pipewire
Xorg -> Wayland
GNOME 2 -> GNOME 3
Every window manager, compositor, and DE
GIMP 2 -> GIMP 3
SysV init -> systemd
OpenSSL -> BoringSSL
Twenty different kinds of package manager
Many shifts in popular software
BoringSSL is not a drop-in replacement for openssl though:
BoringSSL is a fork of OpenSSL that is designed to meet Google’s needs.
Although BoringSSL is an open source project, it is not intended for general use, as OpenSSL is. We don’t recommend that third parties depend upon it. Doing so is likely to be frustrating because there are no guarantees of API or ABI stability.
Starting anything from scratch is a huge risk these days. At best you’ll have something like the python 2 -> 3 rewrite (leaving scraps of legacy code all over the place), at worst you’ll have something like gnome/kde (where the community schisms rather than adopting a new standard). I would say that most of the time, there are only two ways to get a new standard to reach mass adoption.
- Retrofit everything. Extend old APIs where possible. Build your new layer on top of HTTPS, or JavaScript, or ASCII, or something else that already has widespread adoption. Make a clear upgrade path for old users, but maintain compatibility for as long as possible.
- Buy 99% of the market and declare yourself king (cough cough, Chromium).
Python 3 wasn’t a rewrite, it just broke compatibility with Python 2.
In a good way. Using a non-verified bytes type for strings was such a giant source of bugs. Text is complicated and pretending it isn’t won’t get you far.
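The str/bytes split is easy to demonstrate: in Python 3, text and raw bytes are distinct types, and mixing them is a loud error instead of silent mojibake:

```python
# Python 3 separates text (str) from raw bytes; mixing them fails fast
# instead of producing garbled output, which is the point being made.
raw = "café".encode("utf-8")       # bytes: b'caf\xc3\xa9'
assert isinstance(raw, bytes)
assert raw.decode("utf-8") == "café"

mixing_fails = False
try:
    "café" + raw                   # str + bytes
except TypeError:
    mixing_fails = True
assert mixing_fails
```

In Python 2, the equivalent concatenation would often "work" by implicitly decoding as ASCII and then blow up, or corrupt data, only when non-ASCII input arrived.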