Today, the Wayland enthusiasts like to talk about how they are modernizing the Linux graphics stack. But Linux is a Unix, and in Unix, everything is meant to be a file. So any Wayland evangelists out there, tell us: where in the file system can I find the files describing a window on the screen under the Wayland protocol? What file holds the coordinates of the window, its place in the Z-order, its colour depth, its contents?
Dennis Ritchie and Ken Thompson […] ignored what the industry was doing, went back to their original ideas, and kept working on refining them. The result is the next step in the development of Unix
At the end there’s a little jab towards Wayland:
As far as I’m aware nobody has even considered extending the file metaphor to the graphics stack, and it sounds a bit ridiculous to me.
It also reminds me of this talk, which suggests that trying to express everything as a file might not be the best idea…
Plan 9, more or less, does its graphics through filesystems.
Plan 9 is clearly what the article is talking about. Odd that they don’t name it.
It’s nonsense. The author arbitrarily decides on some expression of the windowing model in terms of files. OK, cool. Every author of a system that uses files decides how to represent their data: how many files to use, whether to use sockets, what data flows through each, and what format that data takes. Why not go to the authors of Btrfs and argue about why /dev/btrfs-control is formatted the way it is, or why it’s a single file instead of five? It’s an arbitrary decision. When not used for storing data, files in POSIX-like OSes are a type of IPC mechanism. How many channels that IPC needs, and what data flows over them, is an arbitrary decision by the authors on one or both sides of that IPC. The OS provides the IPC mechanism; the software that uses it builds some abstraction on top, which doesn’t have to conform to any lower-level OS model. Could we model Postgres tables and rows as files in a directory structure? Sure. There are pros and cons to that model; it might not be great for terabyte-scale DB performance.
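To be fair, the “files as IPC” point is easy to demonstrate. A minimal Python sketch (paths made up), where a FIFO carries data between two ends without anything ever being stored on disk:

```python
import os
import tempfile
import threading

# A FIFO is just a path in the filesystem, but reading and writing it
# is IPC, not storage: the kernel moves bytes between processes and
# nothing is persisted to disk.
fifo = os.path.join(tempfile.mkdtemp(), "chan")
os.mkfifo(fifo)

def producer():
    # Opening for write blocks until a reader shows up on the other end.
    with open(fifo, "w") as f:
        f.write("hello from the other side")

t = threading.Thread(target=producer)
t.start()
with open(fifo) as f:       # opening for read unblocks the writer
    msg = f.read()
t.join()
print(msg)
```

What flows over that channel, and in what format, is entirely up to the two programs; the kernel only provides the pipe.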
$ echo ffdd66 > /dev/display/3/349/1045
permission denied: /dev/display/3/349/1045
I have a 144Hz display. I’m sure my system would love every frame hitting the filesystem layer.
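The numbers back the sarcasm up. Rough arithmetic, assuming a 1080p panel at 32 bits per pixel:

```python
# Back-of-envelope: raw pixel traffic for an assumed 1920x1080 display.
width, height = 1920, 1080      # assumed resolution
bytes_per_pixel = 4             # 32-bit XRGB
refresh_hz = 144

frame_bytes = width * height * bytes_per_pixel
throughput = frame_bytes * refresh_hz

print(frame_bytes)        # 8294400 bytes, roughly 7.9 MiB per frame
print(throughput / 1e9)   # roughly 1.19 GB of pixel data per second
```

And that is just the raw bytes; a one-file-per-pixel scheme like the echo above would add a path lookup and a syscall per pixel on top.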
/dev/fb0 is the framebuffer. So yes, you can feed data into the filesystem and you’ll see it on your display.
For Unixoids, being a file does not mean that this data is stored on a hard disk, but that all data, processes and hardware are accessible with the same toolkit. /dev/fb0, for instance, is part of the file-like interface of your graphics card.
/dev/fb is mostly one thing: deprecated. It’s also not really an interface to your graphics card; it’s a legacy path, kindly still provided, for pushing fullscreen pixels to your monitor in an unaccelerated fashion, for things that haven’t moved to KMS/DRM (which at this point is pretty much just the console emulation on the TTYs). It’s not an interface to the graphics card because it doesn’t expose any of the capabilities a graphics card has (shaders and the like). In fact, for just pushing pixels you can leave the graphics card out of your computer entirely if you connect your screen by other means (think SPI, which is common in embedded devices; you can find many examples of such drivers in the kernel source at drivers/gpu/drm/tiny).
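For anyone who wants to try the fbdev trick anyway, here’s a Python sketch. The resolution is assumed rather than queried (a real program would use the FBIOGET_VSCREENINFO ioctl to get the actual geometry), and it falls back to a scratch file when there’s no writable /dev/fb0:

```python
import os

# Write to the legacy fbdev node if it's there and writable, otherwise
# to a scratch file so the sketch runs anywhere.
target = "/dev/fb0" if os.access("/dev/fb0", os.W_OK) else "/tmp/fb0-demo"

# Assumed geometry -- a real program queries this with the
# FBIOGET_VSCREENINFO ioctl; writing past the real framebuffer's end
# would fail.
width, height, bpp = 1024, 768, 4

# 0xffdd66 as little-endian BGRX bytes: one solid colour, whole frame.
frame = bytes([0x66, 0xDD, 0xFF, 0x00]) * (width * height)

with open(target, "wb") as fb:
    fb.write(frame)

print(len(frame))  # one full frame of raw pixels
```

Unaccelerated, fullscreen, and deprecated, exactly as described above, but it is the file metaphor doing real pixel work.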
Wow that’s hilariously idiotic.
This was a great talk (the video you linked, not the article). I wonder what Linus would say about C being the wrong thing today.
The thing is, Wayland does go through a UNIX socket on the file system. Just like X, actually. Well, not entirely: sending everything through a FIFO would be terrible, so there’s tons of shared memory in both X and Wayland. But in theory you could totally write an X/Wayland client that works completely through the socket.
X11 forwarding and Waypipe do exactly that, although Waypipe tends to default to remote GPU acceleration. And yes, that remote socket can be found in /proc/<pid> if you care to look.
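The “socket on the file system” part is literal: the compositor’s listening socket is a filesystem entry (typically $XDG_RUNTIME_DIR/wayland-0). A generic Python sketch with a made-up path, since the mechanics don’t need a compositor:

```python
import os
import socket
import stat
import tempfile
import threading

# Stand-in for $XDG_RUNTIME_DIR/wayland-0 -- any path works.
path = os.path.join(tempfile.mkdtemp(), "demo-socket")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)        # this creates an actual entry in the filesystem
server.listen(1)

# The entry is a real file of type "socket", visible to ls, stat, find.
print(stat.S_ISSOCK(os.stat(path).st_mode))

def client():
    c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    c.connect(path)      # connecting is just opening that path
    c.sendall(b"hello compositor")
    c.close()

t = threading.Thread(target=client)
t.start()
conn, _ = server.accept()
received = conn.recv(64).decode()
print(received)
conn.close()
t.join()
server.close()
```

The protocol spoken over that socket is the compositor’s business; the filesystem just gives the two sides a rendezvous point.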
The way X11 came together made it a very weird protocol. (Seriously, the concept of window managers isn’t some kind of revolutionary idea; it’s an addition to an older protocol, X10, that demanded you use the command line to place and resize windows, maybe with a few shortcuts to switch windows for you, like some kind of prehistoric i3.) It was designed to run one single application on your local computer, with everything else running on a bunch of different computers connected through the network. It has been patched to hell and back to support GPUs and whatnot, but the base protocol was never designed for the way we use computers today (read: running applications on the machine in front of you).
Unix isn’t some kind of holy grail of computing, and neither is X: it took 11 versions of the protocol to become usable, designed by a team of people all working for different megacompanies. What worked for the PDP-11 doesn’t necessarily work once you add SSDs and WiFi. I’m honestly baffled X stayed around for so long, but I suppose there wasn’t really anything else out there when Linux gained popularity on the desktop.