Every community I care about is dead

  • 4 Posts
  • 35 Comments
Joined 1 year ago
Cake day: June 12th, 2023





  • Everyone here is fully missing the point. This is the banner image for !linux@programming.dev (not where we are right now, for the record), and it weighs 7.7MB as a normal JPEG; served as WebP it’s 3.8MB. OP is correct that this is very stupid and wasteful for a web content image. It’s a triple-monitor 1440p wallpaper used verbatim, and it should instead be compressed down to something bandwidth-friendly. I was able to get it to 1.4MB at JPEG quality 80, and after swapping it out in dev tools and A/B testing, I can’t tell the difference. This should be brought to the attention of a mod on that community so it can stop sucking up people’s data for no reason.
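
    For anyone who wants to reproduce the comparison, here is a minimal sketch of the same re-encode using Pillow in Python (filenames are placeholders, and quality 80 is just the setting that worked for me):

    ```python
    from PIL import Image  # pip install Pillow
    import os

    src = "banner_original.jpg"  # placeholder for the 7.7MB banner
    dst = "banner_q80.jpg"

    img = Image.open(src)
    # Re-encode at quality 80; optimize/progressive usually shave off a bit more.
    img.save(dst, "JPEG", quality=80, optimize=True, progressive=True)

    print(f"{os.path.getsize(src) / 1e6:.1f} MB -> {os.path.getsize(dst) / 1e6:.1f} MB")
    ```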


  • JXL is the best image codec we have so far and it’s not even close. I did a breakdown on some of its benefits here. JXL can losslessly convert PNG, JPG, and GIF into itself, and can losslessly send them back the other way too. The main downside is that Google has been blocking its adoption by keeping support out of Chromium in favor of pushing AVIF, which created a chicken-and-egg problem: no one wants to use it until everyone else does. If you want to be an early adopter, feel free to use JXL - just know that third-party software support is still maturing.

    Something you might find interesting is that the original JPEG is such a badass format that they’ve taken a lot of the findings from JXL and used them to build a new JPEG encoder named jpegli. Oddly, jpegli-based JPEGs can’t yet be losslessly compressed into JXL files, per this issue - hopefully that will be fixed at some point.
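
    If you want to try the lossless JPEG round-trip yourself, it looks roughly like this (a sketch assuming the libjxl command-line tools are installed; filenames are placeholders):

    ```python
    import subprocess

    # cjxl transcodes an existing JPEG losslessly by default (it repacks the JPEG
    # data rather than re-encoding the pixels), and djxl can reconstruct the
    # original JPEG byte-for-byte from the .jxl file.
    subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)
    subprocess.run(["djxl", "photo.jxl", "photo_restored.jpg"], check=True)
    ```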








  • I’m not trans but I am gay so you should value my opinion at 50%.

    how do you choose a distro? Do you just know, or do you have to try them all?

    Pick a reputable one, use it for a long time, figure out what you like and don’t like about it, and see if any other distros offer alternatives. Most distros offer 95% of the same thing; the remaining 5% usually comes down to the out-of-box experience, software availability, and how stable or bleeding-edge that software is. I always recommend Linux Mint to get started with, since it’s Debian-based (wide software compatibility, stable updates, and the most typical/“normal” kind of Linux distro without any gimmicks) and has a good reputation. You can almost always customize any distro to look and feel like any other distro - they’re more like “preconfigured installs” than closed-off, unique ecosystems.

    Is there some place where you can try distros on for size without the trouble and risk of migrating multiple times?

    Try installing and running distros in a VM, e.g. VirtualBox (I don’t know what the best option on Windows is). A VM acts like an emulated computer: you get the full experience of what an install will be like and how it will look and feel, without dedicating any real hardware to it.

    How do I know if Linux is right for me? How do I know Windows is wrong? If I loathe my user experience with Windows, is that the fault of Windows or just me? If Linux starts feeling comfortable, how do I know it’s because I’ve made the right choice and it’s not just inertia setting in? Does that even matter?

    It depends on your values, but a lot of people use Linux simply because it’s open source and community-driven, whereas Microsoft wants nothing more than to track you and show you as many ads as you’ll tolerate. You can customize literally every part of Linux, and I really like that because I’m a control freak: if I don’t like the way something works, I can change it. On Windows you get what you get, and every year you get fewer tunables.

    I’m at least good with Windows, but I lack the intuition of the average Linux user. Could I really master Linux the way I have Windows, or would my awkward personality relegate me to being a permanent tourist?

    You’ll get comfortable quickly if you use a newbie-friendly distro. Linux is different from Windows in a lot of ways, but not always in a good or bad way - just different. My guess is that you’ll actually become much better at Linux than Windows, because Windows tries its hardest to make the computer seem like “magic” and keep you from understanding what’s going on, whereas Linux lets you open and modify anything you want and even gives you the documentation on how to do it. Nothing in Linux will ever tell you “no” (so be careful!).

    Is my hardware too old to start tinkering with OSs?

    Linux runs on fuckin anything. Windows is like “mmmm your hardware is 4 years old sorry you can’t run Windows 11!” whereas Linux is like “does it have a CPU?”

    I know your choice of OS should take priority over your programs, as long as those programs aren’t vital, but I have a full Steam library and don’t look forward to losing any old friends. Can I partition my drive? Is that worth the trouble, switching from OS to OS depending on circumstances? I hear some distros these days can run some windows programs, and that you don’t have to leave your old programs behind the way you used to, but can I count on that trend continuing?

    IMO partitioning drives and dual-booting can make things complicated for a new user, but if you aren’t sure whether you want to stay, you might want to do it anyway. Games run very well on Linux in general, with notable games that don’t work listed here, and specific games listed here (Gold/Platinum is good). Linux (regardless of distro) is very good at running Windows programs through a compatibility layer named “Wine”, but there are notable exceptions. Generally you should try to run very few Windows programs that aren’t games; you’ll have the best experience by finding open source alternatives to common programs.

    Will losing touch with the Windows environment make it more difficult for me to succeed in a Windows-dominated career?

    That depends on how extensively your career revolves around Windows. IMO Windows and Linux are more similar than different, and if you’re just being forced to use Windows for some normal workflows, you’re not going to feel any culture shock. If your career revolves around help desk or something, you might lose touch with Windows-specific troubleshooting tricks.

    All that said, I think you’ll find Linux easier to use than you think. Linux itself has very few actual flaws at the moment, and most of the friction is because some popular programs don’t have Linux versions. Make a list of all the programs you use, see if they have Linux versions, and look for alternatives if they don’t. Also make a list of all the games you want to play and check ProtonDB to see how compatible they are.



  • Mirrored vdevs allow growth by adding a pair at a time, yes. Healing works with mirrors because each of the two disks in a mirror is supposed to hold the same data as the other. When a read or scrub happens, if there are any checksum failures, ZFS will replace the failed block on Disk1 with Disk2’s copy of that block.

    Many ZFS’ers swear by mirrored vdevs because they give you the best performance, they’re more flexible, and resilvering a failed mirror disk is an order of magnitude faster than resilvering a failed RAIDZ - leaving a smaller window for a second disk failure. The big downside is that they eat 50% of your disk capacity. I personally run mirrored vdevs because it’s more flexible for a small home NAS, and I make up for some of the disk inefficiency by buying any-size disks on sale and throwing them in whenever I see a good price.
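
    For reference, growing a mirrored pool a pair at a time looks roughly like this (a sketch; “tank” and the device names are placeholders, and the commands need root):

    ```python
    import subprocess

    def zpool(*args: str) -> None:
        # Thin wrapper around the zpool CLI; needs root and real block devices.
        subprocess.run(["zpool", *args], check=True)

    # Start with a single mirrored pair...
    zpool("create", "tank", "mirror", "/dev/sda", "/dev/sdb")
    # ...and grow later by adding another mirrored pair as a second vdev.
    zpool("add", "tank", "mirror", "/dev/sdc", "/dev/sdd")
    # A scrub reads every block and repairs any checksum failures from the healthy copy.
    zpool("scrub", "tank")
    ```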


  • The main problem with self-healing is that ZFS needs access to two copies of the data, usually provided by having 2+ disks. When you expose an mdadm device, ZFS will only perceive one disk and one copy of the data, so it won’t store a second copy anywhere. Underneath, mdadm will be storing the two copies, so any healing would need to be handled by mdadm directly instead. ZFS normally auto-heals when it reads data and when it scrubs, but in this setup mdadm would need to start the healing process through whatever mechanisms it has (probably just scrubbing?).
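
    For what it’s worth, mdadm’s own scrubbing is kicked off through sysfs, so the “healing” in this setup would look something like this (a sketch; “md0” is a placeholder and this needs root):

    ```python
    from pathlib import Path

    # Placeholder array name ("md0"); needs root.
    md = Path("/sys/block/md0/md")

    # "check" scans every member and counts mismatched blocks;
    # "repair" additionally rewrites mismatches from the good copy.
    (md / "sync_action").write_text("check")

    # After the scan completes, this holds the number of mismatches found.
    print((md / "mismatch_cnt").read_text().strip())
    ```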


  • ZFS can grow if it has extra space on the disk. The obvious answer is that you should really be using RAIDZ2 instead if you’re going with ZFS, but I assume you don’t like the inflexibility of RAIDZ resizing. RAIDZ expansion has been merged into OpenZFS, but it will probably take a year or so to actually land in the next release. RAIDZ2 could still be an option if you aren’t planning on growing before it lands. I don’t have much experience with mdadm, but my guess is that with mdadm+ZFS, features like self-healing won’t work because ZFS isn’t aware of the RAID at a low level. I would expect it to be slightly janky in a lot of ways compared to RAIDZ, and if you still want to try it you may become the foremost expert on the combination.


  • ZFS without redundancy is “not great” only in the sense that redundancy is ideal in every scenario; it’s still a modern filesystem with a lot of good features, just like BTRFS. The main problem is that it can detect data corruption but not heal it automatically. Transparent compression, snapshotting, data checksums, copy-on-write (power-loss resiliency), and reflinking are modern features of both ZFS and BTRFS. BTRFS additionally offers offline deduplication, meaning you can deduplicate any data block that exists twice in your pool without incurring the massive resource cost that ZFS deduplication requires. ZFS is the more mature of the two, and I would use it if you’ve already got ZFS tooling set up on your machine.

    Note that the TrueNAS forums spread a lot of FUD about ZFS, but ZFS without redundancy is OK - take anything alarmist from there with a grain of salt. BTRFS and ZFS both store 2 copies of all metadata by default, so metadata bitrot will be auto-healed at the filesystem level when it’s read or scrubbed.

    Edit: As for write amplification, just use ashift=12 and don’t worry too much about it.
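
    As a concrete example, creating a single-disk (no-redundancy) pool with those settings looks roughly like this (a sketch; “tank” and the device path are placeholders, and it needs root):

    ```python
    import subprocess

    # -o ashift=12 aligns allocations to 4K physical sectors (the write-amplification knob);
    # -O compression=zstd enables transparent ZSTD compression on the root dataset.
    subprocess.run(
        ["zpool", "create", "-o", "ashift=12", "-O", "compression=zstd",
         "tank", "/dev/disk/by-id/your-disk-here"],
        check=True,
    )

    # Without redundancy a scrub can still detect corrupt data blocks via checksums,
    # and metadata (stored twice by default) can still self-heal.
    subprocess.run(["zpool", "scrub", "tank"], check=True)
    ```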


  • ZFS doesn’t eat your SSD endurance. If anything it’s the best option, since you can enable ZSTD compression for smaller reads/writes, and reads will often come from the RAM-based ARC cache instead of your SSDs. ZFS is also practically allergic to rewriting data that already exists in the pool, so once something is written it should never cost a write again - especially on OpenZFS 2.2 or above, which has reflinking.

    My guess is you were reading about SLOG devices, which do need heavier endurance as they replicate every write coming into your HDD array (every synchronous write, anyway). SLOG devices are only useful in HDD pools, and even then they’re not a must-have.

    IMO just throw in whatever is cheapest or has your desired performance. Modern SSD write endurance is way better than it used to be and even if you somehow use it all up after a decade, the money you save by buying a cheaper one will pay for the replacement.

    I would also recommend using ZFS or BTRFS on the data drive, even without redundancy. These filesystems store checksums of all data, so you know whether anything has bitrotted when you scrub it. XFS/Ext4/etc. store your data but have no idea whether it’s still good or not.
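
    If it helps, enabling compression on an existing dataset and checking how much it actually saves is just this (a sketch; “tank/data” is a placeholder dataset and it needs root):

    ```python
    import subprocess

    # Enable ZSTD compression on a dataset (only affects data written afterwards).
    subprocess.run(["zfs", "set", "compression=zstd", "tank/data"], check=True)

    # compressratio reports how much the stored data has been shrunk, e.g. "1.35x".
    subprocess.run(
        ["zfs", "get", "-H", "-o", "value", "compressratio", "tank/data"],
        check=True,
    )
    ```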



  • This is only a problem because lemmy.world has become one of the centralized hubs of Lemmy, which means that jettisoning it has a larger impact. The failing of lemmy.world is a reminder that we should be intentionally spreading out to smaller instances, so that a bad admin/instance can be cut off without losing much value. Additionally, because lemmy.world/lemmy.ml/etc. have such a grip on the core of Lemmy, they’re emboldened to make bad changes without fearing consequences.