

It has a green lock icon with the word “Private” next to it so it’s fine bro.
(_____(_____________(#)~~~~~~
arch-meson is a small wrapper script for meson:
$ cat /usr/bin/arch-meson
#!/bin/bash -ex
# Highly opinionated wrapper for Arch Linux packaging
exec meson setup \
    --prefix /usr \
    --libexecdir lib \
    --sbindir bin \
    --buildtype plain \
    --auto-features enabled \
    --wrap-mode nodownload \
    -D b_pie=true \
    -D python.bytecompile=1 \
    "$@"
This reads like it was written by some LLM.
Enable journaling only if needed:
tune2fs -O has_journal /dev/sdX
Don’t ever disable journaling if you value your data.
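If you’re not sure whether a filesystem even has a journal, you can check before touching anything (device name is just an example):
tune2fs -l /dev/sdX | grep -i journal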
Disk Scheduler Optimization
Change the I/O scheduler for SSDs:
echo noop > /sys/block/sda/queue/scheduler
For HDDs:
echo cfq > /sys/block/sda/queue/scheduler
Neither of these schedulers exists anymore unless you’re running a really ancient kernel. The “modern” equivalents are none and bfq. Also this doesn’t even touch on the many tunables that bfq brings.
Also changing them like they suggest isn’t permanent. You’re supposed to set them via udev rules or some init script.
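For reference, a udev rule along these lines is roughly how you make it stick (the file name and match patterns are just an example, adjust them for your devices):
# /etc/udev/rules.d/60-ioschedulers.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
ACTION=="add|change", KERNEL=="sd[a-z]|nvme[0-9]n[0-9]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"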
SSD Optimization
Enable TRIM:
fstrim -v /
Optimize mount settings:
mount -o discard,defaults /dev/sdX /mnt
None of this changes any settings like they imply.
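If you actually want TRIM to happen regularly, you either enable the systemd timer (shipped with util-linux on most distros) or put discard into /etc/fstab. A rough sketch (the fstab line is an example, use your own UUID and options):
systemctl enable --now fstrim.timer
# or continuous TRIM via /etc/fstab:
# UUID=xxxx-xxxx  /  ext4  defaults,discard  0  1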
Optimized PostgreSQL shared_buffers and work_mem.
Switched to SSDs, improving query times by 60%.
No shit. Who would’ve thought that throwing more/better hardware at stuff would make things faster.
EDIT: More bullshit that I noticed:
Use ulimit to prevent resource exhaustion:
ulimit -n 100000
Again this doesn’t permanently change the maximum number of open files. This only raises the limit for the user who runs that command. What you’re actually supposed to do is edit /etc/security/limits.conf and then relog the affected user(s) (or reboot) to apply the new limits.
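The entries in limits.conf look something like this (the username and values are made up, pick whatever fits your workload):
someuser  soft  nofile  100000
someuser  hard  nofile  100000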
Use compressed swap with zswap or zram:
modprobe zram echo 1 > /sys/block/zram0/reset
This doesn’t even make any sense.
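For comparison, an actual zram swap setup looks more like this (the size and algorithm are just examples; zramctl ships with util-linux):
modprobe zram
zramctl /dev/zram0 --algorithm zstd --size 4G
mkswap /dev/zram0
swapon --priority 100 /dev/zram0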
Imagine defending this guy. I will never understand people who like influencers.
I ran into the same issue a few weeks ago. In my case I didn’t need real-time updates but I still needed to bulk insert data, which Postgres is terrible at (especially when dealing with tens of millions of rows). I just ended up using MariaDB (since that was my first exposure to SQL and I don’t remember having issues with it) and turns out it can handle bulk inserts a lot better without slowing down much. I wish PostgreSQL was better.
That’s literally what I’m saying; it’s fine as long as there isn’t any unwritten data in the cache when the machine crashes or suddenly loses power. RAID controllers have a battery-backed write cache for this reason, because traditional RAID5/6 has the same issue.
How’s the performance compared to other filesystems? In the last benchmark I saw, it performed pretty poorly compared to btrfs.
I had a drive where data would get silently corrupted after some time no matter what filesystem was on it. The machine’s RAM tested fine. Turned out the write cache on the drive was bad! I was able to “fix” it by disabling the cache via hdparm until I was able to replace that drive.
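For reference, the drive write cache can be toggled with hdparm’s -W flag (device name is an example); note that the setting may not survive a power cycle, so it might need to be reapplied at boot:
hdparm -W 0 /dev/sdX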
BTRFS RAID5/6 is fine as long as you don’t run into a scenario where your machine crashes while there’s still unwritten data in the cache. Also, write performance sucks and scrubbing takes an eternity.
I agree. There is literally 0 reason to buy anything from Apple when there are much better and much cheaper options that are already well supported by GNU/Linux. I will never understand people who will go out of their way to waste money on the next big thing from Apple only to get Linux on it.
It’s not necessary, but a good thing to have if something goes wrong and you want to debug/monitor something. It’s really up to you and your needs.
Gallium Nine also tends to be buggy if used with 32-bit software in particular. All the 32-bit games I’ve tried have problems with it. They usually work fine for the first 30-60 minutes and after that the framerate becomes unstable to the point where the game becomes unplayable. It happens consistently with Gallium Nine but not at all with DXVK.
US Imperials about to seethe
Kepler cards work “OK” with nouveau. What sucks is that reclocking has to be done manually, video decoding/encoding requires firmware blobs and OpenGL support tends to be meh. Overall it’s an unstable experience. I have a stack of Kepler based cards that would still be usable if Linux/mesa had a decent driver.
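For anyone curious, manual reclocking on nouveau goes through debugfs as root; the card index and pstate id depend on your setup, so check the file first:
cat /sys/kernel/debug/dri/0/pstate
echo 0f > /sys/kernel/debug/dri/0/pstate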
Or the buggy Bloom effect in Cities Skylines, Stellaris and Surviving Mars that would cause flicker and a weird black screen. Pretty sure they never bothered to fix that.
Firefox does sandbox everything, but vulnerabilities exist and sometimes go unnoticed for a while before they’re discovered and patched. If a malicious script does manage to escape the sandbox, it will be able to do literally anything to the system since it has root privileges: it would have full access to any device in /dev, it could create, modify and delete udev or iptables rules, and it could mess with the BIOS since the kernel exposes EFI variables. If the mainboard has re-writable flash chips for the firmware, it could write malicious code to them too, since they may show up in /dev. If any of this makes you uneasy then you probably should stop running stuff as root in general, except for when you really need to.
Also in general you don’t want to run any graphical applications on a Server unless there is a very specific reason for it because it takes up extra resources and therefore makes the machine use more power overall. This is especially bad when the machine in question has no hardware acceleration and renders everything in software. Remote desktop also adds CPU/GPU load and takes up a good bit of I/O and network bandwidth which is not ideal for a NAS server.
From what I understand, it’s basically a “thin client” type of thing where the client loads the kernel from local storage up to a certain point and then boots into a rootfs that lives somewhere else, on a remote server.
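The kernel command line for that kind of setup looks roughly like this (server address and export path are made up):
root=/dev/nfs nfsroot=192.168.1.10:/srv/netboot/client ip=dhcp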
I have the same experience, but sometimes it was even worse: sometimes the AI would confidently recommend doing things that might lead to breakage. Personally I recommend against using AI to learn Linux. It’s just not worth it and will only give new users a false impression of how things work on Linux. People are much better off reading documentation (actual documentation, not SEO slop on random websites) or asking for help in forums.