
  • 0 Posts
  • 20 Comments
Joined 3 years ago
Cake day: April 11th, 2022

  • AI? Look, I helped a friend fix a new install. It wasn’t Linux’s fault; it was a setting in the BIOS that needed to be changed. But the AI had them trying all sorts of unrelated things and was never going to help. Take it with a grain of salt.

    I’ve had the same experience, but sometimes it was even worse: sometimes the AI would confidently recommend doing things that might lead to breakage. Personally, I recommend against using AI to learn Linux. It’s just not worth it and will only give new users a false impression of how things work on Linux. People are much better off reading documentation (actual documentation, not SEO slop on random websites) or asking for help in forums.

  • This reads like it was written by some LLM.

    Enable journaling only if needed:
    tune2fs -O has_journal /dev/sdX

    Don’t ever disable journaling if you value your data.
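    If you’re not sure whether a filesystem already has a journal, you can check before touching anything (/dev/sdX being a placeholder, as above):
    tune2fs -l /dev/sdX | grep has_journal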

    Disk Scheduler Optimization
    Change the I/O scheduler for SSDs:
    echo noop > /sys/block/sda/queue/scheduler
    For HDDs:
    echo cfq > /sys/block/sda/queue/scheduler

    Neither of these schedulers exists anymore unless you’re running a really ancient kernel. The “modern” equivalents are none and bfq. Also, this doesn’t even touch on the many tunables that bfq brings.

    Also, changing them the way they suggest isn’t permanent; the setting is gone after a reboot. You’re supposed to set them via udev rules or some init script.
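    A udev rule along these lines makes it persistent (the file name is arbitrary; rotational and scheduler are the standard sysfs queue attributes):
    # /etc/udev/rules.d/60-ioscheduler.rules
    # non-rotational (SSD/NVMe): none
    ACTION=="add|change", KERNEL=="sd[a-z]|nvme[0-9]n[0-9]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
    # rotational (HDD): bfq
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"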

    SSD Optimization
    Enable TRIM:
    fstrim -v /
    Optimize mount settings:
    mount -o discard,defaults /dev/sdX /mnt

    None of this changes any settings like they imply. fstrim is a one-shot operation, and that mount command only applies until the next reboot; persistent TRIM means either the discard option in /etc/fstab or a periodic fstrim timer.
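    On systemd distros the usual approach is the timer that ships with util-linux:
    systemctl enable --now fstrim.timer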

    Optimized PostgreSQL shared_buffers and work_mem.
    Switched to SSDs, improving query times by 60%.

    No shit. Who would’ve thought that throwing more/better hardware at stuff would make things faster.
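    For context, that tuning is just a couple of lines in postgresql.conf; the values below are purely illustrative and depend on available RAM and workload:
    # postgresql.conf (example values, not recommendations)
    shared_buffers = 4GB   # commonly sized around 25% of system RAM
    work_mem = 64MB        # limit per sort/hash operation, not per connection total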

    EDIT: More bullshit that I noticed:

    Use ulimit to prevent resource exhaustion:
    ulimit -n 100000

    Again, this doesn’t permanently change the maximum number of open files. It only raises the limit for the shell/user that runs the command. What you’re actually supposed to do is edit /etc/security/limits.conf and then relog the affected user(s) (or reboot) to apply the new limits.
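    The limits.conf version looks like this (the username is a placeholder):
    # /etc/security/limits.conf
    someuser  soft  nofile  100000
    someuser  hard  nofile  100000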

    Use compressed swap with zswap or zram:
    modprobe zram echo 1 > /sys/block/zram0/reset

    This doesn’t even make any sense. Two commands are mashed onto one line, and writing 1 to reset just re-initializes the device instead of setting anything up.
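    For reference, a minimal working zram swap setup looks more like this (size and compression algorithm are just examples; most distros ship zram-generator or zram-tools to handle it for you):
    modprobe zram
    echo zstd > /sys/block/zram0/comp_algorithm
    echo 4G > /sys/block/zram0/disksize
    mkswap /dev/zram0
    swapon /dev/zram0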

  • Firefox does sandbox everything, but vulnerabilities exist and sometimes go unnoticed for a while before they’re discovered and patched. If a malicious script does manage to escape the sandbox, it will be able to do literally anything to the system, since it has root privileges. It would have full access to every device in /dev; it could create, modify, and delete udev or iptables rules; it could mess with firmware settings, since the kernel exposes EFI variables; and if the mainboard has re-writable flash chips for the firmware, it could write malicious code to them, since they may show up in /dev, etc. If any of this makes you uneasy, then you should probably stop running stuff as root in general, except for when you really need to.

    Also, in general you don’t want to run any graphical applications on a server unless there is a very specific reason to, because they take up extra resources and therefore make the machine use more power overall. This is especially bad when the machine in question has no hardware acceleration and renders everything in software. Remote desktop also adds CPU/GPU load and takes up a good bit of I/O and network bandwidth, which is not ideal for a NAS server.