I thought I’d make this thread for all of you out there who have questions but are afraid to ask them. This is your chance!

I’ll try my best to answer any questions here, but I hope others in the community will contribute too!

  • Kuvwert@lemm.ee · 7 months ago

    I installed Debian today. I’m terrified to do anything. Is there a single button backup/restore I can depend on when I ultimately fuck this up?

    • wolf@lemmy.zip · 7 months ago

      Another perspective: your question implies you want to try things out with Debian. If that assumption is correct, I would highly recommend you just create a virtual machine with qemu/libvirt and learn within that environment/try things out there before doing stuff ‘on the metal’.

      Of course backups are always a good idea, and once you get your feet wet you might want to learn about ‘Infrastructure as Code’. Have fun!

      • Kuvwert@lemm.ee · 7 months ago

        That’s a fantastic suggestion, and I’ve already been doing exactly this :) But I’ve done it just enough to know that I’m really, really good at breaking stuff, and I don’t want to wait to fully transition from Windows. Hence the need for full system backups.

    • baseless_discourse@mander.xyz · 7 months ago

      Install everything from the store and you should be fine. If a tutorial seems too complicated, it is probably not worth following. Set your search engine’s time filter to the past year and see if there are better tutorials.

      You might also want to consider atomic distros; they are much harder to mess up, and much easier to restore.

      • Kuvwert@lemm.ee · 7 months ago

        No I’m doing it to learn self hosting, I’m doing the hard stuff on purpose

        • baseless_discourse@mander.xyz · 7 months ago

          Oh! In that case, may I suggest Yacht with Docker containers? https://yacht.sh/

          Everything on my home server is installed directly on the server; keeping it all up to date is pretty annoying, and permission control is completely non-existent.

          Since you want to do things the hard way, I believe this can also be a good opportunity to do things in the “better” way (at least IMO).

          • Kuvwert@lemm.ee · 7 months ago

            Ah, now that does look promising. I had settled on Portainer, but this Yacht program looks very noob-friendly! I’ll install it today and check it out! Cheers!

    • makingStuffForFun@lemmy.ml · 7 months ago

      I ran Linux in a VM and destroyed it about… 5 times. It allowed me to really get in and try everything. Once I ran a command that removed everything, and I remember watching icons disappear as the destruction unfolded in front of me. It was kind of fun.

      I have everything backed up and synced so it’s all fine. Just lots of reinstalling Thunderbird and Firefox, re-logging into Firefox sync, etc.

      Once I stopped destroying everything I did a proper install and haven’t looked back.

      This will be my 7th year on Linux now. And I have to say, it feels good to be free.

      • Julian@lemm.ee · 7 months ago

        These have both saved my ass on numerous occasions. Btrfs especially is pretty amazing.

    • bloodfart@lemmy.ml · 7 months ago

      You want a disk imager like Clonezilla or something. If you’re not ready for that, just show hidden files and copy your /home/your_username directory to a USB drive or something. That’s where all your files live.

  • hardaysknight@lemmy.world · 7 months ago

    I bought a cheap Intel i226-V NIC to use 2.5GbE in Unraid, and it tries to auto-configure to 100Mbps. I realize now that the Intel 2.5GbE NICs have issues, so is there anything I could do to get it to play nice, or does anyone know of a solid low-profile 2.5GbE NIC I could use without breaking the bank?

  • noughtnaut@lemmy.world · 7 months ago

    How the hell do I set up my NAS (Synology) and laptop so that I have certain shares mapped when I’m on my home network - AND NOT freeze up the entire machine when I’m not???

    For years I’ve been un/commenting a couple of lines in my fstab but it’s just not okay to do it that way.

  • cosmicrookie@lemmy.world · 7 months ago

    In the terminal, why can’t I paste a command that I have copied to the clipboard with the regular Ctrl+V shortcut? I have to actually use the mouse and right-click to then select paste.

    (Using Mint cinnamon)

    • Allero@lemmy.today · 7 months ago

      It’s due to old-school terminal conventions: Ctrl+C was taken for interrupting programs long before it meant copy. Add Shift to the shortcut combinations, such as Ctrl+Shift+V to paste.

    • ArcaneSlime@lemmy.dbzer0.com · 7 months ago

      Try Ctrl+Shift+V; IIRC in the terminal Ctrl+V is used as some other shortcut (and probably has been since before it was standard for “paste”, I’d bet).

      Also, Linux uses two clipboards, IIRC: the Ctrl+C/V one and the right-click copy/paste one are two distinct clipboards.

    • r0ertel@lemmy.world · 7 months ago

      Old timer here! As many others replying to you indicate, Ctrl+C means SIGINT (interrupt the running program). Many have offered Ctrl+Shift+C, but back in my day we used Shift+Insert (paste) and Ctrl+Insert (copy). They still work today, but Linux has 2 clipboard buffers and Shift+Insert works against the primary selection.

      As an aside, on Wayland, you can use wl-paste and wl-copy in your commands, so git clone "$(wl-paste)" will clone whatever repo you copied to your clipboard. I use this one all the time

    • Captain Aggravated@sh.itjust.works · 7 months ago

      In Terminal land, Ctrl+C has meant Cancel longer than it’s meant copy. Shift + Insert does what you think Ctrl+V will do.

      Also, there’s a separate thing that exists in most window managers called the Primary buffer, which is a separate thing from the clipboard. Try this: Highlight some text in one window, then open a text editor and middle click in it. Ta da! Reminder: This has absolutely nothing to do with the clipboard, if you have Ctrl+X or Ctrl+C’d something, this won’t overwrite that.
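
      If you want to see both buffers side by side, xsel can print each one (assuming xsel is installed; xclip works too):

      xsel -o -p   # print the middle-click (primary selection) buffer
      xsel -o -b   # print the Ctrl+C / right-click-copy clipboard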

    • wewbull@feddit.uk · 7 months ago

      …because that would make Ctrl+C Cut/Copy and that would be really bad. It would kill whatever was running.

      So, it becomes Ctrl+Shift+C and paste got moved in the same way for consistency.

    • baseless_discourse@mander.xyz · 7 months ago

      In most terminals you can actually override this behavior by changing the keyboard shortcuts. Blackbox even has a simple toggle that enables Ctrl+C/V copy-paste.

      GNOME Console is the only terminal that doesn’t allow you to change this.

  • Jake [he/him]@lemmy.ml · 7 months ago

    Any word on the next generation of matrix math acceleration hardware? Is anything currently getting integrated into the kernel? Where are the gource branches looking interesting for hardware pulls and merges?

  • Deconceptualist@lemm.ee · 7 months ago

    I’m running EndeavourOS (KDE Plasma) and ran into a weird issue with my graphics. It’s like windows sometimes flicker and fight with each other, some fullscreen videos won’t play and just lock to a gray screen instead (e.g. in Steam, though YouTube is oddly fine), and most 3D games are super choppy and unplayable.

    I’m not asking how to fix this, I just want to know how I start troubleshooting! I haven’t done anything special with my system, and I think the issue started after a normal pacman update. My GPU is a GeForce GTX 1060.

    Any suggestions to get started? I don’t even know if the issue is Nvidia drivers, X, window manager, KDE, etc.

    • AnIndefiniteArticle@sh.itjust.works · 7 months ago

      Try switching to different versions of your graphics driver and/or kernel. Nvidia cards get really finicky about the version matchups, especially as they age. Try different combinations of the versions that are available via pacman, and maybe it’ll work. You may need to start keeping an eye on updates to your kernel and graphics driver to see if a new update fixes your issue. Welcome to life with an nvidia card. I bought an nvidia card once in 2013. By 2016 I had to start playing this game on upgrades. At one point, the graphics driver was causing kernel panics until I downgraded both and waited a few months. Very happy with AMD.

      • Deconceptualist@lemm.ee · 7 months ago

        Thanks, I’ll try that. I figured an update would fix it by now (it’s been a few weeks) but maybe I do need to roll back.

        And yes my other machine has an AMD card. This will be my last one from Nvidia since I’ve fully switched to Linux.

    • BananaTrifleViolin@lemmy.world · 7 months ago

      Start by checking which windowing system you’re using, as it’s a fundamental part of problem solving. It’s a little confusing how to do this; the top answer in this Stack Exchange thread works well.
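
      In most setups it boils down to one command:

      echo $XDG_SESSION_TYPE   # prints "x11" or "wayland"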

      If you’re running the latest KDE then you’ve almost certainly been moved to Wayland, and that will be the source of your problems. Wayland and Nvidia drivers don’t work well together, and KDE has defaulted to Wayland in the latest release. I have had very similar issues to yours with the move to Wayland and have not been able to fix them - they’re too fundamental and depend on updates to Wayland and/or the Nvidia drivers.

      I know you don’t want a solution but there isn’t one at the moment, so you’d be wasting your time. The solution is to log out, then on the log in screen select Plasma (X11) as your session and log in again.

      Personally I have had to abandon KDE, as I get a different set of problems in X11. I’m on openSUSE Tumbleweed so have little choice in rolling back to the previously functioning version of KDE - I’m using Cinnamon instead and contemplating switching to a different Linux distro, probably openSUSE Leap, in favour of stability over cutting edge.

      Meanwhile I have the latest KDE running on another device with AMD GPU without issue.

      In terms of when it’ll be fixed, there is a change being made to Wayland which will affect how it and the Nvidia drivers interact (something called explicit sync). It has just been merged into Wayland, so it should presumably appear downstream in rolling distributions in the coming months. There have been articles suggesting this is going to fix most problems, but personally I think that’s a little brave - fingers crossed.

    • Pizzasgood@kbin.social · 7 months ago

      Look in /var/log/Xorg.0.log for Xorg errors.

      Check if OpenGL is okay by running glxinfo (from the package mesa-utils) and checking in the first few lines for “direct rendering: Yes”.

      Check if Vulkan is okay by running vulkaninfo (from the package vulkan-tools) and seeing… if it throws errors at you, I guess. There are probably some specific things you could look for but I’m not familiar enough with Vulkan yet.

      You could sudo dmesg and read through looking for problems, but there might be a lot of noise to sift through. I’d start by piping it through grep -i nvidia to look for driver-specific stuff.

      Might be worth running nvidia-settings and poking around to see if anything seems amiss. Not sure what you’d actually be looking for, but yeah.

      Sometimes switching from linux and nvidia to linux-lts and nvidia-lts can help if the problem is in the kernel or driver. Remember to switch both of these at the same time, since drivers need to match the kernel.
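
      On an Arch-based system like EndeavourOS, that swap is roughly the following (a sketch; confirm the package names against your repos first):

      sudo pacman -S linux-lts nvidia-lts
      # then reboot and pick the LTS entry in your boot menu
      # (you may need to regenerate your bootloader config first)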

      You could also try switching from the nvidia drivers to nouveau. Might offer temporary relief and help narrow down where the problem is, at the expense of probably worse performance in heavy games. Ought to be fine for 2D gaming and general desktopping.

      Trying a different window manager is always an option. Don’t know how much hassle that is when you use a full DE; I’ve always been the “just grab individual lightweight pieces and slap 'em together” sort so I don’t have any real experience with KDE. But yeah. Find out what the right way to change WM is for your system, then try swapping over to Openbox or something minimal like that and see what happens.

      Related to WM/DE, it could be an issue with the compositor maybe. Look up whatever KDE’s compositor is and see if you can turn it off and run a different one?

      • Deconceptualist@lemm.ee · 7 months ago

        This looks super helpful, thanks!

        I’m a little nervous about swapping entirely over to nouveau for testing (well, more so about switching back), but I’m sure I can find a guide.

  • DosDude👾@retrolemmy.com · 7 months ago

    Is there a way to remove having to enter my password for everything?

    Wake computer from Screensaver? Password.
    Install something? Password.
    Updates (the biggest one: updates should, in my opinion, just work without a password, because staying up to date is important for security)? Password.

    I understand sudo needs a password, but all the other stuff I just want off. The frequency is ridiculous. I don’t ever leave my house with my computer, and I don’t want to enter a password for my wife every time she wants to use it.

    • onlinepersona@programming.dev · 7 months ago

      These are all valid reasons to request a password 🤔

      • Wake computer from Screensaver? Password.

      Check your screen saver settings. Dunno which desktop environment you’re using. KDE should allow you to not enter a password for this.

      • Install something? Password.
      • Updates (biggest one. Updates should in my opinion just work without, because being up to date is important for security reasons)? Password.

      Installing stuff runs sudo in the background, hence the password prompt. Updates = installing stuff. Look up “passwordless sudo”. At this point, when do you even want a password to be shown? If you don’t need a password, get rid of it entirely.

      • Stillhart@lemm.ee · 7 months ago

        At this point, when do you even want a password to be shown? If you don’t need a password, get rid of it entirely.

        Do you still do this by just pressing enter when you change your password? (i.e. entering no password as your password)

    • shadowintheday2@lemmy.world · 7 months ago

      You can configure this behavior for CLI, and by proxy could run GUI programs that require elevation through the CLI:

      https://wiki.archlinux.org/title/Sudo#Using_visudo

      Defaults passwd_timeout=0
      (avoids long-running processes/updates timing out while waiting for the sudo password)

      Defaults timestamp_type=global
      (makes the typed password and its expiry valid for ALL terminals, so you don’t need to retype sudo’s password for everything you open afterwards)

      Defaults timestamp_timeout=10
      (change to however many minutes you wish)

      The last one may be the difference between having to type the password every 5 minutes versus 1-2 times a day. Make sure you take the security implications into account.

    • lemmyreader@lemmy.ml · 7 months ago

      I understand sudo needs a password

      You can configure sudo to not need a password for certain commands. Unfortunately the syntax and documentation for that is not easily readable. Doas, which can be installed and used alongside sudo, is easier.
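
      For example, a complete /etc/doas.conf can be as short as this (assuming opendoas and a user in the wheel group):

      permit persist :wheel
      # or, to skip the password entirely:
      permit nopass :wheel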

      For software updates you can go for unattended-upgrades, though if you turn off your computer while it is upgrading software you may have to fix the broken pieces.

      • DosDude👾@retrolemmy.com · 7 months ago

        I’ve tried unattended-upgrades once. And I couldn’t get it to work back then. It might be more user friendly now. Or it could just be me.

        • lemmyreader@lemmy.ml · 7 months ago

          It’s not really user friendly, at least not as I know it, but it’s useful for servers and for desktop computers that stay on for a long time. It’s a matter of enabling or disabling it with sudo dpkg-reconfigure unattended-upgrades, granted that you have the unattended-upgrades package installed. I’m not sure when the background updates will start, though according to the Debian wiki the time for this can be configured.
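
          For reference, on Debian that dpkg-reconfigure step essentially just writes these two lines to /etc/apt/apt.conf.d/20auto-upgrades:

          APT::Periodic::Update-Package-Lists "1";
          APT::Periodic::Unattended-Upgrade "1";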

          But with Ubuntu, a desktop user should be able to configure software updates to be done automatically via a GUI. https://help.ubuntu.com/community/AutomaticSecurityUpdates#Using_GNOME_Update_Manager

    • dysprosium@lemmy.dbzer0.com · 7 months ago

      Asking the real question here. I hope there is a set-it-once solution per application, but I doubt it. I hope you don’t get the usual answer that it’s “absolutely necessary” for security.

    • Bitrot@lemmy.sdf.org · 7 months ago

      The things you listed can be customized.

      Disable screen lock and it stops locking. This is a setting in gnome, probably in KDE, maybe in others.

      Polkit can allow installing and updating through PackageKit (like GNOME Software) without the password. I think this is the default in Fedora, at least for the user marked as administrator. openSUSE actually has a GUI for changing some of these privileges in the Security and Hardening settings.

    • teawrecks@sopuli.xyz · 7 months ago

      For wake from screensaver/sleep, this should be configurable. Your window manager is locking your session, so you probably just need to turn that option off.

      For installations and updates, I suspect you’re used to Windows-style UAC where it just asks you Yes or No for admin access in a modal overlay. As I understand it, this is easier said than done on linux due to an insistence on never running GUI applications as admin, which makes sense given how responsibilities are divided and the security and technical challenges involved. I will say, I agree 100% that this is a serious area that’s lacking for linux, but I also (think I) understand why no one has implemented something similar to UAC. I’ll try to give the shortest version I can:

      All programs (on both Windows and Linux) are run as a user. It’s always possible for any program to have a bug in it that gives another program to opportunity to exploit the bug to hijack that program, and start executing arbitrary, malicious code as that user. For this reason, the philosophical stance on all OSes is, if it’s gonna happen, let’s not give them admin access to the whole machine if we can avoid it, so let’s try to run as much as possible as an unprivileged user.

      On linux, the kernel-level processes and admin (root-level) account are fundamentally detached from running anything graphical. This means that it’s very hard to securely, and generically, pop up a window with just a Yes or No box to grant admin-level permissions. You can’t trust the window manager - it’s also unprivileged - and even if you could, it might be designed in a supremely insecure way that allows just any app with a window to see and interact with any other app’s windows (Xorg), so it’s not safe to just pop up a simple Yes/No box, because any other unprivileged application could request root permissions and then click Yes itself before you even see it. Polkit is possible because even if another app can press OK, you still need to enter the password (though it’s not clear to me how you keep other unprivileged apps from seeing the keystrokes typed into the polkit prompt).

      On windows, since the admin/kernel level stuff is so tightly tied to the specific GUI that a user will be using, it can overlay its own GUI on top of all the other windows, and securely pop in to just say, “hey, this app wants to run as admin, is that cool?” and no other app running in user mode even knows it’s happening, not even their own window manager which is also running unprivileged. The default setting of UAC is to just prompt Yes/No, but if you crank it to max security you get something like linux (prompt for the password every time), and if you crank it to lowest security you get something closer to what others are commenting (disable the prompt, run things as root, and cross your fingers that nothing sneaks in).

      I do think that this is a big deal when it comes to the adoption of linux over windows, so I would like to see someone come up with a kernel module or whatever is needed to make it happen. If someone who knows linux better than me can correct me where I’m wrong, I’d love to learn more, but that is how I understand it currently.

    • rollingflower@lemmy.kde.social · 7 months ago

      Passwords are meant to protect privileged operations from being run casually as a normal user. This comes from very traditional multi-user systems, where users should not touch the system.

      If the actions that require authentication are supported by polkit (KDE shows the action ID when expanding the message), you can add a rules file in /etc/polkit-1/rules.d/

      For example, a rules file can be as simple as the sketch below.
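
      A minimal sketch of such a rule (the action ID and group here are assumptions - match them to the ID KDE shows you):

      // /etc/polkit-1/rules.d/49-nopasswd-packagekit.rules (file name is arbitrary)
      polkit.addRule(function(action, subject) {
          if (action.id == "org.freedesktop.packagekit.package-install" &&
              subject.isInGroup("wheel")) {
              return polkit.Result.YES;
          }
      });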

    • Nibodhika@lemmy.world · 7 months ago

      I understand sudo needs a password, but all the other stuff I just want off.

      Sudo doesn’t need a password; in fact I have it configured not to ask on the computers that don’t leave the house. To do this, edit the /etc/sudoers file (ideally with visudo, or add a file inside /etc/sudoers.d/) and add a line like:

      nibodhika ALL=(ALL:ALL) NOPASSWD:ALL
      

      You probably already have a similar one, either for your user or for a certain group (usually wheel), just need to add the NOPASSWD part.

      As for the other parts: you can configure the computer to not lock the screen (just turn it off), and for updates it depends on the distro/DE, but having passwordless sudo allows you to update via the terminal without a password (although it should be possible to configure the GUI to work passwordless too).

    • ogeist@lemmy.world · 7 months ago

      For the memes:

      sudo rm -rf /*

      This deletes everything and is the most popular Linux meme.

      The same “expected” functionality:

      sudo rm -rf /bin/*

      This deletes the main binaries. You can kinda recover from this, but I have never done it.

    • SmashFaster@kbin.social · 7 months ago

      There is no direct equivalent; system32 is just a collection of libraries, EXEs, and configs.

      Some of what others have said is accurate, but to explain a bit further:

      Longer explanation:

      system32 is just some folder name the MS engineers came up with back in the day.

      Linux on the other hand has many distros, many different contributors, and generally just encourages a … better … separation for types of files, imho

      The Linux filesystem is well defined, if you are inclined to research more about it. Understanding the core principles will make understanding virtually everything else about “linux” easier, imho.

      https://tldp.org/LDP/intro-linux/html/sect_03_01.html

      tl;dr; “On a UNIX system, everything is a file; if something is not a file, it is a process.”

      The basics:

      • /bin - base level executables, ls, mv, things like that
      • /sbin - super-level-only (root) executables, parted, reboot, etc
      • /lib - Somewhat self-explanatory, holds libraries, lots of things put their libs here, including linux kernel modules, /lib/modules/*, similar to system32’s function of holding critical libraries
      • /etc - Configuration lives here, generally speaking, /etc/<application name> can point you in the right direction, typically requires super-user (root) to edit
      • /usr - “User installed” software, which can be a murky definition in today’s world, but lots of stuff ends up here for installed software, manuals, icon files, executables

      Bonus:

      • /opt - A special location, generally third-party, bundled-style software likes to use this, Java for instance, but historically some admins use it as the “company location”, meaning internally developed software would live there.
      • /srv - Largely subjective, but myself and others I know use it for partitions that are outside the primary disk, for instance we use /srv/db for database volumes, /srv/www for web-data volumes, /srv/Media for large-file storage, etc, etc

      For completeness:

      • /home - You’ll find your user directories here. Personally, this is the directory I back up; I don’t carry much more with me on most systems.
      • /var - “Variable data”, basically meaning any data that will likely grow over time, eg: /var/log
    • Julian@lemm.ee · 7 months ago

      /bin, since that will include the basic programs (bash, ls, cp, etc.).

    • Captain Aggravated@sh.itjust.works · 7 months ago

      As in, the directory in which much of the operating system’s executable binaries are contained?

      They’ll be spread between /bin and /sbin, which might be symlinks to /usr/bin and /usr/sbin. Bonus points: /boot.

    • Bitrot@lemmy.sdf.org · 7 months ago

      Don’t think there is.

      system32 holds files that are in various places in Linux, because Windows often puts libraries with binaries and Linux shares them.

      The bash in /bin depends on libraries in /lib for example.
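
      You can see that with ldd, which lists the shared libraries a binary pulls in (exact paths and versions vary by system; addresses trimmed here):

      $ ldd /bin/bash
      	libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (…)
      	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (…)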

      • KISSmyOSFeddit@lemmy.world · 7 months ago

        A weird catch-all folder for “most important Windows system stuff”. It’s not 32bit, just named like that in typical Windows fashion for backwards compatibility.

    • Bitrot@lemmy.sdf.org · 7 months ago

      Because Linux and the programs themselves expect specific files to be placed in specific places, rather than a bunch of files in a single program directory like you have in Windows or (hidden) macOS.

      If you compile programs yourself you can choose to put things in different places. Some software is also built to be more self contained, like the Linux binaries of Firefox.

      • krash@lemmy.ml · 7 months ago

        Actually, Windows puts 95% of its files in a single directory, and sometimes you get a surprise DLL in your \system[32] folder.

    • shadowintheday2@lemmy.world · 7 months ago

      You install program A; it needs and installs libpotato. Later you install program B that depends on libfries, and libfries depends on libpotato - but since you already have libpotato installed, only program B and libfries are installed. The intelligence behind this is called a package manager.

      On Windows, when you install something, it usually installs itself as a standalone thing and complains/breaks when dependencies are not met - e.g. having to install Visual C++ 2005-202x for games, the JRE for Java programs, etc.

      Instead of making you install everything you need to run something complex, the package manager does this for you and keeps track of where the files are.

      And each package manager/distribution has an idea of where certain files should be stored.
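
      With apt, for instance, the scenario above would look something like this (using the made-up package names from this example):

      $ sudo apt install program-b
      The following additional packages will be installed:
        libfries libpotato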

    • penquin@lemm.ee · 7 months ago

      I wish every single app installed in the same directory. Would make life so much easier.

        • Ramin Honary@lemmy.ml · 7 months ago

          They do! /bin has the executables, and /usr/share has everything else.

          Apps and executables are similar but separate things. An app is a concept used in GUI desktop environments: a user-friendly front end to one or more executables in /usr/bin, presented by the desktop environment (or app launcher) as a single thing. On Linux these apps are usually defined in a .desktop file. The apps installed by the Linux distribution’s package manager are typically in /usr/share/applications, and each one points to one of the executables in /usr/bin or /usr/libexec. You could even have two different “apps” launch a single executable, but each one using different CLI arguments to give the appearance of different apps.
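
          For illustration, a minimal .desktop file looks like this (names and paths assumed):

          [Desktop Entry]
          Type=Application
          Name=OpenTTD
          Exec=/usr/bin/openttd
          Icon=openttd
          Categories=Game;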

          The desktop environment you use might be reconfigured to display apps from multiple sources. You might also install apps from FlatHub, Lutris, Nix, Guix, or any of several other package managers. This is analogous to how in the CLI you need to set the “PATH” environment variable. If everything is configured properly (and that is not always the case), your desktop environment will show apps from all of these sources collected in the app launcher. Sometimes you have the same app installed from multiple sources, and you might wonder “why does GNOME Shell show me ‘OpenTTD’ twice?”

          There is no easy solution, no one agreed-upon algorithm to keep things easy for end users who install apps from multiple sources besides the default app store. Windows, macOS, and Android all have the same problem. But I have always felt that Linux (especially Guix OS) has the best way of solving this problem.

        • penquin@lemm.ee · 7 months ago

          Not all. I’ve had apps install in opt, flatpaks install in var out of all places. Some apps install in /etc/share/applications

          • teawrecks@sopuli.xyz · 7 months ago

            In /etc? Are you sure? /usr/share/applications has your system-wide .desktop files, (while .local/share/applications has user-level ones, kinda analogous to installing a program to AppData on Windows). And .desktop files could be interpreted at a high level as an “app”, even though they’re really just a simple description of how to advertise and launch an application from a GUI of some kind.

            • penquin@lemm.ee · 7 months ago

              OK, that was wrong. I meant /usr/share/applications. Still, more than one place.

              • teawrecks@sopuli.xyz · 7 months ago

                The actual executables shouldn’t ever go in that folder though.

                Typically packages installed through a package manager stick everything in their own folder in /usr/lib (for libs) and /usr/share (for any other data). Then they either put their executables directly in /usr/bin or symlink over to them.

                That last part is usually what results in things not living in a consistent place. A package might have something that qualifies as both an executable and a lib, so they store it in their lib folder, but symlink to it from bin. Or they might not have a lib folder, and just put everything in their share folder and symlink to it from bin.

    • Julian@lemm.ee · 7 months ago

      Someone already gave an answer, but the reason it’s done that way is because on Linux, generally programs don’t install themselves - a package manager installs them. Windows (outside of the windows store) just trusts programs to install themselves, and include their own uninstaller.

    • Max-P@lemmy.max-p.me · 7 months ago

      Expanding on the other explanations. On Windows, it’s fairly common for applications to come with a copy of everything they use in the form of DLL files, and you end up with many copies of various versions of those.

      On Linux, the package manager manages all of that. So if say, an app needs GTK, then the package manager makes sure GTK is also installed. And since your distribution’s package manager manages everything and mostly all from source code, you get a version of the app specifically compiled for that version of GTK the distribution provides.

      So if we were to do it kind of the Windows way, it would very, very quickly become a mess, because it’s not just one big self-contained package you drop in C:\Program Files. Linux follows the FHS, which roughly defines where things should go. Binaries go to /usr/bin, libraries to /usr/lib, shared files go to /usr/share. A bunch of those locations are somewhat special; for example, .desktop files in /usr/share/applications show up in the menu so you can launch them. That said, Linux does have a location for big standalone packages: that’s usually /opt.

      There’s advantages and inconveniences with both methods. The Linux way has the advantage of being able to update libraries for all apps at once, and reduce clutter and things are generally more organized. You can guess where an icon file will be located most of the time because they all go to the same place, usually with a naming convention as well.

    • Possibly linux@lemmy.zip · 7 months ago

      Because dependencies. You also should not be installing things you download off the internet, nor should you use install scripts.

      The way to install software is through your distro’s package manager or Flatpak.

    • bloodfart@lemmy.ml · 7 months ago

      different strokes.

      windows comes from the personal computing world and retains a bunch of stuff from it to this very day for no good reason, in this case there used to be no guarantee that a particular installation target would have the target directory mapped in a consistent way so the installer would make a guess and give the user a chance to change it.

      if that sounds stupid, it is. no one writes in assembly anymore, they target the OS and nowadays the OS will have a consistent set of folders to install stuff to. we all know where the program “should” be installed to already.

      but it didn’t used to be like that in the PC world! used to be your computer wasn’t a fixed purpose windows computer from the jump, never to be anything else. there were different OSes that people would use regularly and even different DOS environments which a person could use to run programs under. Hard disks weren’t disks inside the machine, but big beige external disks that you’d plug up, set beside the computer and access after booting. in that setup where a programmer targeted DOS (if they cared about the execution environment at all and didn’t just write for the processor) it made sense to ask where someone was gonna want to install their software, and to what extent they’d even want to start dirtying up the media they paid good money for with some knuckleheads weird files from some goofy program on a stack of floppy disks.

      linux comes from the unix world, where the question of where something installs is easy and straightforward: it installs in $PATH. what is $PATH? it’s where the os will look when you try to run something, to see if it can find a program by that name. if a program isn’t installed in $PATH, then when you type its name in and hit enter the computer won’t know what the hell you’re talking about, and you’ll have to type its whole-ass location out and hit enter.
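
      you can see yours like so (a typical value - yours will differ):

      $ echo $PATH
      /usr/local/bin:/usr/bin:/bin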

      Why didn’t unix systems that linux imitates ask you where to install stuff? because usually it wasn’t your choice! linux was unix for personal computers and unix was run on systems that took up whole rooms with all sorts of equipment. you might be the user of that system but never have access to the room with all the spinning disks and flashing lights, stuck on a terminal dialing in over a serial line.

      so the assumption was that you’d have a variable in your user environment that would say where things were installed but not that you’d have the ability to change it or even install things.

      so why in a linux environment would you ever install anything outside of $PATH or even want to be sure where something’s installed at all?

      even under linux it can be useful to do either. installing outside of $PATH keeps programs from being accidentally autocompleted or invoked. installing in a particular component of $PATH ($PATH can be many directories!) lets you put serious business programs that demand maximum performance on faster media.

      so why the hell won’t linux systems give you the option of installing in a specific location or outside of $PATH altogether?

      they will, but unlike windows, they don’t ask you. unless you specifically ask to do that unique and very abnormal operation, they just do the usual thing. when you want to install weirdly you gotta dig into your package manager and packaging system. sometimes you unzip a package and change a line in a file then zip it back up and install from your modified version.

  • blakeus12 [they/them, he/him]@hexbear.net · 7 months ago

    what is hyprland

    why do ppl use the CLI for things like making and moving files? i find the GUI easier and faster as well as less prone to mistakes

    what is wayland and xorg, and why does everyone argue about them

    • hello_hello [they/them, comrade/them]@hexbear.net · 7 months ago

      hyprland

      A wayland compositor and tiling window manager. The lead developer of the project is a Polish transphobic workaholic.

      why do ppl use the CLI for things like making and moving files? i find the GUI easier and faster as well as less prone to mistakes

      If you understand how shell scripting works, you can easily automate menial tasks. The CLI is also an interface shared by all operating systems, so if you know how to work in a shell you’re not bound to any particular workflow/desktop GUI. Keep using GUIs though; they exist for a reason.

      what is wayland and xorg, and why does everyone argue about them

      Both are display protocols that are in charge of displaying graphics on your screen. Xorg is over 30 years old, while Wayland is only about 15 years old. The polemic about Xorg was that the codebase was unmanageable and the design architecture of the program was inherently flawed (for example, a screenlocker gets access to your entire screen, including apps and the desktop, making writing malware for X11 a 3-line Python script). X11 was designed during a time when people were using actual real-life terminals and mainframes. Wayland is much more modern and akin to how modern graphics APIs are handled (for the most part).

      Wayland at its core has been and always will be designed by committee, so a lot of the arguing is necessary (though sometimes long-winded) to make sure not to repeat Xorg’s mistakes. Protocols take months if not years to be merged into Wayland, and those protocols have to be implemented by each Wayland compositor itself, rather than everyone sharing one program as with Xorg.

      Watch this video for more information, explains it much better and is from an actual wayland board member.

      Why YOU should write a Wayland compositor – Victoria Brekenfeld – HiP22 Berlin

    • Cyclohexane@lemmy.ml (OP) · 7 months ago

      Xorg is a display server for the Linux ecosystem. Every ecosystem has a display server. It is what makes it possible for you to have graphical applications with movable windows that can talk to each other, or have a mouse cursor that can click on things.

      Wayland is a replacement for Xorg because Xorg is old and its developers said an alternative is needed. Wayland has differences that I won’t discuss here, but I’ll be happy to do so if you ask.

      Hyprland is a wayland compositor. A compositor is basically an implementation of wayland (there are many) and gives you a windowing system that you can run graphical applications through. It is usually a lot more minimal than having a full graphical desktop like KDE or Gnome.

      Hyprland belongs to a class of compositors called “tiling”, which forces windows into a tiling formation. In other words, windows do not overlap or stack on top of each other. Hyprland stands out in having a lot of eye candy and visual effects.

      I use CLI for moving files, etc. After you use it for a while, you find out it can be more efficient, faster, and more pleasant to work with.

    • bloodfart@lemmy.ml · 7 months ago

      it’s faster for me to type out cp -r source/directory destination/directory than it is to open a file manager, navigate to my source, ctrl-a ctrl-c navigate to my destination, ctrl-v. this is not always true. look at the work done by the plan9 people to learn more

      idk what hyprland is specifically, but it’s either a window manager or compositor or something for use with wayland.

      wayland and xorg are ways to do graphical user interfaces in unix systems. wayland is supposed to fix problems that have long been solved or worked around in xorg. it’s new and doesn’t work with or support everything. xorg is old and has problems, but it works very well.

    • Captain Aggravated@sh.itjust.works · 7 months ago

      Hyprland: don’t know. Apparently, reading someone else’s comment, it has to do with Wayland.

      Which leads to answering out of order about Wayland and Xorg. Both are windowing systems, major components of the GUI/desktop environment. Xorg, aka X or X11, is older than Linux; it dates back to the early 80’s. It just wasn’t designed to handle things like multiple monitors with variable refresh rate and all the wacky stuff we have now. It’s amazing it’s hung on this long but the sober fact is X is old and busted.

      Wayland is the new hotness meant to replace Xorg. It works a bit differently, some old software won’t work with it so there have to be converters, and there are some issues with Nvidia compatibility with Wayland. There are very few people who just want to stubbornly stay with X, but Wayland still doesn’t work well for their use case, which is why there is much discussion about it.

      I use the CLI for things like making and moving files for a lot of reasons.

      • I’m interacting with another machine through SSH
      • I’m maintaining a server that has no GUI installed
      • I’m doing something kind of weird like using scp to send a file from one computer to another via an SSH tunnel
      • I’m working on a large batch of files.
      • I’m doing something complex or multi-part to a bunch of files.

      For example, when I ripped my DVD collection, I had an issue where the software generated file names like S4D2E3.mp4, i.e. Season 4, Disc 2, Episode 3. I was able to copy-paste a list of the episode names for an entire season into a text file, and then using the CLI I iterated through the lines of that file, renaming each video file and moving it to the correct storage directory. Saved a lot of manual F2ing.

      Of course, I didn’t type those lines of bash each time; I saved them as a script and then ran that each time.
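
      A sketch of that kind of loop (the file names and target path here are placeholders, not the original script):

      #!/bin/bash
      # rename S4D2E<n>.mp4 using one episode title per line of names.txt
      i=1
      while IFS= read -r name; do
          mv "S4D2E${i}.mp4" "/media/tv/Season 4/${name}.mp4"
          i=$((i+1))
      done < names.txt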

      Learn a little bit of regex, how to use vim, how pipes work, and a bit about stuff like imagemagick or pandoc or ffmpeg and you’ll see why Bash is so handy.

    • AnIndefiniteArticle@sh.itjust.works · 7 months ago

      The CLI has many advantages over a GUI. For one, actions are concrete, repeatable, and scriptable. This saves time, as you can reuse previous commands and edit them appropriately for the current situation. It also makes it easy to look back and verify what you have done. The command line is also a much more stable interface: GUIs change all the time, and it’s hard to remember where things are located. The structure of a Unix system from the command line facilitates the discovery of installed commands/programs and documentation. You can record these actions once and repeat them on many machines, and you can script common activities (e.g. bulk file renaming) that make file and data management easier.

    • Eugenia@lemmy.ml · 7 months ago

      You can’t. Apple’s iPads and iPhones are e-waste from the moment they run out of security and OS updates. Apple doesn’t allow third party installations.

    • It’s technically possible, but there aren’t any proper instructions; I assume it’s pretty complicated, and you could break the device while doing it. Also, I don’t think it would run particularly well, since there are no Linux drivers for Apple’s proprietary hardware (except for the M1, because it was reverse-engineered, but iPads use A-series chips).

  • snooggums@midwest.social · 7 months ago

    I have a Windows PC with 6 drives, mostly SSDs and one HDD, that I assume are all NTFS. Two of the drives are NVMe(?) attached to the mobo, and I only have one mobo with NVMe slots. I have a number of older boards that top out at SATA connections.

    If I install Linux Mint, can I format one NVMe drive with whatever the currently preferred Linux filesystem is, install Mint, and move the files from the other drives around as I format each one?

    Or do I need to move all the data I want to keep to SATA drives, put them in a different windows box, and then copy them over using a network connection?

    It’s been a while, and I’m guessing my lack of finding an answer means Linux still doesn’t work with NTFS well enough to do what I’m thinking of.

    • shadowintheday2@lemmy.world · 7 months ago

      You can freely manipulate NTFS in Linux. Just make sure your distribution is on kernel >= 5.15 with the new driver enabled; otherwise you may need to install the ntfs-3g driver. Other than that, the Arch Wiki has info that may help you on any distro:

      https://wiki.archlinux.org/title/NTFS

      I have done something similar to what you want to do; I just needed the ntfs-3g driver installed, and the “Disks” (GNOME Disks) application would mount/read/write the disks as usual.

    • NateSwift@lemmy.dbzer0.com · 7 months ago

      It depends on exactly how you plan to do things. The Linux kernel supports reading NTFS but not writing to it. I’m not sure exactly how full your drives are, but you might be able to consolidate some before installing Linux.

      There are a couple utilities that let your mount an NTFS file system for read & write, but I wouldn’t trust them for important data.

      • d3Xt3r@lemmy.nz (Mod) · 7 months ago

        The Linux kernel supports reading NTFS but not writing to it.

        That’s not true. Since kernel 5.15, Linux uses the new NTFS3 driver, which supports both read and write. And performance wise it’s much better than the old ntfs-3g FUSE driver, and it’s also arguably better in stability too, since at least kernel 6.2.

        Personally though, I’d recommend being on 6.8+ if you’re going to use NTFS seriously, or at the very least, 6.2 (as 6.2 introduces the mount options windows_names and nocase). @snooggums@midwest.social
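
        Mounting with the in-kernel driver looks like this (device path assumed; those two options need 6.2+):

        sudo mount -t ntfs3 -o windows_names,nocase /dev/sdb1 /mnt/windows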

      • snooggums@midwest.social · 7 months ago

        As long as I can read from the second nvme drive I have enough total space to easily shuffle around.

        My issue was that I couldn’t fit everything onto just the SSDs at the same time.

        • NateSwift@lemmy.dbzer0.com · 7 months ago

          Reading works great! If you need to mount the drive manually (IIRC Mint should do this for you), you’ll need to specify that it’s NTFS instead of the file system being detected automatically, but other than that it’s just plug and play.

    • bloodfart@lemmy.ml · 7 months ago

      linux can read and write ntfs, edit partition tables and resize ntfs partitions

      you could (theoretically, do not do this!) free up 8gb of space on your ssd in windows, defragment it, then boot a linux installer and use it to shrink the ntfs partition and install linux in that 8gb.

    • Nibodhika@lemmy.world · 7 months ago

      I was read/writing on NTFS partitions back in 2004, so your information that Linux doesn’t work with NTFS is at least 20 years old.

  • Sabata11792@kbin.social · 7 months ago

    I am still blowing up my install pretty often.

    Other than the user folder, what else should I back up for a fast and painless reinstall next time I get too adventurous?
    What should I break next?
    Does Nvidia hate me?
    How do I stop Windows from fucking up my BIOS boot order every time?

    • Lettuce eat lettuce@lemmy.ml · 7 months ago

      Timeshift will save you soooooo much pain. Set it up to auto backup a daily image. You can also manually create as many snapshots as you want.

      Timeshift has turned system-destroying mistakes I’ve made into mere 5-10 minute inconveniences. You can use it in the command line, so even if you blow up your whole desktop environment/window manager, you can still restore back to a known good state.

      I create a snapshot before any major updates or customizations.
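
      If you’re stuck in a TTY, the CLI side looks roughly like this:

      sudo timeshift --create --comments "before driver update"
      sudo timeshift --list
      sudo timeshift --restore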

      • Sabata11792@kbin.social · 7 months ago

        Both OSes are on different drives, so the boot loaders don’t see each other. I don’t trust Windows not to fuck up my entire drive, so I’ve got to select the drive from my BIOS every time. I may just pull the SATA cable unless my asshat friends want to play League.

    • bloodfart@lemmy.ml · 7 months ago

      you can’t stop windows from fucking up the bios. part of what makes a windows update “better” for everyone else is it fucking up the bios for you.

      you can make a bootable usb that you’re comfortable using, and get familiar with pivoting root to your installed unbootable system and using its grub repair tools.

      i haven’t worked with a linux system in like 16 years that didn’t include an automated utility to straighten grub out with one command, as long as you can get to its environment…
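
      the usual dance from a live usb goes something like this (device names assumed; the exact grub commands vary by distro):

      sudo mount /dev/sda2 /mnt
      for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
      sudo chroot /mnt
      grub-install /dev/sda && update-grub   # or grub-mkconfig -o /boot/grub/grub.cfg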

    • shadowintheday2@lemmy.world · 7 months ago

      Timeshift - make sure to “include hidden files” to recover any configuration for desktop environments.

      After a few mess-ups, you may find you don’t need to back up everything, only the file(s) that got messed up - and that’s still a good thing to have Timeshift for.