One foot planted in “Yeehaw!” the other in “yuppie”.

  • 1 Post
  • 15 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • Even on Windows, Proton Drive is hot garbage. It never syncs my files correctly. Has a tendency to leave half-encrypted uploads just lying around, eating up disk space.

    Don’t even get me started on how long it takes to upload anything. Got a 1 GB file? Good luck!

    And that’s before getting into the fact that it’s Proton’s third product. It was announced in 2019. Five years on, and they still don’t have Proton Drive as a working product.

    Another gripe I have is that the Linux VPN client still doesn’t support WireGuard. Sure, you can download WireGuard configuration files, and they work just fine. But changing servers is a pain in the ass because of it (rough sketch of what that juggling looks like at the end of this comment).

    It’s made me seriously consider dropping my Visionary plan and moving to a more competent provider.

    That being said, Proton Mail has been fantastic. And I have a ton of domains on it, so it would be a pain to move. I guess I’m just in a stalemate.
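
    Back on the WireGuard point: with plain config files you end up scripting the down/up dance yourself every time you want a different server. A minimal sketch, assuming the downloaded configs live in /etc/wireguard and the profile names (proton-us-1, proton-nl-2, ...) are just placeholders for whatever you saved:

        #!/usr/bin/env python3
        """Hypothetical helper for hopping between downloaded Proton WireGuard configs."""
        import subprocess
        import sys

        def switch(new_profile: str, old_profile: str | None = None) -> None:
            if old_profile:
                # wg-quick only manages one default-route tunnel at a time, so drop the old one first
                subprocess.run(["wg-quick", "down", old_profile], check=False)
            subprocess.run(["wg-quick", "up", new_profile], check=True)

        if __name__ == "__main__":
            # e.g. sudo ./switch-wg.py proton-nl-2 proton-us-1
            switch(*sys.argv[1:])

    The official client handles all of that (plus server lists) for you, which is exactly why the missing WireGuard support stings.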


  • I understand the sentiment… But… This is a terribly reasoned and researched article. We only need to look at NASA to see how this is flawed.

    Blown capacitors and resistors, solder failing over time and through various conditions, failing RAM/ROM/NAND chips. Just because the technology has fewer “moving parts” doesn’t mean it’s any less susceptible to environmental and age-based degradation. And we only get around those challenges by necessity and really smart engineers.

    The article uses an example of a 2014 Model S - but I don’t think it’s fair to compare 2 million kilometers in the span of 10 years with the same distance in the span of the quoted 74 years. It’s just not the same. Time brings seasonal changes, which happen regardless of whether you drive the vehicle or not. Further, in many cases, the car’s computers never completely turn off, meaning they are running 24/7/365. Not to mention that Teslas in general have poor reliability as tracked by multiple third parties.

    Perhaps if there were an easy-access panel that allowed replacement of 90% of the car’s electronics through standardized cards, that would go a long way toward realizing a “Buy It for Life” vehicle. Assuming that we can just build 80-year, “all-condition” capacitors, resistors, and other components isn’t realistic or scalable.

    What’s weird is that they seem to concede the repairability aspect at the end, without any thought whatsoever as to how that impacts reliability.

    In conclusion: a poor article with a surface-level view of reliability, using bad examples (one person’s Tesla) to prop up a narrative that EVs - as they exist - could last forever if companies wanted them to.


  • yeah…

    They asked for easy or newbie-friendly, and didn’t particularly mention privacy concerns.

    Other than that, if they don’t have port 80/443 ingress from their ISP, there are few simple solutions that don’t require another server, which also needs management, either by them or a corporate entity.

    Back when I was on a DOCSIS modem, I noticed concurrent downloads would disrupt uploads and vice versa. I think this may depend on the type of connection OP has.

    I used to work at a cable company. That was usually a problem for people with low SNR, either from external factors (a tree branch on a cable line) or in-home ones (a bad splitter). A modem will ramp up its gain to offset this (to a point), and in so doing create a lot more interference between channels. Or they were hitting their ingress rate limit (which is quite aggressive on residential plans because of DDoSes). It’s surprisingly easy to hit your ingress rate limit with modern HTTP/HTTPS web servers hosting complex web apps. Lots of concurrent connections open up to download all the resources when you visit any website in a modern browser, and while it’s not a TON of data, the short burst easily hits the PPS/BPS rate limit that ISPs employ (rough numbers at the end of this comment).

    But yeah, it all depends on the ISP.
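
    To put rough numbers on that page-load burst (every figure below is a made-up but plausible assumption, not a measurement):

        # Back-of-envelope for why one page load can spike inbound packets per second.
        resources_per_page = 100      # assets a modern web app might request
        packets_per_request = 10      # handshake + request + ACKs, roughly
        burst_seconds = 1.0           # most of it lands within about a second

        pps = resources_per_page * packets_per_request / burst_seconds
        print(f"~{pps:.0f} inbound packets/sec during the burst")  # ~1000 pps

    That’s nothing for a datacenter uplink, but it’s exactly the kind of spike a residential ingress limiter is built to clamp down on.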


  • I’d argue that the cloudflared daemon is even easier to use than a static WireGuard or OpenVPN tunnel. It’s basically set and forget. The downside is that you must use Cloudflare. This may or may not be a big deal depending on OP’s needs.

    I moved from a place with symmetrical gigabit to “gigabit cable” with 30 Mbps upload, and it definitely wasn’t good enough for my small family. Photos are quite large these days - not to mention videos. Though it likely has more to do with the bandwidth shaping my ISP does than with the 30 Mbps rate itself.

    I also agree that it’s not perfect, but it’s very likely the most newbie-friendly solution at the moment, especially as a single deployment versus going piecemeal.



  • The best “bang for the buck” for your use case is Nextcloud - Nextcloud Talk is your Jitsi replacement, and the Files feature can be extended with the Nextcloud Photos plugin (https://github.com/nextcloud/photos).

    As for your domain question:

    1. You should use any computer you’d like that meets the Nextcloud recommendations; the key, of course, is isolating this machine on your home network so any “funny business” stays on the server. You can do this with VLANs, or with an entirely separate LAN connected to a different WAN (ISP).

    2. Many places; I like porkbun.com for cheap custom domains, but for your use case you might be able to use a Dynamic DNS provider for free. It just likely won’t be an easy-to-remember URL (or at least, not as easy as a root domain). If you have a newer ASUS or Netgear router/modem, both have Dynamic DNS built in, and you can select from a few providers with both free and paid tiers. Also, it might be better to use Google Domains (now Squarespace Domains) since, IIRC, many routers’ DynDNS configs support Google Domains too. Cloudflare can also be a decent registrar, and I’d recommend them if you use any other Cloudflare services (see below; there’s also a rough DNS-update sketch at the end of this comment).

    3. Other things to consider: your ISP may block port 80, which means lots of issues. If this is the case, you might want to use a tunnel of some sort; Cloudflare has a great solution here. Even if they don’t block port 80, they may aggressively throttle and shape your incoming traffic, causing issues. Again, the tunnel is a good solution here. And, of course, your upload bandwidth matters a lot: you’ll need something around 100 Mbps upload for a decent experience when accessing your stuff over the internet. The 30 Mbps that’s typical of DOCSIS modems won’t cut it. Outside of these concerns, it’s all about making sure you isolate your server from your “home stuff” to keep things secure.
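
    On the Dynamic DNS point in #2: if you do land on Cloudflare, a tiny cron job is enough to keep an A record pointed at your home IP. A rough sketch (the token, zone ID, record ID, and hostname are all placeholders you’d fill in from your Cloudflare dashboard):

        #!/usr/bin/env python3
        """Minimal DynDNS-style updater using the Cloudflare API; values below are placeholders."""
        import requests

        API_TOKEN = "..."                  # scoped token with DNS edit permission
        ZONE_ID = "..."
        RECORD_ID = "..."
        RECORD_NAME = "cloud.example.com"  # hypothetical hostname for your server

        def current_ip() -> str:
            # ipify returns your public IP as plain text
            return requests.get("https://api.ipify.org", timeout=10).text.strip()

        def update_record(ip: str) -> None:
            url = f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records/{RECORD_ID}"
            resp = requests.put(
                url,
                headers={"Authorization": f"Bearer {API_TOKEN}"},
                json={"type": "A", "name": RECORD_NAME, "content": ip, "ttl": 300, "proxied": False},
                timeout=10,
            )
            resp.raise_for_status()

        if __name__ == "__main__":
            update_record(current_ip())  # run every few minutes from cron or a systemd timer

    If you go the tunnel route instead, you can skip this entirely, since cloudflared dials out from inside your network and doesn’t care what your home IP is.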



  • I mean, sure, maybe 10 years ago. But most static sites like blogs and such can fit entirely on Cloudflare Pages under the free tier. Or heck, even the free allotment on AWS S3 or other object-storage providers.

    I mean, perhaps this isn’t a static site and it’s built on some sort of CMS with a Postgres database in the background. In that case, it probably runs around $5 to $10 a month.

    Of course, this all presumes that the person setting it up is fairly savvy about the offerings available. I see a lot of people making silly decisions in this space, thinking they need some full-fat virtual private server when all they really need is an object-storage bucket behind a DNS CNAME.
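
    For the object-storage route, the whole “deploy” step really can be this small. A rough sketch with boto3 (the bucket name and build directory are placeholders, and the CDN/CNAME in front of the bucket isn’t shown):

        #!/usr/bin/env python3
        """Push a static site build to an S3 bucket; names below are illustrative only."""
        import mimetypes
        from pathlib import Path

        import boto3

        BUCKET = "www.example.com"   # hypothetical bucket matching the CNAME target
        BUILD_DIR = Path("public")   # wherever your generator writes its output

        s3 = boto3.client("s3")
        for path in BUILD_DIR.rglob("*"):
            if path.is_file():
                key = path.relative_to(BUILD_DIR).as_posix()
                content_type = mimetypes.guess_type(key)[0] or "application/octet-stream"
                s3.upload_file(str(path), BUCKET, key, ExtraArgs={"ContentType": content_type})
                print(f"uploaded {key} ({content_type})")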


  • I guess I didn’t really see the pressure that they were under.

    I hope they heal! But it’s a bummer that such an excellent resource will be taken down.

    I wish more creators were willing to hand their creations to someone who wishes to continue them. But oftentimes, I fear that a project is far too entwined with a person’s identity for that to be a common occurrence.



    He did this thing where he unified his shell history across thousands of hosts - it was super handy given our extensive use of Ansible playbooks and database management commands. He could then use a couple of hotkeys to query this history within a newly opened document. Super handy for writing out shell command steps or wrapping things in a bash script you’re working on. Unfortunately, I don’t really have a link to HOW to do this; I just remember thinking, “Oh my god, that would save me SO much time” (a rough guess at one way to approximate it is at the end of this comment).

    Nowadays, I just have a giant document with hundreds of our runbook commands and let GitHub Copilot make it SUPER easy to do the same thing without establishing an SSH session in the backend.
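
    I never saw his actual setup, so this is pure guesswork at one way to approximate it: pull .bash_history from each host over SSH and merge it into a single local file you can grep or wire up to a hotkey. The hostnames, paths, and output file below are all made up:

        #!/usr/bin/env python3
        """Guess at a poor man's unified shell history; everything here is illustrative."""
        import subprocess

        HOSTS = ["db-01.example.com", "app-01.example.com"]  # hypothetical inventory
        MERGED = "/tmp/merged_history.txt"

        def fetch_history(host: str) -> list[str]:
            # Plain ssh + cat; assumes key-based auth is already in place.
            result = subprocess.run(
                ["ssh", host, "cat ~/.bash_history"],
                capture_output=True, text=True, timeout=30,
            )
            return result.stdout.splitlines() if result.returncode == 0 else []

        if __name__ == "__main__":
            seen: dict[str, None] = {}  # dict preserves order while de-duplicating
            for host in HOSTS:
                for line in fetch_history(host):
                    seen.setdefault(line.strip(), None)
            with open(MERGED, "w") as fh:
                fh.write("\n".join(seen) + "\n")
            print(f"wrote {len(seen)} unique commands to {MERGED}")

    Whatever he actually used was certainly slicker than this, but the runbook document has mostly replaced the need for it for me anyway.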







  • On a technical level, your own user count matters less than the user and comment counts of the instances you subscribe to. Too many subscriptions can overwhelm smaller instances and saturate a network in terms of packets per second and your ISP’s routing capacity - not to mention your router. Additionally, most ISPs block traffic going to your house on port 80, so you’d likely need to put it behind a Cloudflare tunnel for anything resembling reliability. Your ISP may be different, and it’s always worth asking what restrictions they have on self-hosted services (non-business use cases specifically); otherwise, going with your ISP’s business plan is likely a must. Outside of that, yes, you’ll need a beefy router or switch (or multiple) to handle the constant packets coming into your network.

    Then there’s a security aspect. What happens if your site is breached in a way that an attacker gains remote execution? Did you make sure to isolate this network from the rest of your devices? If not, you’re in for a world of hurt.

    These are all issues that are mitigated and easier to navigate on a VPS or cloud provider.

    As for the non-technical issues:

    There’s also the problem of moderation. What I mean is that, as a server owner, you WILL end up needing to quarantine, report, and submit illegal images to the authorities, even if you use a whitelist of only the most respectable instances. It might not happen soon, but it’s only a matter of time before your instance happens to be subscribed to a popular external community when it gets hit with a nasty attack, leaving you to deal with a stressful cleanup.

    When you run this in a homelab on consumer hardware, it’s easier for certain government entities to claim that you were not performing your due diligence and may even have been complicit in the content’s proliferation. Now, of course, proving such a thing is always the crux, but in my view I’d rather have my site running on things that look as official as possible. The closer it resembles what an actual business might do, the better I think I’d fare under a more targeted attack - from a legal/compliance standpoint.