

I appreciate the Simpsons reference, but my god not everything has to be a joke all the time. NASA being gutted is a travesty for everyone in every country.
They’ll probably go to the private sector, or land a job at a foreign space agency like ESA.
You parents out there … clue me in, but isn’t this the pinnacle of irresponsibility, even on a cool day?
I wouldn’t leave my 1-year-old alone for more than 5 minutes in the centre of a pillow fort in my house with the AC on (bad analogy; soft, fluffy surfaces can be dangerous to small children if they can’t reliably lift their faces to breathe).
There have been countless times when it’s a nice 18 degrees Celsius outside and I needed to run into the store to grab ONE thing. A total in-and-out time of maybe 3 minutes. I also live in a quiet, safe town. And yet each and every time, I made the effort to get my kid out of his car seat, carry him inside with me, get the stuff, and then do the whole process in reverse: getting him back into his seat, buckling him in, setting his toys up again, etc.
I would throw myself off a cliff for being the worst parent imaginable if I left him in the car for those 3 minutes because I couldn’t be bothered to make the effort.
The mother in that news story didn’t deserve the child she lost, and the child didn’t deserve her as a mother, for whatever good that term does here.
Whenever I screw something up or something goes sideways. Or when I’m migrating from one host to another.
Not really useless; it’s an extra layer of management (a good thing). The Proxmox layer can stay nearly static while giving you external management of the OS that runs the containers.
I have a 3-server Proxmox cluster running various VMs doing different things. Some of those VMs are my container hosts.
Besides, you can run containers directly on Proxmox itself.
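To make that layering concrete, here’s a minimal sketch using the third-party proxmoxer package (the host, credentials, and names are all hypothetical; use API tokens rather than passwords in a real setup). It asks the Proxmox API, from outside any guest OS, what each node is running:

```python
# Minimal sketch using the third-party `proxmoxer` package; host and
# credentials below are hypothetical.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI(
    "pve.example.lan",          # hypothetical cluster address
    user="root@pam",
    password="changeme",        # hypothetical; prefer API tokens in practice
    verify_ssl=False,
)

# The Proxmox layer stays static while reporting on everything it hosts:
# full VMs (some of which are Docker hosts) and LXC containers running
# directly on Proxmox itself.
for node in proxmox.nodes.get():
    name = node["node"]
    for vm in proxmox.nodes(name).qemu.get():
        print(f"{name}: VM  {vm['vmid']} {vm.get('name', '?')} ({vm['status']})")
    for ct in proxmox.nodes(name).lxc.get():
        print(f"{name}: LXC {ct['vmid']} {ct.get('name', '?')} ({ct['status']})")
```

The point is the vantage: the same API layer covers both VMs and containers without ever touching a guest OS.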
I’ve never worked with buildpacks, so that’s interesting.
Sure, ZFS snapshots are dead simple and fast. But you’d need to ensure that each container and its volumes are created on their own respective datasets.
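Here’s a minimal sketch of that bookkeeping (the pool and container names are hypothetical; it shells out to the standard `zfs` CLI, so it needs ZFS and the right privileges on the host). One dataset per container means a snapshot captures exactly that container’s volumes and nothing else:

```python
# Minimal sketch: one ZFS dataset per container so snapshots are scoped
# per container. Pool/dataset names are hypothetical; requires the `zfs`
# CLI and appropriate privileges on the host.
import subprocess
from datetime import date

POOL = "tank/containers"                  # hypothetical parent dataset
CONTAINERS = ["jellyfin", "nextcloud"]    # hypothetical container names

def zfs(*args: str) -> None:
    # Thin wrapper around the real `zfs` command.
    subprocess.run(["zfs", *args], check=True)

for name in CONTAINERS:
    dataset = f"{POOL}/{name}"
    # `-p` creates missing parents and succeeds if the dataset already exists.
    zfs("create", "-p", dataset)
    # Snapshotting just this dataset captures only this container's volumes.
    zfs("snapshot", f"{dataset}@nightly-{date.today().isoformat()}")
```

The catch is exactly the one above: Docker won’t place a container’s volumes on the right dataset for you; you have to mount or configure each one that way yourself.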
And none of this is implying that it’s hard. The top comment was criticizing OP for using VMs instead of containers. Neither one is better than the other for all use cases.
I have a ton of VMs for various use cases, and some of those VMs are container/Docker hosts. Each tool where it works best.
Backups? I have an automatic job running every night.
It’s not the same. You then need to manage volumes separately from images, and if you’re mounting a host folder for the Jellyfin files, you have to manage that separately via the host.
Container images are supposed to be stateless, so if you’re only backing up the volumes, you need to somehow track which Jellyfin version they’re tied to in case you run into any issues (sketched below).
A VM is literally all of that but in a much more complete package.
I can back up an entire VM snapshot very quickly and then restore it in a matter of minutes. Everything from the system files, database, Jellyfin version, and configs is backed up and restored as one easy-to-manage bundle.
A container is not as easy to manage in the same way.
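As a sketch of the version tracking mentioned above (the container name and output path are hypothetical; `docker inspect` and its `--format` flag are standard Docker CLI), you could record exactly which image a volume backup belongs to, next to the backup itself:

```python
# Minimal sketch: record which Jellyfin image the volume data was written
# by, alongside the backup. Container name and output path are hypothetical.
import json
import subprocess
from datetime import datetime, timezone

CONTAINER = "jellyfin"    # hypothetical container name

def docker_inspect(fmt: str) -> str:
    # Ask the Docker CLI for a single field of the running container.
    result = subprocess.run(
        ["docker", "inspect", "--format", fmt, CONTAINER],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()

manifest = {
    "container": CONTAINER,
    "image_tag": docker_inspect("{{.Config.Image}}"),  # e.g. jellyfin/jellyfin:latest
    "image_id": docker_inspect("{{.Image}}"),          # immutable sha256 image ID
    "taken_at": datetime.now(timezone.utc).isoformat(),
}

# Stored next to the volume backup, this lets a restore pin the exact
# image version the data was created under.
with open("jellyfin-backup-manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```

That’s the extra moving part a VM snapshot simply doesn’t have: the VM captures system, data, and version in one artifact.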
So what’s going to happen is that some group will exploit all the horribly insecure home routers out there with active CVEs that will never get patched. They’ll then use systems inside those networks to do the torrenting and securely copy the files out.
Then a ton of people will be accused of piracy based on nothing but their IPs and get disconnected.
I’m sure it’ll be fine…
YT’s blocked on it.
Just tried it. It works.
And yet whenever some achievement is made, the headlines are “Musk achieves great feat”
My personal problem is that I’m so bad at using it
It’s not you, it’s just that Windows is badly designed.
Not at all. It’s not “how likely is the next word to be X”. That wouldn’t be context.
I’m guessing you didn’t watch the video.
I’m not wrong. There are mountains of research demonstrating that LLMs encode contextual relationships between words during training.
There’s so much more happening beyond “predicting the next word”. This is one of those unfortunate cases of science communication being dumbed down: it was said once and now it just gets repeated non-stop.
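You can see this for yourself with a minimal sketch (assuming the Hugging Face `transformers` package, and using BERT as a stand-in because its contextual embeddings are easy to inspect): the same word gets a measurably different vector depending on its context, which a plain next-word frequency table couldn’t produce.

```python
# Minimal sketch (Hugging Face `transformers`, BERT as a stand-in): the
# same word is represented differently depending on context, i.e. the
# model encodes contextual relationships, not just word frequencies.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embedding_of(sentence: str, word: str) -> torch.Tensor:
    # Return the contextual hidden-state vector for `word` in `sentence`.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    token_id = tokenizer.convert_tokens_to_ids(word)
    position = inputs["input_ids"][0].tolist().index(token_id)
    return hidden[position]

river = embedding_of("he sat on the river bank", "bank")
money = embedding_of("the bank raised its interest rates", "bank")

# Well below 1.0: "bank" gets a different vector in each sentence.
print(torch.nn.functional.cosine_similarity(river, money, dim=0).item())
```

This only demonstrates contextual representations, not everything LLMs do, but it’s enough to show the bare “word frequency” framing is wrong.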
If you really want a better understanding, watch this video:
And before your next response starts with “but Apple…”
Their paper has had many holes poked in it already. It’s also no coincidence that it was released just before their WWDC event, which had almost zero AI content. They flopped so hard on AI that they’re even facing class-action lawsuits over false advertising. In fact, it turns out that many of their AI demos from last year were completely fabricated and didn’t exist as a product when they were announced; even some senior Apple people only learned of those features during the announcements.
Apple’s paper on LLMs is completely biased in their favour.
“it just repeats things which approximate those that have been said before.”
That’s not correct and oversimplifies how LLMs work. I agree with the spirit of what you’re saying, though.
It’s been talked about to death. It’s been analysed to death.
But here’s a very detailed and thorough breakdown:
We work in 20-minute microsleep shifts.