Aerial pic of my friend running Arch Linux at my company where everyone else is using W11
Why do they struggle so much with some “obvious things” sometimes? We wouldn’t have a USB-C iPhone if the EU hadn’t pressured them to make the switch
Damn, that’s a very elegant way to put it
Left side: Black Mirror S01E02, “Fifteen Million Merits”. A guy tries to “break the system”, but it backfires and his critique, which was supposed to change people’s minds, is absorbed by that same system and turned into an entertainment product
Right side: “Being Ugly: My Experience”, a YouTube video of a guy explaining his experience of being ugly
Actually, the tweet is wrong: in a series of numbers, each new result can always be above the average of everything that came before, as long as the nth number is sufficiently greater than the previous ones. For example, with f(n) = n^2 every new term is above the average of all the terms before it
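A quick sanity check of that claim (my own throwaway script, not from the thread), comparing each new square against the running average of the previous ones:

```python
# Check that f(n) = n^2 always exceeds the average of the previous terms.
def running_average_check(terms):
    total = 0.0
    for i, value in enumerate(terms):
        if i > 0:
            average_so_far = total / i
            assert value > average_so_far, (value, average_so_far)
        total += value

running_average_check([n ** 2 for n in range(1, 10_001)])
print("every n^2 beat the average of the previous squares")
```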
I was going to say this: their new architecture seems to be better than the previous ones, they have more compute and, I’m guessing, more data. The only explanation for this downgrade is that they tried to ban porn. I hadn’t read anything online about this at the time anyway; I’m only learning about it now
Interestingly enough, even if it makes sense that Boeing is now fully focused on improving quality, it also makes sense to me that Airbus must be ensuring and pushing a lot of quality upgrades as well; it would be perfect marketing for them if no mistakes whatsoever happened on Airbus’s planes
Are they going to integrate Mastodon instead?
“but I don’t need privacy, I don’t have anything to hide!”
What would be the solution? Re-solder some chip on the motherboard?
Right, but AFAIK Glaze targets the CLIP model inside diffusion models, which means any new version of CLIP would remove the effect of the protection
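To illustrate the general idea (this is my own toy sketch of an encoder-targeted perturbation, not Glaze’s actual method; the model checkpoint, file name and budget values are assumptions): if the “cloak” is optimized against one specific image encoder, an encoder trained later has no particular reason to react to it.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
for p in model.parameters():
    p.requires_grad_(False)

image = Image.open("artwork.png").convert("RGB")  # hypothetical input file
pixels = processor(images=image, return_tensors="pt")["pixel_values"]

with torch.no_grad():
    clean_embedding = model.get_image_features(pixel_values=pixels)

delta = torch.zeros_like(pixels, requires_grad=True)
epsilon, steps, step_size = 0.03, 40, 0.005  # assumed perturbation budget

for _ in range(steps):
    adv_embedding = model.get_image_features(pixel_values=pixels + delta)
    # Minimize similarity to the clean embedding *for this specific encoder*.
    loss = torch.nn.functional.cosine_similarity(adv_embedding, clean_embedding).mean()
    loss.backward()
    with torch.no_grad():
        delta -= step_size * delta.grad.sign()
        delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()

# `delta` is tuned to this checkpoint's feature space; a retrained CLIP with
# different weights would largely ignore it, which is the worry above.
```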
I do think the concept of Recall is very interesting; I want to explore a FOSS version where you have complete ownership of your data in a secure manner
… oh…
Should I delete this post? Hahah
I’m using LLMs to parse and organize information in my file directory:
- turning bank receipts into JSON files (a rough sketch of that part is further down),
- automatically renaming downloaded movies into a more legible format I prefer,
- summarizing clickbaity YouTube videos,
- Copilot in VS Code to code much faster, and ChatGPT all the time to discover new libraries and cut through boilerplate fast,
- a personal assistant that has access to a lot of metrics about my life (meditation streak, when I exercise, the status of my system, etc.) and helps me make decisions…
I don’t know about you, but I feel like I’m living in an age of wonder
I’m not sure what to say about the prompts; I feel like I’m integrating AI into my systems to automate mundane stuff and oversee more information, and I think one should be paid for the work and value produced
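For what it’s worth, the receipt-to-JSON piece I mentioned above is roughly this shape (a minimal sketch under my own assumptions: the model name, the JSON fields and the paths are all placeholders, and it assumes the OpenAI Python client):

```python
import json
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "Extract the merchant, date (ISO 8601), total amount and currency from this "
    "bank receipt text. Reply with a single JSON object and nothing else."
)

def receipt_to_json(receipt_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": receipt_text},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Hypothetical directory of exported receipt text files.
for path in Path("receipts").glob("*.txt"):
    data = receipt_to_json(path.read_text())
    path.with_suffix(".json").write_text(json.dumps(data, indent=2))
```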
This
AI is actually providing value and advancing at a huge rate; I don’t know how people can dismiss that so easily
I feel like “most people” only learn “one technology per category”. They know one operating system, one browser, one app to mindlessly scroll, one program to edit text. As a developer it shocks me a little, because I’m always eager to try new programming languages, technologies and ways to interact with things. I guess most people only know about Edge/Safari because they come pre-installed
It’s weird: I’ve been on Firefox for the vast majority of my life and I always had the perception that “everyone” was using it. Here on Lemmy you hear about it all the time, my friends use it, I see it in my news feeds, etc.
But when you check the market share, it’s around 2.8% while Chrome is at 65.1%: https://gs.statcounter.com/browser-market-share
That argument is fallacious and reductionist. I’m not denying that the situation is messed up, but objectively speaking we all have zero idea about who’s making what decisions and how this Google Search shitstorm was caused
People get very confused about this. Pre-training “ChatGPT” (or any transformer model) on “internet shitposting text” isn’t what makes it reply with garbage comments; bad alignment does. Google seems to have implemented no framework to prevent hallucinations whatsoever, and the RLHF/DPO applied seems to be lacking. But this is not a “problem with training on the entire web”. You could pre-train a model exclusively on a 4chan dump and, with the right fine-tuning, still end up with a perfectly healthy and harmless model. Actually, it’s not bad to have “shitposting” or “toxic” text in the pre-training data, because it gives the model the ability to identify and understand it
If anything, the real “problem with training on the entire web” is that we would be drinking from a poisoned well: AI-generated text has a very different statistical distribution from human-written text, which would degrade the quality of subsequent models. Evidence that data quality matters this much can be seen with SlimPajama, the deduplicated version of RedPajama, which improves the scores of trained models simply because it has less duplicated information and is a denser dataset: https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama
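As a toy illustration of what “deduplication” buys you (my own sketch, not the SlimPajama pipeline, which does near-duplicate detection at a much larger scale): even naive exact-match dedup on normalized text shrinks a corpus and stops a model from seeing the same passage over and over.

```python
import hashlib

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivial variations hash the same.
    return " ".join(text.lower().split())

def deduplicate(documents):
    seen = set()
    kept = []
    for doc in documents:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

corpus = [
    "The quick brown fox jumps over the lazy dog.",
    "the  quick BROWN fox jumps over the lazy dog.",  # near-identical copy
    "A completely different sentence about training data.",
]
print(len(deduplicate(corpus)), "of", len(corpus), "documents kept")
```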
Why is this on shitpost? I think it’s a perfectly valid hobby and it should be celebrated