

it’s listed on the project’s readme!


i think they’re talking about the proper old visual studio, a full-blown IDE!


well, the point of flatpak is to have bundled dependencies so applications run predictably no matter the distro
if one of your software’s dependencies gets updated, and your software isn’t, you may run into issues - like a function from the library you’re using getting removed, or its behaviour changing slightly. and some distros also apply patches to their libraries that break stuff too!
often, with complex libraries, even when you check the version number, you may get behavioural differences between distros depending on the compile flags used (e.g. some features being disabled, etc.)
so, while in theory portable builds work, for them to be practical they most often are statically linked (all the dependencies get built into the executable - no relying on system libraries). and that comes with a huge size penalty, even compared to flatpaks, since those do share some dependencies between flatpaks! you can for example declare a dependency on a specific version of the freedesktop SDK, which provides you with a bunch of standard linux tools, and that only gets installed once no matter how many of your packages use it
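here’s a rough sketch of that sharing (org.example.SomeApp is a made-up app id, and the runtime version is just an example):

```sh
# runtimes are installed once and shared by every flatpak that declares them,
# so org.freedesktop.Platform only takes up space a single time
flatpak list --runtime

# an app's manifest pins the runtime it wants, roughly like:
#   "runtime": "org.freedesktop.Platform",
#   "runtime-version": "23.08"
# installing another app that uses the same runtime reuses the already-downloaded copy
flatpak install flathub org.example.SomeApp
```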


it depends™
what are you protecting yourself against?
for my use case, that’d be good enough, i don’t want my school/building admins to snoop on the websites i visit, and don’t want to fear academic repercussions for torrenting and such
though if you think your government is out to get you, then tunneling to another country is probably best!


“AI” today mostly refers to LLMs, and whichever LLM you’re using, you’ll likely face the same issues (wrong answers creeping in, answers tending towards mediocrity, etc.) - those seem to be things you have to live with if you want to use LLMs. if you know you can’t deal with it, another rebrand won’t help anything


it sure seems like it though
i mean, they’ll never replace system package managers, but for desktop applications, flatpak is honestly quite good


wow is me, i am le surprised


i mean, the main issue is that theologians base their beliefs on the premise that some old texts hold universal truths


according to the github readme, you can just run sudo pro config set apt_news=false to disable those
if you have things set up the way you like on xubuntu, it’s maybe worth it to just do that rather than start fresh


iirc, postgresql renames itself in htop to show its current status and which database it’s operating on
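for example, on linux you can see those process titles with ps - the usernames, database name, and client address below are made up, but the format is roughly what postgres reports:

```sh
# show the command/title of every postgres process (output is illustrative)
ps -C postgres -o command
# postgres: checkpointer
# postgres: walwriter
# postgres: autovacuum launcher
# postgres: alice mydb [local] idle
# postgres: alice mydb 10.0.0.5(49152) SELECT
```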


the average person also isn’t as convincing as a bot we’re told is the peak of computer intelligence


there are tons of webrings still going these days!


well, i just tried it, and its answer is meh:
i asked it to transcribe “zenquistificationed” (made up word) in IPA, it gave me /ˌzɛŋˌkwɪstɪfɪˈkeɪʃənd/, which i agree with, that’s likely how a native english speaker would read that word.
i then asked it to transcribe that into japanese katakana, it gave me “ゼンクィスティフィカションエッド” (zenkwisuthifikashon’eddo), which is not a great transcription at all - based on its earlier IPA transcription, カション (kashon’) should be ケーシュン (kēshun’), and the エッド (eddo) part at the end should just not be there imo, or be shortened to just ド (do)


it is absolutely capable of coming up with its own logical stuff
interesting, in my experience, it’s only been good at repeating things, and failing on unexpected inputs - it’s able to answer pretty accurately whether a small number is even or odd, but not a large one, which to me indicates it’s parroting answers rather than reasoning
do you have example prompts where it showed clear logical reasoning?


huh, i kinda assumed it was a term made up/taken by journalists mostly, are there actual research papers on this using that term?


because it’s a text generation machine…? i mean, i wouldn’t say i can prove it, but i don’t think anyone can prove it’s capable of thinking, much less of reasoning
like, it can string together a coherent sentence thanks to well-crafted equations, sure, but i wouldn’t qualify that as “thinking”, though i guess the definition of “thinking” is debatable


New response just dropped


for it to “hallucinate” things, it would have to believe in what it’s saying. ai is unable to think - so it cannot hallucinate


A/B testing moment


I don’t have a personal Microsoft account, and have no desire to create one more account, but am required by my organisation to use one Windows-only program for 2 hours every week. As such, I run that in a Windows VM on my computer, and this doesn’t seem like it’d be worth the effort of making an MS account