  • Hey, all of those people knew in advance what would happen. I know people working for companies under sanctions who were contributing to the kernel before the war, and all of them knew that at some point they might be banned from that. They still aren't, by the way; their contributions just aren't merged without extra review.
    Russian sympathisers are trying to spin it as a sudden act of russophobia out of the blue, but it's absolutely anything but. When you work for Putin and his war, you shouldn't be surprised that people don't trust you implicitly.
  • Despite everything, Telegram is actually great. It's only bloated if you actually use the extra features; the client is open source, with native apps for every platform, and it's very lightweight compared to other messengers and even to some dedicated solutions. It sends files peer-to-peer when both devices are on the same network, so you don't have to care about traffic, but it also allows on-demand downloads, so the files can still be fetched outside your network if you want.
    Alternatively, kdeconnect, but I find myself using Telegram instead 9 times out of 10, even though I have both installed. If you'd rather script that workflow, see the sketch below.
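    A minimal sketch of pushing a file to your own account through the Bot API's sendDocument method (https://core.telegram.org/bots/api#senddocument). The token and chat id are placeholders you'd get from @BotFather and your own chat, and note this goes through Telegram's cloud rather than the clients' same-network transfer:

```python
import requests

BOT_TOKEN = "123456:ABC-placeholder"  # placeholder: issued by @BotFather
CHAT_ID = "123456789"                 # placeholder: your own chat id

def send_file(path: str) -> None:
    """Upload a file to the chat via the Bot API's sendDocument method."""
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendDocument"
    with open(path, "rb") as f:
        resp = requests.post(
            url,
            data={"chat_id": CHAT_ID},
            files={"document": f},  # multipart upload, as the Bot API expects
            timeout=60,
        )
    resp.raise_for_status()

send_file("vacation-photos.zip")
```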
  • Yeah, the scary thing about LLMs is that by their very nature they sound convincing, and it's very easy to fall into that trap: we humans are hardwired to mistake the ability to talk smoothly for intelligence, so when computers started speaking in complete sentences and holding the immediate context of a conversation, we immediately decided we had a thinking machine and started believing it.
    The worst thing is, there are legit uses for all the machine learning stuff, and for LLMs in particular, so we can't just throw it all out the window; we will have to collectively adapt to this very convincing randomness machine that is now here all the time. The toy sketch below shows what I mean by that.
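    To make the "randomness machine" point concrete, here's a toy next-token step with made-up logits (not any real model): the model scores candidate continuations and samples one by probability, and nothing in the loop checks whether the output is true, only whether it's likely:

```python
import math
import random

# Made-up scores for the continuation of "The capital of France is ..."
logits = {"Paris": 4.1, "Lyon": 2.3, "Mars": 0.7}

def softmax(scores: dict[str, float]) -> dict[str, float]:
    """Turn raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
# Usually samples "Paris" -- but sometimes a fluent, confident wrong answer.
token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", token)
```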
  • As someone with degrees and decades of experience, I urge you not to use it for that. It's a cleverly disguised randomness machine: it will give you incorrect information that is indistinguishable from truth, because "truth" is never a criterion it can use, while being convincing is. It will seed those untruths in you, and unlearning the bad practices you pick up at the beginning might take years and cost you a career. And since you're just starting out, you have no way to tell bullshit from truth as long as the final result seems to work, and that's the worst place for bullshit to hide.
    The field is already very accessible to anyone who wants to learn it; the amount of guides, examples, teaching courses, and very useful youtube videos with thick Indian accents is enormous, and most of them at least try to self-correct, while an LLM actively doesn't; in fact, it tries to do the opposite.
    Best case scenario you're learning inefficiently, worst case scenario you aren't learning at all. The snippet below is the kind of thing I mean by "seems to work".
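    A hypothetical illustration (mine, not something an LLM actually produced): the classic mutable-default-argument pitfall in Python. It passes a quick test, which is exactly how a beginner ends up internalizing the pattern:

```python
# Looks fine, and the first call even works as expected...
def add_tag(tag, tags=[]):          # the default list is created ONCE, at definition
    tags.append(tag)
    return tags

print(add_tag("new"))      # ['new']            -- "seems to work"
print(add_tag("urgent"))   # ['new', 'urgent']  -- state silently leaks between calls

# The idiomatic fix: use None as a sentinel and build a fresh list per call.
def add_tag_fixed(tag, tags=None):
    tags = [] if tags is None else tags
    tags.append(tag)
    return tags
```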