  • If your idea of provocation matches the article you provided, you’re going to have to do better.

    There were in fact two main U.S. provocations. The first was the U.S. intention to expand NATO to Ukraine and Georgia in order to surround Russia in the Black Sea region by NATO countries (Ukraine, Romania, Bulgaria, Turkey, and Georgia, in counterclockwise order).

    • Surely even Russia understands why countries would want to join NATO, and why NATO would want to expand, when Russia attacks countries that are not members.

    The second was the U.S. role in installing a Russophobic regime in Ukraine by the violent overthrow of Ukraine’s pro-Russian President, Viktor Yanukovych, in February 2014.

    • Surely even Russia understands why people would want a Russophobic government when Russia attacks people who are not violent. The protests against Yanukovych were peaceful until his regime turned on the protesters.

    But why am I wasting my time arguing with you when you’re obviously just here to carry water for Russia?

  • IMO it’s even worse than that, at least from what I gather from the AI/Singularity communities I follow. For them, AGI is the end goal: an AI capable of creative thought and of deduction far beyond humanity’s. The company that owns that suddenly has the means to solve all manner of problems that are slowing down technological advancement. Obviously, owning that would be worth trillions.

    However, it’s really hard to see through the smoke that the Altmans etc. are putting up: how much of it is genuine prediction, and how much is fairy tales they’re telling to attract more investment?

    And I’d have a hard time believing it isn’t mostly the latter, because while LLMs have made some pretty impressive advancements, they still can’t hold a specialized discussion about much of anything without hallucinating answers. I have a test I run on each new generation of LLMs: I interview them about a book I’m relatively familiar with. Even the newest ChatGPT model still makes up a ton of shit, often contradicting its own earlier answers in the same thread, all while remaining absolutely confident that it knows the source material.

    Honestly, I’ll believe they’re capable of advancing AI when we get an AI that can say ‘I actually am not sure about that, let me do a search…’ or something like that.