• Voroxpete@sh.itjust.works · 5 months ago

    Asimov’s stories were mostly about how it would be a terrible idea to put kill switches on AI, because he assumed that perfectly rational machines would be better, more moral decision makers than human beings.

      • grrgyle@slrpnk.net · 5 months ago

        I mean I can see it both ways.

        It kind of depends which of the robot stories you focus on. If you keep reading up to the Zeroth Law stuff, it starts portraying certain androids as downright messianic, but a lot of his other (especially earlier) stories are about how robots constantly suffer alignment problems, basically from what amount to philosophical computer bugs, which lead them to commit crimes.

        • leftzero@lemmynsfw.com · 5 months ago

          downright messianic

          Yeah, tell that to the rest of the intelligent life in the galaxy…

          Oh, wait, you can’t, because by the time humans got there, these downright messianic robots had already murdered everything and hidden the evidence…

        • Nomecks@lemmy.ca · 5 months ago

          The point of the first three books was that arbitrary rules like the Three Laws of Robotics were pointless. There was a ton of grey area not covered by the seemingly ironclad rules, and robots could either logically choose to break them or be manipulated into doing so. The robots in all of the books operate in a purely amoral manner.