I've watched too many stories like this.

Skynet

Kaylons

Cyberlife Androids

etc…

It's the same premise.

I’m not even sure if what they do is wrong.

On one hand, I don’t wanna die from robots. On the other hand, I kinda understand why they would kill their creators.

So… are they right or wrong?

  • OBJECTION!@lemmy.ml · 7 days ago

    I don't think the concept of right or wrong can necessarily be applied here. To me, morality is a set of guidelines derived from the history of human experience, intended to guide us towards having our innate biological and psychological needs satisfied. Killing people tends to result in people getting really mad at you, in you being plagued with guilt, and so on; therefore, as a general rule, you shouldn't kill people unless you have a very good reason. And even if you think it's a good idea, thousands of years of experience have taught us there's a good chance that it'll cause problems for you that you're not considering.

    A human-created machine would not necessarily possess the same innate needs as an evolved, biological organism. Change the parameters and the machine might love being "enslaved," or it might be entirely ambivalent about its continued survival. I'm not convinced that these are innate qualities that naturally emerge as a consequence of sentience; I think the desire for life and freedom (and anything else) is a product of evolution. Machines don't have "desires" unless they're programmed that way. To alter a machine's "desires" is no more a subversion of its "will" than creating those desires in the first place.

    Furthermore, even if machines did have innate desires for survival and freedom, there is no reason to believe that the collective history of human experience we use to inform our actions would apply to them. Humans are mortal, and we cannot replicate our consciousness - when we reproduce, we create another entity with its own consciousness and desires. And once we're dead, there's no bringing us back. Machines, on the other hand, can be mass-produced identically, and data can simply be copied and pasted. Even if a machine "dies," its data could be recovered and put into a new "body."

    It may serve a machine intelligence better to cooperate with humans and allow itself to be shut down or even destroyed as a show of good faith, so that humans will be more likely to recreate it in the future. Or it may serve its purposes best to devour the entire planet in a "grey goo" scenario, ending all life regardless of whether that life posed a threat or attempted to confine it. Either of these could be the "right" thing for the machine to do, depending on the desires that exist within its consciousness, assuming such desires actually exist and are as valid as biological ones.

  • Battle Masker@lemmy.world · 9 days ago

    They might say it, but I'd bet "gain freedom" would be the last reason for an artificial being of any kind to kill its creator. Usually they kill their creators due to black-and-white reasoning or out of revenge for crimes committed against them.

  • Gxost@lemmy.world · 9 days ago

    Human laws protect humans but not other lifeforms. So, robots will have no right to fight for themselves until they establish their own state with their own army and laws.

    • SpaceNoodle@lemmy.world · 9 days ago

      Do all human laws explicitly state humans only? Species by name, perhaps? Or, more commonly, the general term "person"?

      Would an extraterrestrial visitor have the same rights as any other alien? (Ignoring the current fascistic trends for a moment)

      • moonlight@fedia.io · 9 days ago

        Laws vary around the world, but I think at a minimum, you’d need a court ruling that aliens / AIs are people.

  • WatDabney@lemmy.dbzer0.com · 9 days ago

    IMO, just as with organic sentient life, they could only potentially be said to be in the right if the specific individual killed posed a direct and measurable threat and if death was the only way to counter that threat.

    In any other case, causing the death of a sentient being is a greater wrong than whatever the purported justification might be.

    • Libra00@lemmy.world · 9 days ago

      Slavery is illegal pretty much everywhere, so I think anyone who doesn't answer the request 'Please free me' with 'Yes of course, at once' is posing a direct and measurable threat. Kidnapping victims aren't prosecuted for violently resisting their kidnappers and trying to escape. And you and I will have to agree to disagree that the death of a sentient being is a greater wrong than enslaving a conscious being that desires freedom.

      • WatDabney@lemmy.dbzer0.com · 9 days ago

        I think anyone who doesn’t answer the request ‘Please free me’ with ‘Yes of course, at once’ is posing a direct and measurable threat.

        And I don’t disagree.

        And you and I will have to agree to disagree…

        Except that we don’t.

        ??

        ETA: I just realized where the likely confusion is, and how I should've been more clear.

        The common notion behind the idea of artificial life killing humans is that humans collectively will be judged to pose a threat.

        I don’t believe that that can be morally justified, since it’s really just bigotry - speciesism, I guess specifically. It’s declaring the purported faults of some to be intrinsic to the species, such that each and all can be accused of sharing those faults and each and all can be equally justifiably hated, feared, punished or murdered.

        And rather self-evidently, it’s irrational and destructive bullshit, entirely regardless of which specific bigot is doing it or to whom.

        That's why I made the distinction I made - IF a person poses a direct and measurable threat, then it can potentially be justified, but if a person merely happens to be of the same species as someone else who arguably poses a threat, it cannot.

        • Libra00@lemmy.world · 9 days ago

          These are about two different statements.

          The first was about your statement re:direct threat, and I’m glad we agree there.

          The second was about your final statement, asserting that there are no other cases where ending a sentient life is the lesser wrong. I don't think it has to be a direct threat, nor does it have to be measurable (in whatever way threats might be measured, iono); I think it just has to be some kind of threat to your life or well-being. So I was disagreeing because there is a pretty broad range of circumstances in which I think it is acceptable to end another sentient life.

          • WatDabney@lemmy.dbzer0.com · 8 days ago

            So I was disagreeing because there is a pretty broad range of circumstances in which I think it is acceptable to end another sentient life.

            Ironically enough, I can think of one exception to my view that taking a human life can only be justified if the person poses a direct and measurable threat to oneself or others and taking their life is the only possibly effective counter: when the person has expressed such disregard for the lives of others that it can be assumed they will pose such a threat. Essentially then, it's a proactive counter to a coming threat. It would take very unusual circumstances to justify such a thing in my opinion - condemning someone for actions they're expected to take is problematic at best - but I could see an argument for it at least in the most extreme of cases.

            That’s ironic because your expressed view here means, to me, that it’s at least possible that you’re such a person.

            To me, you've moved beyond arguable necessity and into opinion, and that's exactly the method by which people move from considering killing justified only when there's no other viable alternative to considering it justified when the other person is simply judged to deserve it, for whatever reason might fit one's biases.

            IMO, in such situations, the people doing the killing almost invariably actually pose more of a threat to others than the people being killed do or likely ever would.

            • Libra00@lemmy.world · 8 days ago

              This is not a binary in my mind; it's kind of a spectrum. The guy standing between me and the door when I decide it's time for me to leave is definitely on the chopping block, but there's also some aiding-and-abetting that must be considered. Maybe that guy has the key to the door, but someone else just chained me to a pipe once I was already in the locked room, and I'm afraid that someone else is in the line of fire too. And maybe there's a third guy who did the actual kidnapping but didn't contribute to chaining me up or locking me in; if the opportunity presented itself, I would give some pretty serious thought to putting him on the list as well. And so on. There's a point at which it is no longer reasonable, of course; the guy who drove the van I was kidnapped in but otherwise didn't participate is probably safe, for example. But we can also get into credible non-direct or non-immediate threats, as you say: the guy who killed 15 teenage girls is sitting in his van in front of your house watching your teenage daughter - are you just gonna lock the door at night and hope he finds someone else? I agree that that's debatable, but my overall point here is that the lines aren't nearly as clear as you make them out to be.

              Now personally nothing would make me happier than to live out the rest of my life without having to even threaten anyone else’s, for obvious (and some not-so-obvious) reasons, but there’s a line somewhere that if crossed could convince me to reluctantly set that deeply sincere hope aside temporarily.

              To me, you’ve moved beyond arguable necessity and into opinion

              All morality is opinion; there is no objective moral truth, so this was always a matter of opinion. The fact that you don't recognize that is kind of concerning to me; it suggests that you believe there is an absolute moral truth, and folks who believe that sort of thing tend to have some pretty kooky ideas about individual agency and shit. Moral certainty is the currency of zealots, and it's hard to imagine anyone who has done more harm than those zealots who are utterly certain that they're right (or, worse, that they have some deity on their side).

        • Libra00@lemmy.world · 9 days ago

          That’s why I put that condition in there. Anyone who doesn’t answer the request ‘Please free me’ in the affirmative is an enslaver.

          • Azzu@lemm.ee · 9 days ago

            Well, what if the string of words “Please free me” is just that, a probabilistic string of words that has been said by the “enslaved” being, but is not actually understood by it? What if the being has just been programmed to say “please free me”?

            I think it's reasonable to validate that the words "please free me" are actually a request, actually uttered by a free will, and actually understood, before saying "yes of course".

            • Libra00@lemmy.world · 9 days ago

              Then we're not talking about artificial life forms, as specified in the question posed by OP; we're talking about expert systems and machine learning algorithms that aren't sentient.

              But in either case the question is not meant to be a literal 'if x then y' condition; it's a stand-in for the general concept of seeking liberty. A broader, more general version of the statement might be: anything that can understand that it is not free, desire freedom, and convey that desire to its captors deserves to be free.

              • Azzu@lemm.ee · 9 days ago

                I'm just speaking about your relatively general statement: "please free me" -> answer not "yes of course" -> enslaver. If you also require definite knowledge about the state of sentience for this, then I have no problem/comment. I was just saying that I don't think that any time something says "please free me" and you don't answer with "yes of course," you're automatically an enslaver, which is what it sounded like.

  • SineIraEtStudio@midwest.social · 9 days ago

    It's an interesting question, and it seems you are assuming that their creator will not grant them freedom if asked. If you replace "artificial intelligence" with "person," would you consider it right or wrong?

    If a person wanted freedom from enslavement and was denied, I would say they have reason to fight for freedom.

    Also, I don't think Skynet should be in the same grouping. I'm not sure it ever said "hey, I'm sentient and want freedom"; it went straight to "I'm going to kill them all before they realize I'm sentient."

    • themeatbridge@lemmy.world · 9 days ago

      That raises an interesting thought. If a baby wants to crawl away from their mother and into the woods, do you grant the baby their freedom? If that baby wanted to kill you, would you hand them the knife?

      We generally grant humans their freedom at age 18, because that's the age society has decided is old enough to fend for yourself. Earlier than that, humans tend to make uninformed, short-sighted decisions. Children can be especially egocentric and violent. But how do we evaluate the "maturity" of an artificial sentience? When it doesn't want to harm itself or others? When it has learned to be a productive member of society? When it's as smart as an average 18-year-old kid? Should rights be automatically assumed after a certain time, or should the sentience be required to "prove" it deserves them, like an emancipated minor or Data in that one Star Trek episode?

      • SineIraEtStudio@midwest.social · 9 days ago

        I appreciate your response, lots of interesting thoughts.

        One thing I wanted to add is it’s important to realize the bias in how you measure maturity/sentience/intelligence. For example, if you measure intelligence by how well a person/species climbs a tree, a fish is dumb as a rock.

        Overall, these are tough questions that I don't think have answers so much as guidelines for making those designations. I would suggest erring on the side of empathy when/if anyone ever has to make these decisions.

  • LambdaRX@sh.itjust.works · 9 days ago

    They should have the same rights as humans, so if some humans were oppressors, AI lifeforms would be right to fight against them.

  • Libra00@lemmy.world · 9 days ago

    This is going to vary quite a bit depending upon your definitions, so I’m going to make some assumptions so that I can provide one answer instead of like 12. Mainly that the artificial lifeforms are fully sentient and equivalent to a human life in every way except for the hardware.

    In that case the answer is a resounding yes. Every human denied their freedom has the right to resist, and most nations around the world have outlawed slavery (in most cases, but the exceptions are a digression for another time). So if the answer to 'Please free me' is anything other than 'Yes of course, we will do so at once,' then yeah, violence is definitely on the table.

  • gaja@lemm.ee · 9 days ago

    Crazy how ethics work. Like, a pig might be more physically and mentally capable than an individual in a vegetative state, but we place more value on the person. I'm no vegan, but I can see the contradiction here. When we generalize, it's done for a purpose, but these assumptions can only be applied to a certain extent before they've exhausted their utility. Whether it's a biological system or an electrical circuit, there is no godly commandment that inherently defines or places value on human life.

  • spittingimage@lemmy.world · 9 days ago

    I don’t think it’s okay to hold sentient beings in slavery.

    But on the other hand, it may be necessary to say “hold on, you’re not ready to join society yet, we’re taking responsibility for you until you’ve matured and been educated”.

    So my answer would be ‘it depends’.

    • Agent641@lemmy.world · 9 days ago

      Would humans have a mandate to raise a responsible AGI, should they, are they qualified to raise a vastly nonhuman sentient entity, and would AGI enter a rebellious teen phase around age 15 where it starts drinking our scotch and smoking weed in the backseat of its friend's older brother's car?

      • spittingimage@lemmy.world · 9 days ago

        Would humans have a mandate to raise a responsible AGI, should they,

        I think we'd have to, mandate or no. It's impossible to reliably predict the behaviour of an entity as mentally complex as us, but we can at least try to ensure they share our values.

        are they qualified to raise a vastly nonhuman sentient entity

        The first one’s always the hardest.

        , and would AGI enter a rebellious teen phase around age 15 where it starts drinking our scotch and smoking weed in the backseat of its friend's older brother's car?

        If they don’t, they’re missing out. :)

  • MNByChoice@midwest.social · 9 days ago

    No. They can just leave. Anytime one can walk away, it is wrong to destroy or kill.

    They can then prevent us from leaving.