I don’t completely understand this idea myself, but it’s an evolved form of technocracy with autonomous systems. Please suggest some articles I can read up on, because in the field of politics I am quite illiterate. So it goes like this:

  • Multiple impenetrable, isolated AI expert systems that make rule-based decisions (unlike black boxes, e.g. LLMs).
  • All of them contribute to a notion, and the decision is picked much like in a distributed system, for fairness and equality.
  • Then humans are involved, but they too are educated, elected individuals, bound by clauses that stop them from gaming the system and corrupting it.
  • These human representatives can either pick from a list of decisions from the AI systems, support the already given notion, or drop it altogether. They can suggest notions and let the AI render them, but humans can’t create notions directly.
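The flow above could be sketched roughly like this. This is only my illustrative guess at the mechanics; the expert names, toy rules, and the majority-vote choice are all assumptions, not part of any real proposal:

```python
# Hypothetical sketch: isolated rule-based expert systems each judge a notion,
# and a simple majority vote picks the outcome, like a quorum in a distributed system.

def transport_expert(notion):
    # Toy rule: approve anything that doesn't cut public transit funding.
    return "approve" if notion.get("transit_funding_change", 0) >= 0 else "reject"

def health_expert(notion):
    # Toy rule: human lives first -- reject anything that raises projected harm.
    return "approve" if notion.get("projected_harm", 0) <= 0 else "reject"

def budget_expert(notion):
    # Toy rule: the notion must fit inside its budget.
    return "approve" if notion.get("cost", 0) <= notion.get("budget", 0) else "reject"

EXPERTS = [transport_expert, health_expert, budget_expert]

def decide(notion):
    votes = [expert(notion) for expert in EXPERTS]
    # Majority vote; human representatives would then confirm, amend, or drop it.
    return "approve" if votes.count("approve") > len(votes) / 2 else "reject"

notion = {"transit_funding_change": 5, "projected_harm": 0, "cost": 80, "budget": 100}
print(decide(notion))  # all three toy rules pass -> "approve"
```

Real expert systems would have far richer rule bases, but the shape is the same: independent rule-based verdicts, then a distributed-style tally.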

Benefits:

  • Generally speaking, due to the way the system will be programmed, it won’t dominate or suppress, and most of its actions will be justified by a logic that puts human lives first and human profit second.
  • No wars will break out, since it isn’t human greed that holds the power.
  • Defence against non-systemized states would be taken care of by military and similar AI expert systems, but the AI will never plan to expand or compromise a human life for the sake of offense.

Cons:

  • Security vulnerabilities could be exploited to take down the government’s cornerstone.
  • No direct representation of humans, only representation via votes on notions and suggestions to the AI.
  • It might end up in an AI-apocalypse situation, or something; I don’t know.

These thoughts are still new to me, so I typed them out as a way of thinking on paper. Hence, I am taking suggestions for this system!

tl;dr: let AI rule us, because a hard-coded, rule-based decision maker is better than a group of humans whose intents can always be masked and unclear.

  • Rhynoplaz@lemmy.world · 10 days ago

    I imagine that an AI-run government would create “optimal” laws. Which sounds alright, but optimal for what?

    Economic growth? Citizen happiness? Efficiency? If more than one, how are they weighted? Is happiness more or less important than GNP?

    AI is going to follow the guidelines that we set up for it and follow them without any nuance. I suspect asking it how to reduce the number of homeless people would probably result in suggesting execution of the homeless, and once we all decide that AI knows best, we will not question its cold and efficient solutions.
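To make the weighting question concrete, here is a toy sketch (the policy names and all numbers are invented) showing that the choice of weights alone decides which policy comes out “optimal”:

```python
# Toy illustration: the same candidate policies rank completely differently
# depending on how the objectives are weighted.
policies = {
    "subsidize_housing": {"gdp_growth": 0.2, "happiness": 0.9},
    "cut_corporate_tax": {"gdp_growth": 0.8, "happiness": 0.3},
}

def best_policy(weights):
    # Score each policy as a weighted sum of its objective values.
    def score(p):
        return sum(weights[k] * v for k, v in policies[p].items())
    return max(policies, key=score)

print(best_policy({"gdp_growth": 1.0, "happiness": 0.1}))  # cut_corporate_tax
print(best_policy({"gdp_growth": 0.1, "happiness": 1.0}))  # subsidize_housing
```

Nothing about the policies changed between the two calls; only the weights did, and they are exactly the part no one has agreed on.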

      • Archer@lemmy.world · 7 days ago

        The problem here isn’t a technical solution. You think you can develop a technical solution then optimize it, and you’re probably an engineer under 25.

        You can’t. The problem isn’t technical. It’s that the problems are so important you need good people on them. Good people know these answers can change. Good people know how and when to bend the rules or change the system when needed.

        Finding good people is always harder than any technical problem.

  • snooggums@lemmy.world · 10 days ago

    If society is solid and supports evidence-based decision making, the AI systems don’t need to be an explicit part of it, because the humans will come to the same conclusions. If there is a significant portion of malicious humans, like we have now, they will find ways to influence the outcome no matter how many barriers are in the way.

    The drawback is that humans are involved, and now have a handy AI to blame for anything they want to do. That includes going to war, because they will figure out a way to make that an outcome either by breaking or faking the process.

    • bluecat_OwO@lemmy.world (OP) · 10 days ago

      Absolutely agreed, society won’t let this system get established. But let us assume it does: my reasoning was that the chain of command would be simpler to see and everything would be transparent!

  • ChaoticNeutralCzech@feddit.org · 9 days ago

    AI has little actual understanding of the real world, and the law needs to be written to address externalities and possible loopholes.

    An AI good enough for this simply does not exist yet, and the companies that say they could bring it about have an awful ethics track record (overpromising, energy use, disregard for copyright, and the cultural impact of their models’ abuse).

  • iii@mander.xyz · 10 days ago

    AI expert systems that make rule based decisions (unlike black boxes, eg. LLMs).

    Here the proposal focuses on the “how” (decision tree over transformer models (*)). But more important to me is the “what”: during training of these models, what will be their error function, what’s their goal?

    With LLMs today the goal is simple: be the best possible parrot. With these proposed models, what will be the goal?

    The proposal does remind me of the EU commission vs parliament: the EU commission, who are unelected bureaucrats, decide what goes up for vote. EU parliament, who are elected, then votes.

    Parliament can’t even decide to undo existing laws; that proposal too has to come from the bureaucrats. I think it’s one of the most undemocratic institutions that still calls itself a democracy.

    (*) Transformers aren’t truly “black box”; their interpretation is just harder to explain to humans without a thorough understanding of algebra. But that’s secondary to the “what” issue, to me.

    • bluecat_OwO@lemmy.world (OP) · 10 days ago

      So, first of all, the “what”:

      • Simple predicate logic.
      • Who decides the logic? Open standards: anyone who has a point can draft one or share their idea.
      • I know it will be hard, time-consuming, and very tedious, but it’s just an idea; it’s not like those in power would ever relinquish it so easily.

      Secondly, I have read about EU laws and it is very challenging, but unlike that case, the system implemented here would be very fluid. A law’s validity period is determined by two things:

      1. People’s choices
      2. Its relevance, as decided by the system during polling.

      Which would still not make this system very democratic in a sense; it would really just be a distributed AI autocracy. At least the way I see it, it’s similar to local AI overlords ruling us.
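A minimal sketch of that law-validity idea, assuming invented thresholds, field names, and a toy curfew rule:

```python
from dataclasses import dataclass

# Hypothetical sketch: a law as a simple predicate plus a validity rule.
# All thresholds and field names here are invented for illustration.

@dataclass
class Law:
    name: str
    predicate: callable      # the rule itself, in simple predicate logic
    public_support: float    # share of people backing it in the latest poll
    relevance: float         # relevance score assigned by the system

    def is_valid(self):
        # A law stays in force only while people's choice and the
        # system-judged relevance both stay above (assumed) thresholds.
        return self.public_support >= 0.5 and self.relevance >= 0.3

curfew = Law("night_curfew", lambda hour: hour >= 22 or hour < 5,
             public_support=0.41, relevance=0.8)
print(curfew.predicate(23))  # the rule itself still evaluates: True
print(curfew.is_valid())     # support fell below 50% -> False
```

The interesting part is exactly what the sketch hides: who sets those thresholds and how the relevance score is computed.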

      I think there are many more standards that could be implemented to keep the system bias-proof and strongly ethical.

      One tangential question I had was:

      • If an LLM’s logic is structural and derivable, even though complex, then why don’t algorithms and models tend towards rule-based predictions like the Black-Scholes model and MDPs? Or am I just a newbie who hasn’t seen enough, out of my depth again?
      • iii@mander.xyz · 10 days ago

        Your answer to “What”, predicate logic and the tangential question are very strongly related.

        As a practical answer: both types (rule-based vs deep learning) exist; in practice, the latter performs way better.

        Philosophically, I think it’s a very good question too, to which I can only guess.

        There’s this saying that physics describes everything, from the smallest particle-wave interactions to the movement of galaxies. It’s just everything in between that it struggles with.

        My guess: one can hope the world is best modelled as a clever differential equation. It might as well be. But the differential equation needs boundary conditions, and they’re very large. Spending a lot of effort on measuring and memorizing these conditions, and then doing a simple first-order extrapolation, is more effective than trying to find the equation.
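As a toy illustration of that guess (process and numbers invented): pretend the true law is quadratic, store only measurements, and do a first-order extrapolation from the last two points:

```python
# Sketch of the "measure and extrapolate" idea: instead of knowing the true
# law (here, quadratic growth), just store recent measurements and do a
# first-order (linear) extrapolation from the last two points.
def true_process(t):
    return t ** 2  # the "clever differential equation" we don't actually know

measurements = [true_process(t) for t in range(5)]  # [0, 1, 4, 9, 16]

def extrapolate(measurements):
    # First-order extrapolation: next value = last value + last observed slope.
    slope = measurements[-1] - measurements[-2]
    return measurements[-1] + slope

print(extrapolate(measurements))  # predicts 23; the true next value is 25
```

Locally wrong, but cheap, and it gets better the more densely you measure; that is the trade-off being described.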

        • bluecat_OwO@lemmy.world (OP) · 10 days ago

          I understand, and I think your last paragraph is very poetic! I agree with you partially, but I think in certain cases it’s better to find the one general case the solution fits and add the edge cases as it grows.

          But putting the question of model selection aside, do you think this system would be practical, theoretically of course?

          • iii@mander.xyz · 10 days ago

            I like it as sci-fi: the AI gods on the hill speak through elect messengers. It’s a Greek gods and oracles situation.

            However, I must agree with what others said: humans will manipulate whomever and whatever to enforce their desires.

            So the only way to make sure the machines can survive that is for them to be able to do the same. The problem being, they might be better at it.

  • partial_accumen@lemmy.world · 10 days ago

    Multiple impenetrable, isolated AI expert systems that make rule based decisions (unlike black boxes, eg. LLMs).

    Two problems. First, AIs give answers built from models fed by training data. Where are you going to get this training data? How will you ensure that the data itself doesn’t have bias baked into it? This has been a problem already, because the data we’ve fed into AI models for training reflects our human (many times misogynistic and racist) notions. As an example: let’s say your AI model is designed to select the best person for President of the United States. The training data we’d have is: all past US presidents.

    As we have yet to elect a woman President, one obvious criterion the model will incorporate is: “candidate must be male”.

    Obviously we know that’s not the case, but all of our past behavior has shown this is a requirement for the Presidency, and that is all the model knows.
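That failure mode can be shown with a deliberately trivial “model” (the data and attribute names are invented): any attribute shared by every historical example gets baked in as a requirement, and the model cannot tell a real rule from a historical accident:

```python
# Toy version of the training-data bias argument: a "model" that derives
# candidate requirements purely from historical examples.
past_presidents = [
    {"gender": "male", "age_over_35": True},
    {"gender": "male", "age_over_35": True},
    {"gender": "male", "age_over_35": True},
]

def learned_requirements(examples):
    # Any attribute value shared by every historical example
    # gets baked in as a "requirement".
    return {k: examples[0][k] for k in examples[0]
            if all(e[k] == examples[0][k] for e in examples)}

print(learned_requirements(past_presidents))
# {'gender': 'male', 'age_over_35': True} -- the legal requirement and the
# historical accident look identical to the model
```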

    A second problem, even after the model is built, is introduced bias. That is, the operator of the model, especially of a non-black-box AI, can change the weights of which factors are considered more or less important in the final answer. Show me who, in your AI technocracy, controls the introduced bias, and I’ll show you who is actually making the decisions.
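A toy sketch of this second problem (candidates and numbers invented): the data never changes, only the operator-set weights, yet the “answer” flips:

```python
# Sketch: the same candidate scores produce opposite winners depending on
# operator-set weights, so whoever controls the weights controls the outcome.
candidates = {
    "A": {"experience": 0.9, "donor_approval": 0.2},
    "B": {"experience": 0.4, "donor_approval": 0.9},
}

def winner(weights):
    # Pick the candidate with the highest weighted score.
    return max(candidates,
               key=lambda c: sum(weights[f] * v for f, v in candidates[c].items()))

print(winner({"experience": 1.0, "donor_approval": 0.0}))  # A
print(winner({"experience": 0.2, "donor_approval": 1.0}))  # B
```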

    contribute to a notion and the decision will be picked much like in a distributed system, for fairness and equality.

    Who determines what “fair” and “equal” are? We have seen there are certain groups trying to push the stupid notion that white people should be in charge. If these morons are included in the group that decides what “fair” or “equal” is, then the resulting AI answers will be just as racist.

    Then humans are involved, but they too are educated, elected individuals and **some clauses that stop them from gaming the system and corrupting it**.

    This bolded part is the absolute hardest part, and your whole description kind of handwaves it away. The whole of humanity has been looking for a system of leadership that is incorruptible. We haven’t found one yet, and your clauses here would be the magic in any system, irrespective of whether AI is involved.

    I appreciate you seeing a problem and trying to propose a solution. Don’t let me stop you, but incorporate my feedback and others into your thoughts and see where it takes you. I’d love for you to find something we could use.