• bh11235@infosec.pub

    This is an issue that has plagued the machine learning field since long before this latest generative AI craze. Decision trees you can understand, SVMs and Naive Bayes too, but the moment you get into automatic feature extraction and RBF kernels and stuff like that, it becomes difficult to understand how the verdicts issued by the model relate to the real world. Having said that, I’m pretty sure GPTs are even more inscrutable and have made the problem worse.
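    As a rough illustration (a minimal sketch using scikit-learn; the toy dataset and hyperparameters are arbitrary), compare how much of each fitted model you can actually read:

    ```python
    # Minimal sketch (scikit-learn, toy data): a decision tree's learned rules
    # can be printed and read directly, while an RBF-kernel SVM's fitted state
    # is just opaque arrays of support vectors and coefficients.
    from sklearn.datasets import load_iris
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)

    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(export_text(tree))           # human-readable if/else rules

    svm = SVC(kernel="rbf").fit(X, y)
    print(svm.support_vectors_.shape)  # just a pile of numbers
    print(svm.dual_coef_)              # weights with no obvious real-world meaning
    ```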

    • btaf45@lemmy.world

      This may be a dumb question, but why can’t you attach a debugger and step through the program to see why it branches the way it does?

      • msage@programming.dev

        Because it doesn’t have branches, it has neurons - and A LOT of them.

        Each of them is tuned by the input data, which is a long and expensive process.

        In the end, you hope your model has picked up on real patterns and isn’t just doing stuff at random.

        But all you see is just weights on countless neurons.

        Not sure I’m describing it correctly though.
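
        Roughly what that looks like in practice (a minimal sketch with PyTorch; the tiny architecture here is made up):

        ```python
        # Minimal sketch (PyTorch, made-up tiny network): after training, all you
        # can inspect directly are tensors of floating-point weights.
        import torch.nn as nn

        model = nn.Sequential(
            nn.Linear(10, 32),  # 10 inputs -> 32 hidden units
            nn.ReLU(),
            nn.Linear(32, 2),   # 2 output classes
        )

        for name, param in model.named_parameters():
            print(name, tuple(param.shape))
            # e.g. "0.weight (32, 10)" -- hundreds of numbers, none of them labeled

        # A real model has millions or billions of these, and explaining a verdict
        # means tracing arithmetic over all of them, not stepping through branches.
        ```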

      • ViscloReader@lemmy.world

        Because of the aforementioned automatic feature extraction. In this case, the algorithm itself chooses which features are relevant when making decisions. The problem is that those features are almost impossible to decipher, since they are often just lists of numbers.
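
        For example (a sketch with PyTorch; the layer sizes and input are arbitrary), the “features” the model extracts for a given input are just an intermediate vector of floats:

        ```python
        # Minimal sketch (PyTorch, arbitrary sizes): the extracted "features" are
        # an intermediate vector of numbers with no attached meaning.
        import torch
        import torch.nn as nn

        feature_extractor = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
        classifier = nn.Linear(32, 2)

        x = torch.randn(1, 10)           # one (random) input sample
        features = feature_extractor(x)  # the learned representation of that sample
        print(features)                  # 32 floats -- the inscrutable "list of numbers"
        print(classifier(features))      # the verdict is computed from those numbers
        ```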

      • bh11235@infosec.pub

        I do exactly this kind of thing for my day job. In short: reading a syntactic description of an algorithm written in assembly language is not the equivalent of understanding what you’ve just read, which in turn is not the equivalent of having a concise and comprehensible logical representation of what you’ve just read, which in turn is not the equivalent of understanding the principles according to which the logical system thus described will behave when given various kinds of input.

      • logicbomb@lemmy.world

        People are able to explain themselves, and some AI can too, with similarly poor results.

        I’m reminded of one of Asimov’s stories about a robot whose job was to aim an energy beam at a collector on Earth.

        Upon talking to the robot, they realized that it was less of a job to the robot and more of a religion.

        The inspector freaked out because this meant that the robot wasn’t performing to specs.

        Spoilers: Eventually they realized that the robot was doing the job either way, and they just let it do it for whatever reason.