• rottingleaf@lemmy.zip
    5 months ago

    Sure, very sophisticated LLMs might get it right some of the time, or even a lot of the time on very specific topics with very good training data. But their accuracy cannot be guaranteed unless you fact-check 100% of the output.

    You can only guarantee what you answer for.

    Since they have the power to make it so, they own the good part and disown the bad part.

    It’s warfare logic: the collateral damage of a FAB-1500 is high, but it makes even the imps in hell tremble when dropped.

    And to be treated more gently you need a different power balance. Either make them answer to you, or cut them out. You can’t cut out a bombardment, though, and with the TRON project in Japan, MS specifically has already shown that it is willing and able to use lobbying to force itself onto you.

    Of course, the end goal of these schemes is to fire as much of the human staff as possible, so ultimately there is nobody left to actually do the review. And whatever emaciated remains of management are left understand neither how the machine works nor how its output is generated.

    Reminiscent of the Soviet “they imitate pay, we imitate work” thing. Or medieval kings debasing their coinage by reducing the metal content. The modern Web is not very transparent, and its income is ad-driven, so it’s not immediately visible that generated bullshit isn’t worth nearly as much as something written by a human.

    What I’m trying to say is that, given how interconnected and amortized it is, this Web is going down as a whole, not just the people poisoning it for short-term income.

    This is intentional: they don’t want to go down alone. Where more insularity exists, such people go down and others don’t, which is why they’ve almost managed to kill that insularity. Things will still work out the same good old evolutionary way, just slower.