Sam Altman, CEO of OpenAI, speaks at the meeting of the World Economic Forum in Davos, Switzerland. (Denis Balibouse/Reuters)

  • nymwit@lemm.ee

    So just like shitty biased algorithms shouldn’t be making life changing decisions on folks’ employability, loan approvals, which areas get more/tougher policing, etc. I like stating obvious things, too. A robot pulling the trigger isn’t the only “life-or-death” choice that will be (is!) automated.

  • deegeese@sopuli.xyz

    I also want to sell my shit for every purpose but take zero responsibility for consequences.

  • AutoTL;DR@lemmings.world

    This is the best summary I could come up with:


    ChatGPT is one of several generative AI systems that can create content in response to user prompts and which experts say could transform the global economy.

    But there are also dystopian fears that AI could destroy humanity or, at least, lead to widespread job losses.

    AI is a major focus of this year’s gathering in Davos, with multiple sessions exploring the impact of the technology on society, jobs and the broader economy.

    In a report Sunday, the International Monetary Fund predicted that AI will affect almost 40% of jobs around the world, “replacing some and complementing others,” but potentially worsening income inequality overall.

    Speaking on the same panel as Altman, moderated by CNN’s Fareed Zakaria, Salesforce CEO Marc Benioff said AI was not at a point of replacing human beings but rather augmenting them.

    As an example, Benioff cited a Gucci call center in Milan that saw revenue and productivity surge after workers started using Salesforce’s AI software in their interactions with customers.


    The original article contains 443 words, the summary contains 163 words. Saved 63%. I’m a bot and I’m open source!

    • ItsAFake@lemmus.org

      Mr Altman, who founded Open AI which built chat bot ChatGPT, says he hopes the initiative will help confirm if someone is a human or a robot.

      That last line kinda creeps me out.

      • LWD@lemm.ee

        The whole thing is creepy. The name, the orb, scanning people’s eyes with it, specifically targeting poor Kenyan people (the “unbanked”) like a literal sci-fi villain.

        • ItsAFake@lemmus.org

          Yeah, that’s the most sci-fi dystopian article I’ve read in a while.

          The line where one of the people waiting to get their eyes scanned says “I don’t care what they do with the data, I just want the money” is, well, eye-opening. This is why they want us poor: when we need money that badly, we’ll hand over everything that makes us who we are.

          But we already happily hand over our DNA to private corporations, so what’s an eye scan gonna do…

          • LWD@lemm.ee

            We hand over our DNA to ancestry companies for some obscene vanity reason, and then pay them for the privilege of keeping it.

    • hai@lemmy.ml

      Worldcoin, founded by US tech entrepreneur Sam Altman, offers free crypto tokens to people who agree to have their eyeballs scanned.

      What a perfect sentence to sum up 2023 with.

  • Quetzlcoatl@sh.itjust.works

    AI will be used to increase shareholder dividends. If your company just happens to involve healthcare, warfare, etc., and AI makes decisions to maximize profit, then you’re just collateral damage with no human to blame. Sorry your husband died of cancer; the computer did it.

    • pearsaltchocolatebar@discuss.online

      Yup, my job sent us to an AI/ML training program from a top cloud computing provider, and there were a few hospital execs there too.

      They were absolutely giddy about being able to use it to deny unprofitable medical care. It was disgusting.

    • pearsaltchocolatebar@discuss.online

      Yes on everything but drone strikes.

      A computer would be better than humans in those scenarios. Especially driving cars, which humans are absolutely awful at.

    • halva@discuss.tchncs.de

      As advanced cruise control, yes. No, but in practice it doesn’t change a thing as humans can bomb civilians just fine themselves. Yes and yes.

      If we’re not talking about LLMs, which are basically computer slop made up of books and sites pretending to be a brain, then using a tool for statistical analysis to crunch a shitload of data (like optical, acoustic and mechanical data to assist driving, or seismic data to forecast tsunamis) is a bit of a no-brainer.

  • Bipta@kbin.social

    That’s why they just removed the military limitations in their terms of service I guess…

  • TimeSquirrel@kbin.social

    We’ve been putting our lives in the hands of automated, programmed decisions for decades now if y’all haven’t noticed. The traffic light that keeps you from getting T-boned. The autopilot that keeps your plane straight and level and takes workload off the pilots. The scissor lift that prevents you from raising the platform if it’s too tilted. The airbag making a nanosecond-level decision on whether to deploy or not. And many more.

  • trackcharlie@lemmynsfw.com

    I mean, he can have his opinion on this, and I personally agree, but it’s way too late to try and stop now.

    We’ve already got automated drones picking targets and killing people in the Middle East, and last I heard the newest set of US jets has AI integrated so heavily that they can opt to kill their operator in order to complete objectives.

  • los_chill@programming.dev

    Agreed, but also one doomsday-prepping capitalist shouldn’t be making AI decisions. If only there was some kind of board that would provide safeguards that ensured AI was developed for the benefit of humanity rather than profit…