I can see some minor benefits - I use it for the odd bit of mundane writing and some of the image creation stuff is interesting, and I knew that a lot of people use it for coding etc - but mostly it seems to be about making more cash for corporations and stuffing the internet with bots and fake content. Am I missing something here? Are there any genuine benefits?

  • CrayonRosary@lemmy.world
    9 months ago

    Don’t limit your thoughts to just generative AI, which is what you are talking about. Chatbots and media generation aren’t the only uses for AI (by which I mean any trained neural network that can do some sort of task).

    Motor skills

    AI can learn to solve the marble maze “Labyrinth” much, much faster than a human, and then speedrun it faster than any human ever has. Six hours. That’s how long it took a brand new baby AI to beat the human world record, set by a human who had been practicing hand-eye coordination and fine motor control all their life, with a brain which evolved over millions of years to do exactly that.

    No special code needed. The AI didn’t need to be told how balls roll or knobs turn, or how walls block the ball. It learned all of that on the fly. The only special code it had was optical and mechanical: it knew it had “hands” in the form of two motors, and it knew how to use them. It also had eyes (a camera) and access to a neural network computer vision system. At one point the AI started taking illegal shortcuts, and they had to instruct it to follow the prescribed path, which is printed on the maze.
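
    For readers curious what “learning on the fly” looks like in code, here is a minimal, hypothetical sketch of that kind of setup: a reinforcement-learning agent that only sees camera frames and only controls two motors. The “LabyrinthEnv-v0” environment name is made up for illustration; the real project used its own hardware rig, but with an off-the-shelf library like Stable-Baselines3 the training loop is roughly this shape.

    ```python
    # Hypothetical sketch: reward-driven learning of a marble-maze controller.
    # "LabyrinthEnv-v0" is a made-up stand-in for a real camera + two-knob-motor setup.
    import gymnasium as gym
    from stable_baselines3 import PPO

    env = gym.make("LabyrinthEnv-v0")          # observation: camera frame; action: two motor commands
    model = PPO("CnnPolicy", env, verbose=1)   # no rules about balls, walls, or gravity are coded in
    model.learn(total_timesteps=1_000_000)     # the agent discovers the physics from reward alone

    # After training: roll the learned policy out on the maze.
    obs, _ = env.reset()
    done = False
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
    ```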

    Robots could work in factories, mines, and other dangerous, dehumanizing jobs. Why do we want workers to behave like robots in a factory just to get paid? Replace them with actual robots and let the people perform a human job like customer service.

    Think of a robot that has actual hands and arms, feet and legs, and various “muscles”. We have it learn its motor control using a very accurate physics system on a computer that simulates its body. This allows the AI to learn at much faster speeds than by controlling a real robot. We can simulate thousands of robots in parallel and run the simulations much faster than real time. Train it to learn how to use its limbs and eyes to climb over obstacles, open doors, and detain or kill people. We could replace police with them. Super agile robot cops with no racial bias or other prejudices. Arresting people and recording their crimes. Genuine benefit.

    Computer Vision

    AI can be trained to recognize objects, abstract shapes, people’s individual faces, emotions, people’s individual body shape, mannerisms, and gait. There are many genuine benefits to such systems. We can monitor every public location with cameras and an AI employing these tools. This would help you find lost loved ones, keep track of your kids as they navigate the city, and track criminal activity.

    By recording all of this data, tagged with individual names, we can spontaneously view the public history of any person in the world for law enforcement purposes. Imagine we identify a person as a threat to public safety 10 years from now. We’d have 10 years of data showing everyone they’ve ever associated with and where they went. Then we could weed out entire networks of crime at once by finding patterns among the people they’ve associated with.

    AI can even predict near-future crime from an individual’s recent location history, employment history, etc. Imagine a person is fired from his job, then visits a gun store, then his previous place of employment. Pretty obvious what’s going on, right? But what if it happens over the period of two weeks? Difficult for a human to detect a pattern like this in all the noise of millions of people doing their everyday tasks, but easy for an AI. Genuine benefit.

    Managing Production

    With enough data and processing power, we can manage the entire economy without the need for capitalism. People’s needs could be calculated by an AI and production could be planned years ahead of time to optimize inputs and outputs. The economy, as it stands today, is a distributed network of human brains and various computers. AI can eliminate the need for the humans, which is good because humans are greedy and neurotic; AI can do the same job without either. Again, humans are freed to pursue human endeavors instead of worrying about making sure each farm and factory has the resources it needs to feed and clothe everyone. Genuine benefit.
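
    To make the “optimize inputs and outputs” idea concrete, here is a toy sketch (not anyone’s actual planning system) using plain linear programming: given forecast demand and limited resources, find the cheapest production plan. All of the numbers are invented.

    ```python
    # Toy production-planning sketch: minimize cost while meeting forecast demand
    # under resource limits. All figures are invented for illustration.
    from scipy.optimize import linprog

    cost = [2.0, 3.0]            # cost per unit of products A and B

    A_ub = [[1.0, 2.0],          # labour-hours per unit of A, B
            [3.0, 1.0]]          # raw material per unit of A, B
    b_ub = [1_000.0, 1_200.0]    # labour-hours and material available

    bounds = [(200, None), (150, None)]   # demand forecasts become minimum production levels

    result = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    print(result.x, result.fun)  # optimal quantities and total cost
    ```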

    Togetherness

    We will all be part of the same machine working in harmony instead of fighting over how to allocate resources. Genuine benefit!

    • Mnemnosyne@sh.itjust.works
      9 months ago

      Don’t discount the generative AI either!

      Language-generating AI like LLMs: though we’re still in the early stages and they don’t really work for communication yet, these are going to be the foundation on which AI learns to talk and communicate information to people. Right now they just spit out correct-sounding responses, but eventually the trick of using that language generation to actually communicate will be worked out.
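
      As a deliberately modest example of what “spitting out correct-sounding responses” means in practice, here is a minimal sketch using the Hugging Face transformers library; the small model chosen is just an assumption for illustration.

      ```python
      # Minimal text-generation sketch: the model continues a prompt with plausible-sounding text.
      # It is doing next-token prediction, not "communicating" in any deeper sense.
      from transformers import pipeline

      generator = pipeline("text-generation", model="gpt2")  # small, freely available model
      out = generator("The main benefit of machine learning in medicine is",
                      max_new_tokens=40, num_return_sequences=1)
      print(out[0]["generated_text"])
      ```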

      Image/video/music-generating AI: how difficult it is right now for the average person to illustrate an idea or visual concept they have! But already these image-generating AIs are making such illustration available to the common person. As they advance further and adjusting their output through natural conversational language becomes more effective, this will only get better. A picture paints a thousand words… and now the inverse will also be true, as anyone will be able to create a picture from a sufficient description. And the same applies to video and music.
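
      For anyone who hasn’t tried it, turning a sentence into a picture is now a few lines of code. A minimal sketch with the open-source diffusers library (the model name and GPU assumption are mine, purely for illustration):

      ```python
      # Minimal text-to-image sketch with an open diffusion model.
      # Assumes a CUDA GPU; it also works on CPU, just slowly.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      image = pipe("a watercolour sketch of a lighthouse at dawn").images[0]
      image.save("lighthouse.png")
      ```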

      That said, I love your managing-production point. It’s something I’ve been thinking about too: centrally planned economies have always had serious issues, but if predictive AI can accurately anticipate future need, those problems may be solvable, and we can then take advantage of the inherent efficiency of such a planned system.

      • CrayonRosary@lemmy.world
        9 months ago

        That’s funny because the whole post was sarcastically outlining a dystopian nightmare.

        If that kind of stuff were actually to become real, some dictator would take control of it and subjugate the entire country, or the world… forever. There’d be no way to resist that level of surveillance or machine policing.

        • Jojo@lemm.ee
          9 months ago

          I was with it until it said let’s train AI robots to kill people, and then use them to track every face on the planet and use that data to “identify threats”…

          OP wants a robot overseer, but also wants it to be a police state.

    • Flumpkin@slrpnk.net
      9 months ago

      Nice post. A while back I read something on reddit about a theory that technological advances always end up being used for the worst possible nightmare scenario, but I can’t find it now. Fundamentally I’m a technological optimist, but I can’t yet fully imagine the subtle systemic issues this will cause. Except for the rather obvious one:

      Algorithms on social media decide what people see, and that shapes their perception of the world. It is relatively easy to manipulate in subtle ways. If AI can learn to categorize the psychology of users and then “blindly anticipate” what and how they will respond to stimuli (memes / news / framing), then that would in theory allow for total control by means of psychological manipulation. I’m not sure how close we are to this; the counterargument would be that AI or LLMs currently don’t understand at all what is going on, but public relations / advertising / propaganda works on emotional levels and doesn’t “have to make sense”. The emotional logic is much easier to categorize and generate. So even if there is no blatant evil master plan, just optimizing for max profit and max engagement could make the AI (dumbly) pursue a broad strategy that is more evil than any deliberate plan.
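
      A minimal sketch of that “blindly optimize for engagement” loop: an epsilon-greedy bandit that learns which kind of content gets the strongest reaction, with zero understanding of the content itself. Everything here is hypothetical and simplified to the point of caricature.

      ```python
      # Caricature of an engagement-maximizing feed: an epsilon-greedy bandit that learns
      # which content category gets the most clicks, without understanding any of it.
      import random
      from collections import defaultdict

      CATEGORIES = ["outrage", "cute_animals", "conspiracy", "news", "memes"]
      clicks = defaultdict(float)   # running click counts per category
      shows = defaultdict(int)      # how often each category was shown
      EPSILON = 0.1

      def pick_category():
          if random.random() < EPSILON:   # occasionally explore something new
              return random.choice(CATEGORIES)
          # otherwise exploit whatever has the best click-through rate so far
          return max(CATEGORIES, key=lambda c: clicks[c] / shows[c] if shows[c] else 0.0)

      def record_feedback(category, clicked):
          shows[category] += 1
          clicks[category] += 1.0 if clicked else 0.0

      # Over many impressions the feed drifts toward whatever provokes the strongest
      # reaction, simply because reaction is the only thing it measures.
      ```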

      • GiveMemes@jlai.lu
        9 months ago

        Another great one is science! Machine learning is used for physics, bio, and chem models, in things such as genetic sequencing and generating new drugs, and it’s very useful in figuring out protein folding. It does a lot of the iterative “grunt work”, so to speak. While it may not be the best at finding effective new drugs, it can certainly arrange molecules according to the general rules of organic chemistry much faster than any human, and because of that it has already led to several drug breakthroughs. AI is hugely useful! LLMs are mostly hype.
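
        A tiny sketch of the “grunt work” flavour of this: using the open-source RDKit toolkit to filter candidate molecules by rough drug-likeness rules before any expensive lab work. The candidate SMILES strings are arbitrary examples, and real screening pipelines are far more elaborate.

        ```python
        # Quick candidate filter: score molecules for rough drug-likeness with RDKit.
        # The SMILES strings are arbitrary examples, not real drug candidates.
        from rdkit import Chem
        from rdkit.Chem import Descriptors, QED

        candidates = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1", "CCN(CC)CC"]

        for smiles in candidates:
            mol = Chem.MolFromSmiles(smiles)
            if mol is None:
                continue  # skip unparsable structures
            mw = Descriptors.MolWt(mol)
            logp = Descriptors.MolLogP(mol)
            qed_score = QED.qed(mol)        # 0..1 composite drug-likeness score
            if mw < 500 and logp < 5:       # two of Lipinski's rule-of-five criteria
                print(f"{smiles}: MW={mw:.1f}, logP={logp:.2f}, QED={qed_score:.2f}")
        ```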

    • Jojo@lemm.ee
      9 months ago

      Train it to learn how to use its limbs and eyes to climb over obstacles, open doors, and detain or kill people. We could replace police with them. Super agile robot cops with no racial bias or other prejudices. Arresting people and recording their crimes. Genuine benefit.

      I got as far as AI cops and became sceptical. Like, yeah, sure, but what you’re describing isn’t just a robot being controlled by an AI, it’s also the AI making decisions and choosing who to pursue and such, which is a known weakness right now.

      And then you let them kill people.

  • Flumpkin@slrpnk.net
    9 months ago

    Have you tried writing prompts for an image-generating AI? If you have some idea and play around with it, it’s quite a new thing. An extension of human imagination. YMMV.

    AI is helping us to correctly predict protein folding, which will enable new medication. Afaik it’s a major breakthrough that could alleviate a lot of suffering.

  • sunbeam60@lemmy.one
    9 months ago

    Someone I know recently published an enormous study in Nature Communications where they used machine learning to pattern-match peptides that are clinically significant/bioactive (don’t forget, the vast majority of peptides are currently believed to be degradation products).

    Using mass spectrometry, they effectively shoot a sawed-off shotgun at a wall, then use machine learning to detect the pellets that may have interesting effects. This opens up a new understanding of the role peptides play in the translational game, as well as the potential for a huge number of new treatments for a vast swathe of diseases.
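
    In code terms, the “detect the interesting pellets” step is essentially a classifier over features extracted from the spectra. Here is a hedged sketch with scikit-learn; the features and data are invented placeholders, not the published method.

    ```python
    # Hypothetical sketch: classify detected peptides as "bioactive" vs "degradation product"
    # from numeric features derived from mass-spec data. Features and labels are placeholders,
    # so the scores are meaningless; the point is the shape of the pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2_000, 6))      # e.g. mass, charge, retention time, intensity, ...
    y = rng.integers(0, 2, size=2_000)   # 1 = looks bioactive, 0 = likely degradation product

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))
    ```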

    • Fried_out_Kombi@lemmy.world
      9 months ago

      Sounds similar to some of the research my sister has done in her PhD so far. As I understand it, she had a bunch of snapshots of proteins from a cryo-electron microscope, but those snapshots are 2D. She used ML to construct 3D shapes of different types of proteins. And finding the shape of a protein is important because the shape defines the function. It’s crazy stuff that would be ludicrously difficult and time-consuming to try to do manually.

  • Meron35@lemmy.world
    9 months ago

    Maybe you only do an “odd bit” of mundane writing and the image/music generation is a gimmick, but a lot of the modern world is mundane and pays people lots of money for mundane work. E.g. think of those internal corporate videos which require a script, stock photography and footage, basic corporate music following a 4 chord progression, a voiceover, all edited into a video.

    Steve Taylor is most famous for being the voiceover for Kurzgesagt videos, but more generally he’s a voiceover artist who features in a lot of these boring corporate videos. This type of content is in such high demand that there is an entire industry dedicated to it, which seems well suited to AI.

    https://youtu.be/vDb2h1-7LA0

    This does raise further ethical/economical issues though, as most people in these creative industries actually require income from this boring work to get by.

    • Xtallll@lemmy.blahaj.zone
      9 months ago

      This does raise further ethical/economical issues though, as most people in these creative industries actually require income from this boring work to get by.

      That sounds more like a problem with capitalism than AI.

  • blackstampede@sh.itjust.works
    9 months ago

    My partner and I have founded a company that uses custom AI models trained on research to (partially) automate the process of peer review and replication. We can identify mistakes and some types of fraud in research to aid reviewers as well as extract methods and equations from papers and automatically verify findings. If you know anything about the state of research right now, those are some incredibly large benefits.

  • sunbeam60@lemmy.one
    9 months ago

    Once the technology has embedded itself, our societal adjustments are complete (and they will be PAINFUL), and assuming the profits of AI are sufficiently taxed for the wealth to be redistributed, AI will be seen as the Industrial Revolution x10.

    Most likely however, the rich will get richer.

    • kalkulat@lemmy.world
      9 months ago

      assuming the profits of AI are sufficiently taxed for the wealth to be redistributed

      AH - hah-hah-hah-hah !!!

      Oh well, at least some of us will still be good for cleaning up messes and other physical things. And remember, like they used to say, hard work never killed anybody.

  • kingthrillgore@lemmy.ml
    9 months ago

    Depends on what kind of AI. In gaming, AI is part of the process to entertain and challenge the player, and has even been used to help model life systems.

    I have yet to see how useful LLMs can be outside of being blatant plagiarists, but for a time, projects like AI Dungeon really did push the emphasis on “interactive dynamic narratives”, and it was really fun for a while.

    ML has been an important part in fraud detection for at least a decade now.

  • Pyr_Pressure@lemmy.ca
    9 months ago

    Lots of it isn’t actual AI. Nothing we have at the moment would I actually qualify as true AI. It’s just algorithms spitting out answers to what it interprets your question as. They don’t think or create anything, just regurgitate things in predefined patterns.

  • funkless_eck@sh.itjust.works
    9 months ago

    Something I’m not seeing here is business applications in the supply chain. Managing forward-stocking warehouses, monitoring shipping lanes, ordering for seasonality, and identifying anomalies such as chargebacks, stock-outs, and outlier returns/damages/failures is typically handled by a handful of people mixing spreadsheets, ERP databases, and emails telling people "your light bulbs are stuck in the Suez Canal and your recent batch of cables has a defect".

    AI can replace these systems with ML, and use LLMs to generate the notifications.
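
    As a hedged illustration of what “identify anomalies and generate the notification” could look like, here is a small sketch; the columns, numbers, and wording are invented.

    ```python
    # Hypothetical sketch: flag anomalous shipments with an isolation forest, then draft a
    # plain-language notification for a human to review. All data columns are invented.
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    shipments = pd.DataFrame({
        "days_in_transit": [12, 14, 13, 55, 11, 15],
        "damage_rate":     [0.01, 0.02, 0.01, 0.02, 0.01, 0.30],
        "chargebacks":     [0, 1, 0, 0, 0, 9],
    })

    model = IsolationForest(contamination=0.3, random_state=0).fit(shipments)
    shipments["anomaly"] = model.predict(shipments)   # -1 = anomalous, 1 = normal

    for idx, row in shipments[shipments["anomaly"] == -1].iterrows():
        # In a real system this template could be filled in (or paraphrased) by an LLM.
        print(f"Shipment {idx}: {row.days_in_transit:.0f} days in transit, "
              f"{row.damage_rate:.0%} damaged, {row.chargebacks:.0f} chargebacks - please review.")
    ```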

  • AnarchistArtificer@slrpnk.net
    9 months ago

    An interesting point that I saw about a trial at one of the small London Tube stations:

    • Most of the features involved a human who could come and assist or review the footage. The AI being able to flag wheelchair users was good because the station doesn’t have wheelchair access without assistance.

    • When they tried to make a heuristic for automatically flagging aggressive people, they found that people with their arms up tend to be aggressive. This flagging system led to the unexpected feature that if a Transport for London (TfL) staff member needed assistance (i.e. if medical assistance was necessary, or if someone was being aggressive towards them), the staff member could put their arms up to bring the attention onto them.

    That last one especially seems neat. It seems like the kind of use case where AI has the most power when it’s used as a tool to augment human systems, rather than taking humans out of stuff.
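
    As a rough illustration of how simple that “arms up” heuristic can be, here is a hedged sketch on top of the open-source MediaPipe pose model. TfL’s actual system is proprietary and certainly different; this is just to show the shape of the idea.

    ```python
    # Rough sketch of an "arms raised" flag on top of an off-the-shelf pose model.
    # Not TfL's system; the threshold logic is deliberately naive.
    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose

    def arms_raised(image_bgr) -> bool:
        with mp_pose.Pose(static_image_mode=True) as pose:
            results = pose.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
        if not results.pose_landmarks:
            return False
        lm = results.pose_landmarks.landmark
        # Image y grows downward, so "wrist above shoulder" means a smaller y value.
        left_up = lm[mp_pose.PoseLandmark.LEFT_WRIST].y < lm[mp_pose.PoseLandmark.LEFT_SHOULDER].y
        right_up = lm[mp_pose.PoseLandmark.RIGHT_WRIST].y < lm[mp_pose.PoseLandmark.RIGHT_SHOULDER].y
        return left_up and right_up

    frame = cv2.imread("platform_frame.jpg")
    if frame is not None and arms_raised(frame):
        print("Flag this frame for a human member of staff to review")
    ```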

  • SorteKanin@feddit.dk
    9 months ago

    Much like automated machinery, it could in theory free the workers to do more important, valuable work and leave the menial stuff for the machine/AI. This should make everyone richer, as the companies can produce stuff cheaper and so more of the profits can go to worker salaries.

    Unfortunately, what happens is that the extra productivity doesn’t go to the workers, but just lets the owners of the companies take more of the money with fewer expenses. Usually they fire the human worker rather than giving them a more useful position.

    So yea I’m not sure myself tbh

    • TheMurphy@lemmy.world
      9 months ago

      This already happened with the industrial revolution. It did make the rich awfully rich, but let’s be honest: people are way better off today too.

      It’s not perfect, but it does help in the long run. Also, there’s a big difference in which country you’re in.

      Capitalist socialism will be way better off than hardcore capitalism, because the mindset and systems are already in place to let it benefit the people more.

      • deafboy@lemmy.world
        9 months ago

        Yes, that way the government will be able to make sure it benefits the right people. And we will call it the national socialism… wait… no!

    • SinningStromgald@lemmy.world
      9 months ago

      No, no, you found the actual "use" for AI as far as businesses go. They don’t care about the human cost of adopting AI and firing large swaths of workers, just the profits.

      Which is why governments should be moving quickly to regulate AI and its uses. But governments are slow, plodding things full of old people who get confused by toasters.

      As always capitalism kills.

    • doctorcrimson@lemmy.today
      9 months ago

      The question wasn’t "in theory, are there any genuine benefits?"; it was whether there are any right now.

  • gorysubparbagel@lemmy.world
    9 months ago

    Most email spam detection and antimalware use ML. There are also use cases in medicine, such as trying to predict early whether someone has a condition.
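
    Spam filtering is one of the oldest and least glamorous ML wins. A minimal sketch of the idea with scikit-learn (the training messages are invented; real filters train on millions of examples):

    ```python
    # Minimal spam-filter sketch: bag-of-words features + naive Bayes.
    # The tiny training set is invented purely for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    messages = [
        "WIN a FREE cruise, click now", "Cheap meds no prescription",
        "Meeting moved to 3pm", "Can you review my pull request?",
    ]
    labels = ["spam", "spam", "ham", "ham"]

    clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(messages, labels)
    print(clf.predict(["Click here for a free prize", "Lunch tomorrow?"]))
    ```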

    • Lemminary@lemmy.world
      9 months ago

      It’s also being used in drug R&D to find compounds with similar properties, like antimicrobial activity, afaik.

  • Rooki@lemmy.world
    9 months ago

    AI has some interesting use cases, but should not be trusted 100%.

    Like GitHub Copilot (or any "code copilot"):

    • Good for repetitive stuff with minor changes
    • Can help with common, easy coding errors
    • Code quality can take a big hit
    • For coding beginners, it can lead to a deficit of real understanding of your code (and because of that could lead to bugs, security backdoors…)

    Like translations (code or language):

    • Good translations of the common/big languages (English, German…)
    • Can extend a brief summary into a big wall of text (and back)
    • A wrong translation can lead to someone misunderstanding the text, so the point gets missed
    • It removes the “human” part; depending on the context, AI-written text can usually be identified

    Like classification of text/images for moderation:

    • Helps identify bad-faith text/images (a minimal sketch follows at the end of this comment)
    • False positives can be annoying to deal with

    But don’t do anything that is IMPORTANT with AI; only use it for fun, or if you can verify that the code/text the AI wrote is correct!
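
    For the moderation bullet above, here is a hedged example of what an off-the-shelf text classifier looks like. The model name and its label scheme are assumptions (a commonly used open toxicity model), and, as the list says, false positives mean a human still has to review the flags.

    ```python
    # Sketch: flag possibly bad-faith text with an off-the-shelf classifier, then queue it
    # for human review. The model name and label scheme are assumptions; scores are
    # probabilities, not verdicts.
    from transformers import pipeline

    toxicity = pipeline("text-classification", model="unitary/toxic-bert")

    for comment in ["Have a great day!", "You are an idiot and should leave"]:
        result = toxicity(comment)[0]
        if result["label"].lower() == "toxic" and result["score"] > 0.8:
            print(f"Flag for moderator review: {comment!r} ({result['score']:.2f})")
    ```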

    • fiddlestix@lemmy.worldOP
      9 months ago

      Actually, the summaries are good, but you have to know some of the material anyway and then check to see if it’s just making stuff up. That’s been my experience.

    • Lemminary@lemmy.world
      9 months ago

      Adding to the language section, it’s also really good at guessing words if you give it a decent definition. I think this has other applications but it’s quite useful for people like me with the occasionally leaky brain.

  • IchNichtenLichten@lemmy.world
    9 months ago

    Machine learning is important in healthcare, and it’s going to get better and better. If you train an algorithm on two sets of data, one a collection of normal scans and the other scans from patients with an abnormality, it’s often more accurate than a medical professional at sorting new scans.
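
    A hedged sketch of that “two sets of scans” setup: fine-tune a pretrained image model to separate normal from abnormal scans. The folder layout and hyperparameters are placeholders, and a clinical system needs far more than this (proper validation, bias checks, regulatory sign-off).

    ```python
    # Hypothetical sketch: fine-tune a pretrained CNN to classify scans as normal/abnormal.
    # Folder layout and hyperparameters are placeholders, not a clinically validated pipeline.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    # Expects scans/train/normal/*.png and scans/train/abnormal/*.png
    train_data = datasets.ImageFolder("scans/train", transform=transform)
    loader = DataLoader(train_data, batch_size=16, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: normal / abnormal

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    ```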

    As for the fancy chatbot side of things, I suspect it’s only going to lead to a bunch of middle management dickheads believing they can lay off staff until the inevitable happens and it blows up in their faces.