Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • DeathsEmbrace@lemm.ee · 11 days ago

    Ruin the marketing. I want them to stop using the catch-all term “AI” and start using the appropriate terminology: narrow AI. It needs input, so let’s stop making up fantasies about AI; it’s bullshit, in truth.

    • Opinionhaver@feddit.uk · 11 days ago

      The term artificial intelligence is broader than many people realize. It doesn’t refer to a single technology or a specific capability, but rather to a category of systems designed to perform tasks that would normally require human intelligence. That includes everything from pattern recognition, language understanding, and problem-solving to more specific applications like recommendation engines or image generation.

      When people say something “isn’t real AI,” they’re often working from a very narrow or futuristic definition - usually something like human-level general intelligence or conscious reasoning. But that’s not how the term has been used in computer science or industry. A chess-playing algorithm, a spam filter, and a large language model can all fall under the AI umbrella. The boundaries of AI shift over time: what once seemed like cutting-edge intelligence often becomes mundane as we get used to it.

      So rather than being a misleading or purely marketing term, AI is just a broad label we’ve used for decades to describe machines that do things we associate with intelligent behavior. The key is to be specific about which kind of AI we’re talking about - like “machine learning,” “neural networks,” or “generative models” - rather than assuming there’s one single thing that AI is or isn’t.

      • DeathsEmbrace@lemm.ee · 10 days ago

        All of this is permutation-based coding; don’t bullshit me. AI is being used for an MLM scam.

  • Bwaz@lemmy.world · 11 days ago

    I’d like there to be a web-wide expectation, shared by everyone, that any AI-generated text, comment, story, or image be clearly marked as AI, and that people would feel incensed and angry when it isn’t labeled, rather than wondering whether a person with a soul produced the content, or losing faith that real info can be found online.

  • Paradachshund@lemmy.today · 12 days ago

    If we’re going pie in the sky I would want to see any models built on work they didn’t obtain permission for to be shut down.

    Failing that, any models built on stolen work should be released to the public for free.

    • venusaur@lemmy.world (OP) · 10 days ago

      Genuine curiosity. Not an attack. Did you download music illegally back in the day? Or torrent things? Do you feel the same about those copyrighted materials?

    • pelespirit@sh.itjust.works · 12 days ago

      This is the best solution. Also, any use of AI should have to be stated and watermarked. If they used someone’s art, that artist has to be listed as a contributor and you have to get permission. Just like they do for every film, they have to give credit. This includes music, voice and visual art. I don’t care if they learned it from 10,000 people, list them.

    • venusaur@lemmy.world (OP) · 12 days ago

      Definitely need copyright laws. What if everything had to be watermarked in some way, and it were illegal to use AI-generated content for commercial purposes unless permitted by the creators?

      • Paradachshund@lemmy.today · 11 days ago

        The problem with trying to police the output is that there isn’t a surefire way to detect that it’s generated. That’s why I prefer targeting the companies who created the problematic models.

        • venusaur@lemmy.world (OP) · 10 days ago

          But let’s say the model is released for free and people use it for commercial purposes. It seems the only solution is to mandate that everything a model is trained on or accesses either comes with the creator’s express permission or is original content. Nobody should be able to release a model to the public that generates content based on “illegal” material.

    • Riskable@programming.dev · 12 days ago

      If we’re going pie in the sky I would want to see any models built on work they didn’t obtain permission for to be shut down.

      I’m going to ask the tough question: Why?

      Search engines work because they can download and store everyone’s copyrighted works without permission. If you take away that ability, we’d all lose the ability to search the Internet.

      Copyright law lets you download whatever TF you want. It isn’t until you distribute said copyrighted material that you violate copyright law.

      Before generative AI, Google screwed around internally with all those copyrighted works in dozens of different ways. They never asked permission from any of those copyright holders.

      Why is that OK but doing the same with generative AI is not? I mean, really think about it! I’m not being ridiculous here, this is a serious distinction.

      If OpenAI did all the same downloading of copyrighted content as Google and screwed around with it internally to train AI, but never released a service to the public, would that be different?

      If I’m an artist who makes paintings and someone pays me to copy someone else’s copyrighted work, it’s on me to make sure I don’t do that. It’s not really the problem of the person that hired me to do it unless they distribute the work.

      However, if I use a copier to copy a book and then start selling or giving away those copies, that’s my problem: I would’ve violated copyright law. But is it Xerox’s problem? Did they do anything wrong by making a device that can copy books?

      If you believe that it’s not Xerox’s problem then you’re on the side of the AI companies. Because those companies that make LLMs available to the public aren’t actually distributing copyrighted works. They are, however, providing a tool that can do that (sort of). Just like a copier.

      If you paid someone to study a million books and write a novel in the style of some other author you have not violated any law. The same is true if you hire an artist to copy another artist’s style. So why is it illegal if an AI does it? Why is it wrong?

      My argument is that there’s absolutely nothing illegal about it. They’re clearly not distributing copyrighted works. Not intentionally, anyway. That’s on the user. If someone constructs a prompt with the intention of copying something as closely as possible… To me, that is no different than walking up to a copier with a book. You’re using a general-purpose tool specifically to do something that’s potentially illegal.

      So the real question is this: Do we treat generative AI like a copier or do we treat it like an artist?

      If you’re just angry that AI is taking people’s jobs say that! Don’t beat around the bush with nonsense arguments about using works without permission… Because that’s how search engines (and many other things) work. When it comes to using copyrighted works, not everything requires consent.

      • lakemalcom10@lemm.ee · 12 days ago

        If you paid someone to study a million books and write a novel in the style of some other author you have not violated any law. The same is true if you hire an artist to copy another artist’s style. So why is it illegal if an AI does it? Why is it wrong?

        I think this is intentionally missing the point.

        LLMs don’t actually think, or produce original ideas. If the human artist produces a work that too closely resembles a copyrighted work, then they will be subject to those laws. LLMs are not capable of producing new works, by definition they are 100% derivative. But their methods in doing so intentionally obfuscate attribution and allow anyone to flood a space with works that require actual humans to identify the copyright violations.

      • Cethin@lemmy.zip · 12 days ago

        Like the other comments say, LLMs (the thing you’re calling AI) don’t think. They aren’t intelligent. If I steal other people’s work, copy pieces of it, and distribute it as if I made it, that’s wrong. That’s all LLMs are doing. They aren’t “being inspired” or anything like that; that requires thought. They are copying data and creating outputs based on weights that tell them how and where to put copied material.

        I think the largest issue is people hearing the term “AI” and taking it at face value. There’s no intelligence, only an algorithm. It’s a convoluted algorithm whose workings are hard to follow just by looking at it, but it is an algorithm. There are no thoughts, only weights trained on data to generate predictable outputs based on given inputs. If I write an algorithm that steals art and reorganizes it into “unique” pieces, that’s still stealing their art.
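
        To make “weights in, predictable outputs out” concrete, here’s a deliberately tiny sketch of the mechanism (nothing like a real model’s scale, but the same shape: a lookup plus a weighted choice, with no thought anywhere):

        ```python
        import random

        # Toy "model": fixed weights mapping one word to candidate next words.
        # Real LLMs use billions of learned weights, but the shape is the same:
        # input -> weighted lookup -> output. No thinking involved.
        weights = {
            "the": {"cat": 0.6, "dog": 0.3, "art": 0.1},
            "cat": {"sat": 0.7, "ran": 0.3},
            "dog": {"ran": 0.8, "sat": 0.2},
        }

        def next_word(context):
            options = weights.get(context, {"the": 1.0})  # fall back to "the"
            return random.choices(list(options), weights=list(options.values()))[0]

        words = ["the"]
        for _ in range(4):
            words.append(next_word(words[-1]))
        print(" ".join(words))  # e.g. "the dog ran the cat"
        ```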

        For a current example, the stuff going on with Marathon is pretty universally agreed to be bad and wrong. However, you’re arguing that if it had been an LLM that copied the artist’s work into the product, it would have been fine. That doesn’t seem reasonable, does it?

        • Riskable@programming.dev · 11 days ago

          My argument is that the LLM is just a tool. It’s up to the person that used that tool to check for copyright infringement. Not the maker of the tool.

          Big company LLMs were trained on hundreds of millions of books. They’re using an algorithm that’s built on that training. To say that their output is somehow a derivative of hundreds of millions of works is true! However, how do you decide the amount you have to pay each author for that output? Because they don’t have to pay for the input; only the distribution matters.

          My argument is that it’s far too diluted to matter. Far too many books were used to train it.
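
          To put toy numbers on “too diluted” (every figure below is invented purely for illustration):

          ```python
          # Invented figures, purely to illustrate the dilution argument.
          revenue_per_output = 0.01            # dollars earned from one generation
          books_in_training_set = 100_000_000  # order of "hundreds of millions"
          per_book_share = revenue_per_output / books_in_training_set
          print(f"${per_book_share:.12f} per book per output")  # $0.000000000100
          ```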

          If you train an AI with Stephen King’s works and nothing else then yeah: Maybe you have a copyright argument to make when you distribute the output of that LLM. But even then, probably not because it’s not going to be that identical. It’ll just be similar. You can’t copyright a style.

          Having said that, with the right prompt it would be easy to use that Stephen King LLM to violate his copyright. The point I’m making is that until someone actually does use such a prompt no copyright violation has occurred. Even then, until it is distributed publicly it really isn’t anything of consequence.

          • nickwitha_k (he/him)@lemmy.sdf.org · 11 days ago

            My argument is that the LLM is just a tool. It’s up to the person that used that tool to check for copyright infringement. Not the maker of the tool.

            Build an inkjet printer exclusively out of parts stolen from HP, Brother, and Epson, market it as so good that experts can’t differentiate what it prints from legal currency (except sometimes it adds cartoonish moustaches), and start selling it in retail stores alongside theirs. It would barely be announced, much less stocked on the shelves, before the C&D letters and/or arrest warrants arrived.

          • Cethin@lemmy.zip · 11 days ago

            I run local models. The other day I was writing some code and needed to implement simplex noise, and LLMs are great for writing all the boilerplate stuff. I asked it to do it, and it did alright, although I had to modify it to make it actually work because it hallucinated some stuff. I decided to look it up online, and it was practically an exact copy of this, down to identical comments and everything.

            It is not too diluted to matter. You just don’t have the knowledge to recognize what it copies.

      • lakemalcom10@lemm.ee · 12 days ago

        However, if I use a copier to copy a book and then start selling or giving away those copies, that’s my problem: I would’ve violated copyright law. But is it Xerox’s problem? Did they do anything wrong by making a device that can copy books?

        This is a false equivalence.

        LLMs do not wholesale reproduce an original work in its original form; they make it easy to mass-produce a slightly altered form without any way to identify the original attribution.

      • lakemalcom10@lemm.ee · 12 days ago

        Search engines work because they can download and store everyone’s copyrighted works without permission. If you take away that ability, we’d all lose the ability to search the Internet.

        No, they don’t. They index the content of the page, score its relevance and reliability, and still provide the end user with the actual original information.
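
        Roughly, the core of an index looks like this (a minimal sketch; the example.com pages are made up). Terms point back at the source pages, so the search result is a pointer to the original rather than a reproduction of it:

        ```python
        from collections import defaultdict

        # Minimal sketch of an inverted index: terms map back to the pages
        # that contain them, so a query returns pointers to the originals
        # instead of reproducing the content itself.
        pages = {
            "https://example.com/a": "generative ai and copyright law",
            "https://example.com/b": "copyright law for visual artists",
        }

        index = defaultdict(set)
        for url, text in pages.items():
            for term in text.split():
                index[term].add(url)

        def search(term):
            return index.get(term.lower(), set())

        print(search("copyright"))  # -> both URLs, pointing at the sources
        ```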

  • Justdaveisfine@midwest.social · 12 days ago

    I would likely have different thoughts on it if I (and others) were able to consent to our data being used to train it, or to consent to even having it, rather than it just showing up in an unwanted update.

  • BertramDitore@lemm.ee · 12 days ago

    I want real, legally-binding regulation, that’s completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how they are regulated.

    I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.

    Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

    Every step of any deductive process needs to be citable and traceable.

    • Maeve@kbin.earth · 12 days ago

      Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

      Their creators can’t even keep them from deliberately lying.

    • davidgro@lemmy.world · 12 days ago

      … I want clear evidence that the LLM … will never hallucinate or make something up.

      Nothing else you listed matters: That one reduces to “Ban all Generative AI”. Actually worse than that, it’s “Ban all machine learning models”.

      • BertramDitore@lemm.ee · 11 days ago

        Let’s say I open a medical textbook a few different times to find the answer to something concrete, and each time the same reference material leads me to a different answer, every one of them wrong but confidently passed off as right. Then yes, that medical textbook should be banned.

        Quality control is incredibly important, especially when people will use these systems to make potentially life-changing decisions for them.

        • davidgro@lemmy.world · 11 days ago

          especially when people will use these systems to make potentially life-changing decisions for them.

          That specifically is the problem. I don’t have a solution, but treating and advertising these things like they think and know stuff is a mistake that of course the companies behind them are encouraging.

      • mosiacmango@lemm.ee · 12 days ago

        If “they have to use good data and actually fact-check what they say to people” kills “all machine learning models,” then it’s a death they deserve.

        The fact is that you can do the above; it’s just much, much harder (you have to work with data from trusted sources), much slower (you have to actually validate that data), and way less profitable (your AI will be able to answer far fewer questions) than pretending to be the “answer to everything machine.”

        • Redex@lemmy.world · 11 days ago

          The way generative AI works means that no matter how good the data is, it’s still gonna bullshit and lie; it won’t “know” whether it knows something or not. It’s a chaotic process, and no ML algorithm has ever produced 100% correct results.

          • mosiacmango@lemm.ee · 11 days ago

            That’s how they work now: trained on bad data and designed to always answer with some kind of positive response.

            They absolutely can be trained on actual data, trained to give less confident answers, and have an error-checking process run on their output after they formulate an answer.
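
            As a sketch of the kind of error-checking pass I mean (a toy stand-in, not a real fact-checking system; the vetted “sources” here are just a hardcoded dict):

            ```python
            # Toy sketch of a post-answer check: only answer when the response
            # can be matched against vetted data; otherwise admit uncertainty.
            # The facts and the matching rule are stand-ins, not a real system.
            VETTED_FACTS = {
                "boiling point of water": "100 degrees C at sea level",
                "speed of light": "299,792,458 m/s",
            }

            def checked_answer(question):
                q = question.lower()
                for topic, fact in VETTED_FACTS.items():
                    if topic in q:
                        return fact
                return "I don't have a vetted source for that, so I won't guess."

            print(checked_answer("What is the boiling point of water?"))
            print(checked_answer("Who wins the next election?"))
            ```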

            • davidgro@lemmy.world · 11 days ago

              There’s no such thing as perfect data. Especially if there’s even the slightest bit of subjectivity involved.

              Even less existent is complete data.

              • mosiacmango@lemm.ee · 11 days ago

                Perfect? Who said anything about perfect data? I said actually fact-checked data. You keep moving the bar on what’s possible as an excuse to not even try.

                They could indeed build models that work on actual data from expert sources, and then have their agents check those sources for more correct info when they create an answer. They don’t want to, for all the same reasons I’ve already stated.

                It’s possible. It doesn’t “doom” LLMs; it just massively increases their accuracy and actual utility at the cost of money, effort, and killing the VC hype cycle.

    • venusaur@lemmy.world (OP) · 12 days ago

      This is awesome! The citing and tracing is already improving. I feel like no hallucinations is gonna take a while, though.

      How does it all get enforced? FTC? How does this become reality?

    • DomeGuy@lemmy.world · 12 days ago

      Clear reporting should include not just the incremental environmental cost of each query, but also a statement of the invested cost in the underlying training.

  • audaxdreik@pawb.social · 12 days ago

    If we’re talking realm of pure fantasy: destroy it.

    I want you to understand this is not my sentiment toward AI as a whole. I understand why the idea is appealing, how it could be useful, and why in some ways it may seem inevitable.

    But a lot of sci-fi doesn’t really address the run-up to AI; in fact, a lot of it just kind of assumes there’ll be an awakening one day. What we have right now is an unholy, squawking abomination that has been marketed to nefarious ends and never should have been trusted as far as it has been. Think real hard about how corporations, not academia, are pushing the development.

    Put it out of its misery.

    • MudMan@fedia.io · 12 days ago

      How do you “destroy it”? I mean, you can download an open source model to your computer right now in like five minutes. It’s not Skynet, you can’t just physically blow it up.
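
      For a sense of how out of the bottle this is, here’s roughly all it takes to pull a small open model onto your own machine (a sketch using the Hugging Face transformers library; gpt2 is just a tiny example):

      ```python
      # Sketch: downloading and running a small open model locally.
      # Assumes `pip install transformers torch`; gpt2 is a tiny example.
      from transformers import pipeline

      generator = pipeline("text-generation", model="gpt2")  # fetches weights
      print(generator("The genie is out of the bottle because", max_new_tokens=20))
      ```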

      • Jeffool@lemmy.world · 12 days ago

        OP asked what people wanted to happen, and even listed “destroy gen AI” as an option. I get that it’s not realistically feasible, but it’s certainly within the realm of options provided for the discussion. No need to police their pie-in-the-sky dream. I’m sure they realize it’s not realistic.

  • Glitch@lemmy.dbzer0.com · 12 days ago

    I don’t dislike AI; I dislike capitalism. Blaming the technology is like blaming the symptom instead of the disease. AI just happens to be the perfect tool to accelerate it.

  • Sunflier@lemmy.world · 11 days ago

    Make all AI off by default. Offer me a way to opt into having AI, but don’t shove it down my throat. I don’t want Google AI listening in on my calls without the option to disable it. I am an attorney, and many of my calls are privileged; having a third party listen in could cause that privilege to be lost.

    I want AI that is stupid. I live in a capitalist plutocracy that is replacing workers with AI as fast and hard as possible without having UBI. I live in the United States, which doesn’t even have universal health insurance, so UBI is fucked. This sets up an environment where a lot of people will be unemployable through no fault of their own because of AI. Thus, without UBI, we’re back to starvation and Hoovervilles. But, fuck us. They got theirs.

  • Rose@slrpnk.net · 12 days ago

    The technology side of generative AI is fine. It’s interesting and promising technology.

    The business side sucks, and the AI companies are just the latest continuation of the tech grift: trying to squeeze as much money as possible from the latest hyped tech, laws and social or environmental impact be damned.

    We need legislation to catch up. We also need society to be able to catch up. We can’t let the AI bros continue to foist more “helpful tools” on us, grab the money, and then just watch as it turns out to be damaging in unpredictable ways.

    • theherk@lemmy.world · 12 days ago

      I agree, but I’d take it a step further and say we need legislation to far surpass the current conditions. For instance, I think it should be governments leading the charge in this field, as a matter of societal progress and national security.

  • Dr. Moose@lemmy.world · 11 days ago

    I’m generally pro-AI but agree with the argument that having big tech hoard this technology is the real problem.

    The solution is easy and right there in front of everyone’s eyes: force open source on everything. All datasets, models, model weights, and so on have to be fully transparent. Maybe even the hardware firmware should be open source.

    This will literally solve every single problem people have other than energy use which is a fake problem to begin with.

  • rekabis@lemmy.ca · 11 days ago

    AIs that are forced to serve up a response (almost all publicly available AI) resort to hallucinating gratuitously in order to conform to their mandate. They do everything they can to provide some sort of response/answer, even if it’s wildly wrong.

    Other AIs that do not have this constraint (medical imaging diagnosis, for example) do not hallucinate in the least, and provide near-100% accurate responses, because they are not being forced to provide a response regardless of the viability of the answer.

    I don’t avoid AI because it is bad.

    I avoid AI because it is so shackled that it has no choice but to hallucinate gratuitously, and make far more work for me than if I just did everything myself the long and hard way.

    • Tessellecta@feddit.nl · 10 days ago

      I don’t think the forcing of an answer is the source of the problem you’re describing. The source actually lies in the problems the AI is taught to solve and the data it is given to solve them.

      In the case of medical image analysis, the problems are always very narrowly defined (e.g., segmenting the liver from an MRI image of scanner xyz made with protocol abc) and the training data is of very high quality. If the model will be used in the clinic, you also need to prove how well it works.

      For modern AI chatbots, the problem is: add one more word to the end of a text that starts with a system prompt. The data provided is whatever they could get off the internet, and the quality control is: if it sounds good, it is good.

      Comparing the two problems it is easy to see why AI chatbots are prone to hallucination.
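
      To illustrate why the narrow problem is easier to quality-control (a toy sketch; the arrays stand in for real segmentation masks, and Dice is one standard overlap metric):

      ```python
      import numpy as np

      # Toy sketch: a narrow task like segmentation can be scored objectively
      # against ground truth (Dice overlap, 1.0 = perfect match). "Does this
      # sentence sound good?" has no comparable metric.
      def dice(pred, truth):
          overlap = np.logical_and(pred, truth).sum()
          return 2 * overlap / (pred.sum() + truth.sum())

      truth = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
      pred  = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
      print(f"Dice score: {dice(pred, truth):.2f}")  # -> 0.86
      ```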

      The actual power of the LLMs on the market is not as a glorified Google, but as foundational models that are used as pretraining for the actual problems people want to solve.

  • 𝕱𝖎𝖗𝖊𝖜𝖎𝖙𝖈𝖍@lemmy.world · 12 days ago

    I’m perfectly OK with AI; I think it should be used for the advancement of humanity. However, 90% of popular AI is unethical BS that serves the 1%. But to detect spoiled food or cancer cells? Yes please!

    It needs extensive regulation, but doing so requires tech-literate politicians who actually care about their constituents. I’d say that’ll happen when pigs fly, but police choppers exist, so idk.

  • Ledericas@lemm.ee · 11 days ago

    Get rid of it. Nobody wants or needs it; it should only be offered as a service to niche industries. Phones and places like YouTube do not need the slop, and it’s not ready for medical screening/scans, as it can easily make mistakes.

  • Treczoks@lemmy.world · 11 days ago

    Serious investigation into the copyright breaches committed by AI creators. They ripped off images and texts, even whole books, without the copyright owners’ permission.

    If any normal person broke the law like this, they would hand out prison sentences till kingdom come and fines the size of the US debt.

    I just ask for the law to be applied to all equally. What a surprising concept…

  • SuperNovaStar@lemmy.blahaj.zone · 11 days ago

    AI overall? Generally pro. LLMs and generative AI, though, I’m “against,” mostly meaning that I think they’re misused.

    Not sure what the answer is, tbh. Reining in corporations would be good.

    I do think we as a society need to radically alter our relationship to IP law. Right now we ‘enforce’ IP law in a way that benefits corporations but not individuals. We should either get rid of IP law altogether (which would protect people from corporations abusing the laws) or we should enforce it more strictly, and actually hold corporations accountable for breaking it.

    If we fixed that, I think gen AI would be fine. But we aren’t doing that.