• Truscape@lemmy.blahaj.zone
    24 days ago

    Distributed platform owned by no one founded by people who support individual control of data and content access

    Majority of users are proponents of owning what one makes and supporting those who create art and entertainment

    The AI industry shits on both of the above by harvesting private data and creative work without consent or compensation, on top of being a money, energy, and attention tar pit

    Buddy, do you know what you’re here for? Or are you yet another bot lost in the shuffle?

    • dactylotheca@suppo.fi
      24 days ago

      Or are you yet another bot lost in the shuffle?

      Yes, good job, anybody with opinions you don’t like is a bot.

      It’s not like this was even a pro-AI post; it was just pointing out that even the most facile “ai bad, applause please” stuff will get massively upvoted

      • Truscape@lemmy.blahaj.zone
        24 days ago

        Yeah, I guess that was a bit too far; I posted before I checked the user history or really gave it time to sit in my head.

        Still, this kind of meme is usually used to imply that the comment is just a trend rather than a legitimate statement.

        • Voyajer@lemmy.world
          24 days ago

          Why would you lend any credence to the weakest appeal to the masses presented on the site?

        • dactylotheca@suppo.fi
          24 days ago

          HaVe YoU ConSiDeReD thE PoSSiBiLiTY that I’m not pro-AI and I understand the downsides, and can still point out that people flock like lemmings (*badum tss*) to any “AI bad” post regardless of whether it’s actually good or not?

          • Doll_Tow_Jet-ski@fedia.io
            23 days ago

            Ok, so your point is: Look! People massively agree with an idea that makes sense and is true.

            Color me surprised…

          • grrgyle@slrpnk.net
            24 days ago

            Why would a post need to be good? It just needs a good point. Like this post is good enough, even if I don’t agree that we have enough facile “ai = bad” posts.

            Depends on the community, but for most of them pointing out ways that ai is bad is probably relevant, welcome, and typical.

  • razorcandy@discuss.tchncs.de
    24 days ago

    There are valid reasons for disliking AI (rather, how it’s being used) and I’ll upvote when a relevant, informed argument is made against it. Otherwise I’ll mentally filter out the low-effort comments that just say “fuck AI” with dozens of upvotes.

    • frog@feddit.uk
      24 days ago

      Why the hell are you being downvoted? You are completely right.

      People will look back at this and “hoverboards” and think “are they stupid!?”

      Mislabeling a product isn’t great marketing, it’s false advertising.

      • Grimy@lemmy.world
        23 days ago

        AI is an umbrella term that covers many things. We have been referring to simple pathfinding algorithms in video games as AI for two decades; LLMs are AI.

        • frog@feddit.uk
          23 days ago

          There is a distinction between video game AI and computer science AI. People know that video game AI isn’t really AI. The way LLMs are marketed, with terms like “super intelligence”, is deception.

          No one is typing out prompts to an NPC asking whether dogs can eat chocolate.

          • Grimy@lemmy.world
            23 days ago

            Calling an LLM an AI isn’t saying it’s superintelligent, and I don’t know of any company that is marketing it like that. There aren’t multiple definitions of AI depending on the industry you’re in.

            Just read the wiki, it is pretty clear. Something does not have to be “intelligent” to be considered AI, just like a shooting star isn’t actually a star. It’s an umbrella term that holds many things, including video game pathfinding, LLMs, recommendation systems, autonomous driving solutions, etc.
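            For what it’s worth, this is the sort of thing “video game AI” has meant for decades: a plain search routine, no learning involved. A toy sketch in Python (mine, purely illustrative, not from the wiki):

            # Toy sketch: classic "video game AI" -- breadth-first search pathfinding
            # on a small grid. No machine learning anywhere, yet game developers have
            # called this kind of routine AI for decades.
            from collections import deque

            def find_path(grid, start, goal):
                """Return a list of (row, col) steps from start to goal, or None."""
                rows, cols = len(grid), len(grid[0])
                queue = deque([[start]])
                seen = {start}
                while queue:
                    path = queue.popleft()
                    r, c = path[-1]
                    if (r, c) == goal:
                        return path
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] == 0 and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            queue.append(path + [(nr, nc)])
                return None

            # 0 = walkable, 1 = wall
            level = [
                [0, 0, 0, 1],
                [1, 1, 0, 1],
                [0, 0, 0, 0],
            ]
            print(find_path(level, (0, 0), (2, 3)))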

        • occultist8128@infosec.pub
          23 days ago

          Yes, LLMs are AI; anyone who doesn’t agree with this is being stupid, sorry. But saying AI means LLM is wrong. Please take a look at my reasoning for why I’m always against language misuse here: https://infosec.pub/comment/17417999. You English speakers sometimes make things harder to understand by misusing terms like ‘literally’ or using ‘AI’ to mean only LLMs. Language is meant to clarify, not confuse, and this shift in meaning can lead to misunderstandings, especially when talking about technical concepts.

      • occultist8128@infosec.pub
        24 days ago

        IDK LMAO, that’s what I really hate about Reddit/Lemmy: the voting system. People downvote but don’t say where they think I’m wrong. I mean, at least argue; state your (supposedly harmless) opinion out loud. I even added a disclaimer there that I don’t promote LLMs and such. I don’t really care either way; I stand with correctness and do what I can to correct what is wrong. I totally agree with @sentient_loom@sh.itjust.works tho.

      • occultist8128@infosec.pub
        24 days ago

        Yes, and that’s wrong too. Now most people here are falling into the same hole. I’m not here to promote, support, or stand with LLMs or gen-AI; I want to correct what is wrong. You can hate something, but please be objective and rational.

          • occultist8128@infosec.pub
            23 days ago

            It’s partially correct, but AI doesn’t always mean LLM. Etymology is important here. Don’t normalize illiteracy.

            • PeriodicallyPedantic@lemmy.ca
              23 days ago

              This is how etymology works.

              Do you think all the words we use today meant exactly the same thing 300 years ago?
              No, people used them “incorrectly”, that usage gained popularity, and that made it correct.

              What you call illiteracy is literally how etymology works.

              • occultist8128@infosec.pub
                23 days ago

                Just to clarify, do you personally agree that LLMs are a subset of AI, with AI being the broader category that includes other technologies beyond LLMs?

                I come from a technical background and have worked in AI to help people and small businesses, whether for farming, business decisions, or other needs. I can’t agree with the view that AI is inherently bad; it’s a valuable tool for many. What’s causing confusion is that ‘AI’ is often used to mean LLMs, which is inaccurate from a technical perspective. My goal is simply to encourage precise language use to avoid misunderstandings. People often misuse words in ways that stray far from their original etymology. For example, in Indonesia, we use the word ‘literally’ as it’s meant: in a literal sense, not figuratively, as it’s often misused in English nowadays. The word ‘literally’ in Indonesian would be translated as ‘secara harfiah,’ and when used, it means exactly as stated. Just like ‘literally,’ words should stay connected to their roots, whether Latin, Greek, or otherwise, as their original meanings give them their true value and purpose.

                • PeriodicallyPedantic@lemmy.ca
                  23 days ago

                  Depending on context, jargon and terminology change.
                  In this context, I’d agree that LLMs are a subset of technologies under the umbrella term “AI”. But in common English discourse, LLM and AI are often used interchangeably. That’s not wrong, because correctness is defined by the actual real usage of native speakers of the language.

                  I also come from a tech background. I’m a developer with 15 years experience, and I work for a large company, and my job is currently integrating LLMs and more traditional ML models into our products, because our shareholders think we need to.
                  Specificity is useful in technical contexts, but in these public contexts, almost everyone knows what we’re talking about, so the way we’re using language is fine.

                  You know it’s bad when someone with my username thinks you’re being too pedantic lol. Don’t be a language prescriptivist.

          • Sentient Loom@sh.itjust.works
            23 days ago

            Not really, since “AI” is a pre-existing and MUCH more general term which has been intentionally commandeered by bad actors to mean a particular type of AI.

            AI remains a broader field of study.

            • occultist8128@infosec.pub
              23 days ago

              I completely agree. Using AI to refer specifically to LLMs does reflect the influence of marketing from companies that may not fully represent the broader field of artificial intelligence. It’s ironic that those who oppose LLM usage might end up sounding like the very bad actors they criticize if they also use the same misleading terms.

                • occultist8128@infosec.pub
                  23 days ago

                  As someone whose main language isn’t English, DeepL is useful for my local community (and for me). It comes down to how it’s implemented. Still trying to be open-minded; yeah, the extensive resource usage is bad for the earth though, and I wish there were more optimization.

              • PeriodicallyPedantic@lemmy.ca
                22 days ago

                I don’t get to decide if the marketing terms used by the companies I hate end up becoming the common terms.

                If I stubbornly refuse to use the common terms and instead only use the technical terms, then I’m only limiting the reach of my message.

                OpenAI marketing has successfully made LLM one of the definitions of the term AI, and the most common term used to refer to the tech, in public spaces.

            • PeriodicallyPedantic@lemmy.ca
              22 days ago

              It doesn’t matter what you want, I’m just describing how language works.

              If everyone says a word means a thing, then it means that thing. Words can have multiple meanings.

              • Sentient Loom@sh.itjust.works
                22 days ago

                AI remains a broader field of study, an active field of study which tons of people are invested in, and they use AI to refer to the broader field of study in which they’re professionally invested.

                I’m just describing how language works.

                No you’re not. And you’re not as smart as you think you are.

                If everyone says a word means a thing

                It’s not literally everybody, and you know it, and you also know that LLMs are not the entire actual category of AI.

                • PeriodicallyPedantic@lemmy.ca
                  21 days ago

                  That is beyond pedantry.

                  That is how language works. Word definitions are literally just informal consensus agreement. Dictionaries are just descriptions of observed usage. Not literally everyone needs to agree on it.
                  This isn’t some kind of independent conclusion I came to on my own; I used to think like you appear to, but then I watched some explanations from authors and from professional linguists, and they changed my mind about language prescriptivism.

                  If you say “AI” in most contexts, more people will know what you mean than if you say “LLM”. If your goal is communication, then by that measure “AI” is “more correct” (but again, correctness isn’t even applicable here)

  • r00ty@kbin.life
    24 days ago

    Now see, I like the idea of AI.

    What I don’t like are the implications, and the current reality of AI.

    I see businesses embracing AI without fully understanding its limits. They stop hiring junior developers and often fire large numbers of seniors, because they think AI, a group of cheap post-grad vibe programmers, and a handful of seasoned seniors will equal the workforce they got rid of, when AI, while very good, is not ready to sustain this. It is destroying career progression for the industry, and even if/when they realise it was a mistake, it might already have devastated the industry by then.

    I see the large tech companies tearing through the web, illegally sucking up anything they can access to pull into their ever more costly models, with zero regard for the effects on the economy, the cost to the servers they are hitting, or the environmental toll of the huge power draw that creating these models requires.

    It’s a nice idea, but private business cannot be trusted to do this right; we’re seeing how to do it wrong, live, before our eyes.

    • I have to disagree that it’s even a nice idea. The “idea” behind AI appears to be wanting a machine that thinks or works for you with (at least) the intelligence of a human being and no will or desires of its own. At its root, this is the same drive behind chattel slavery, which leads to a pretty inescapable conundrum: either AI is illusory marketing BS or it’s the rebirth of one of the worst atrocities history has ever seen. Personally, hard pass on either one.

    • SubArcticTundra@lemmy.ml
      23 days ago

      It’s a nice idea, but private business cannot be trusted to do this right, we’re seeing how to do it wrong, live before our eyes.

      You’re right. It’s the business model driving technological advancement in the 21st century that’s flawed.

    • ☂️-@lemmy.ml
      23 days ago

      i see a silver lining.

      i love IT but hate IT jobs, here’s hoping techbros just fucking destroy themselves…

    • WanderingThoughts@europe.pub
      24 days ago

      And the whole AI industry is holding up the stock market, while AI has historically always run the hype cycle and crashed into an AI winter. Stock markets do crash after billions pumped into a sector suddenly turn out to be not worth as much. Almost none of these AI companies run a profit, and they don’t have any prospect of becoming profitable. It’s when everybody starts yelling that this time it’s different that things really become dangerous.

      • merc@sh.itjust.works
        23 days ago

        and don’t have any prospect of becoming profitable

        There’s a real twist here in regards to OpenAI.

        They have some kind of weird corporate structure where OpenAI is a non-profit that owns a for-profit arm. But the deal they have with SoftBank is that they have to transition to a for-profit by the end of the year or they lose out on the $40 billion SoftBank invested. If they don’t manage to do that, SoftBank can withhold something like $20B of the $40B, which would be catastrophic for OpenAI. Transitioning to a for-profit is not something that can realistically be done by the end of the year, even if everybody agreed on that transition, and key people don’t agree on it.

        The whole bubble is going to pop soon, IMO.

      • sp3ctr4l@lemmy.dbzer0.com
        24 days ago

        Yep, exactly.

        They knew the housing/real estate bubble would pop, as it currently is…

        … So, they made one last gambit on AI as the final bubble that would magically become superintelligent and solve literally all problems.

        This would never work, and is not working, because the underlying tech of LLMs has no actual mechanism by which it could develop complex, critical, logical analysis or theorization.

        LLMs are fancy, inefficient autocomplete algos.

        That’s it.

        They achieve a simulation of knowledge via consensus, not analytic review.

        They can never be more intelligent than an average human with access to all the data they’ve … mostly illegally stolen.

        The entire bet was ‘maybe superintelligence will somehow be an emergent property, just give it more data and compute power’.

        And then they did that, and it didn’t work.

          • WanderingThoughts@europe.pub
            24 days ago

            That too is the classical hype cycle. After the trough of disillusionment, and that’s going to be a deep one from the look of things, people figure out where it can be used in a profitable way in its own niches.

            • sp3ctr4l@lemmy.dbzer0.com
              24 days ago

              … Unless its mass proliferation of shitty broken code and mis/disinformation and hyperparasocial relationships and waste of energy and water are actually such a net negative that it fundamentally undermines infrastructure and society, thus raising the necessary profit margin too high for such legit use cases to be workable in a now broken economic system.

              • someacnt@sh.itjust.works
                22 days ago

                The world revolves around the profit margin, so the current trend may even continue indefinitely… Sad.

            • Ek-Hou-Van-Braai@piefed.socialOP
              24 days ago

              Time will tell how much was just hype, and how much actually had merit. I think it will go the way of the .com bubble.

              LOTS of uses for the internet of things, but it’s still overhyped

                • Ek-Hou-Van-Braai@piefed.socialOP
                  23 days ago

                  Fair enough.

                  The dot-com bubble (late 1990s–2000) was when investors massively overvalued internet-related companies just because they had “.com” in their name, even if they had no profits or solid business plans. It burst in 2000, wiping out trillions in value.

                  The “Internet hype” bubble popped. But the Internet still has many valid uses.

          • sp3ctr4l@lemmy.dbzer0.com
            24 days ago

            I mean, I also agree with that, lol.

            There absolutely are valid use cases for this kind of ‘AI’.

            But it is very, very far from the panacea that the capital class seems to think it is.

            • Ek-Hou-Van-Braai@piefed.socialOP
              24 days ago

              When all the hype dies down, we will see where it’s actually useful. But I can bet you it will have uses; it’s been very helpful in making certain aspects of my life a lot easier. And I know many who say the same.

  • RushLana@lemmy.blahaj.zone
    24 days ago

    How dare people not like the automatic bullshit machine pushed down their throat…

    Seriously, generative AI’s accomplishments are:

    • Making mass spam easier
    • Burning the planet
    • Making people lose their jobs without even being a decent replacement
    • Making all search engines and information sources worse
    • Creating an economic bubble that will fuck up the economy even harder
    • Easing mass surveillance and weakening privacy everywhere
      • RushLana@lemmy.blahaj.zone
        23 days ago

        we should allow lead in paint, it’s easier to use /s

        You are deliberately missing my point, which is: gen AI has an enormous amount of downsides and no real-world use.

    • mechoman444@lemmy.world
      24 days ago

      Yes. AI can be used for spam, job cuts, and creepy surveillance, no argument there, but pretending it’s nothing more than a corporate scam machine is just lazy cynicism. This same “automatic BS” is helping discover life-saving drugs, diagnosing cancers earlier than some doctors, giving deaf people real-time conversations through instant transcription, translating entire languages on the fly, mapping wildfire and flood zones so first responders know exactly where to go, accelerating scientific breakthroughs from climate modeling to space exploration, and cutting out the kind of tedious grunt work that wastes millions of human hours a day. The problem isn’t that AI exists, it’s that a lot of powerful people use it selfishly and irresponsibly. Blaming the tech instead of demanding better governance is like blaming the printing press for bad propaganda.

      • kibiz0r@midwest.social
        24 days ago

        This same “automatic BS” is helping discover life-saving drugs, diagnosing cancers earlier than some doctors

        Not the same kind of AI. At all. Generative AI vendors love this motte-and-bailey.

      • atopi@piefed.blahaj.zone
        23 days ago

        Aren’t those different types of AI?

        I don’t think anyone hating AI is referring to the code that makes enemies move, or sorts things into categories

        • mechoman444@lemmy.world
          22 days ago

          LLMs aren’t artificial intelligence in any way.

          They’re extremely complex and very smart prediction engines.

          The term artificial intelligence was co-opted and hijacked for marketing purposes a long time ago.

          The kind of AI people generally expect to see is a fully autonomous, self-aware machine.

          Anyone who has used an LLM for any extended period of time will know immediately that they’re not that smart; even ChatGPT, arguably the smartest of them all, is still highly incapable.

          What we do have to come to terms with is that these LLMs do have applications, they have a function, they are useful, and they can be used in a deleterious way, just like any technology.

          • atopi@piefed.blahaj.zone
            22 days ago

            If a program that can predict prices for video games based on reviews and how many people bought it can be called AI long before 2021, LLMs can too

    • Ek-Hou-Van-Braai@piefed.socialOP
      24 days ago

      One could have said many of the same things about a lot of new technologies.

      The internet, nuclear power, rockets, airplanes, etc.

      Any new disruptive technology comes with drawbacks and can be used for evil.

      But that doesn’t mean it’s all bad, or that it doesn’t have its uses.

      • RushLana@lemmy.blahaj.zone
        24 days ago

        Give me one real-world use that is worth the downsides.

        As a dev, I can already tell you it’s not coding or anything around code. Projects get spammed with low-quality, nonsensical bug reports; AI-generated code rarely works and doesn’t integrate well (on top of pushing all the work onto the reviewer, which is already the hardest part of coding); and AI-written documentation is riddled with errors and not legible.

        And even if AI were remotely good at something, it would still be the equivalent of a microwave trying to replace the entire restaurant kitchen.

        • Ek-Hou-Van-Braai@piefed.socialOP
          24 days ago

          I can run a small LLM locally that I can talk to by voice to turn certain lights on and off, set reminders for me, play music, etc.

          There are MANY examples of LLMs being useful. They have their drawbacks just like any big technology, but saying there are no uses that are worth it is ridiculous.
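          Roughly the kind of glue I mean, as a minimal sketch (assumptions, not facts from this thread: a local model served through an Ollama-style HTTP endpoint on localhost, and a placeholder set_light() helper standing in for whatever home-automation API is actually in use):

          # Minimal sketch: route a transcribed voice command through a locally
          # hosted LLM and map the result to a smart-home action.
          # Assumed setup: an Ollama-style server on localhost:11434 and a model
          # tag like "llama3"; set_light() is a placeholder, not a real API.
          import json
          import requests

          OLLAMA_URL = "http://localhost:11434/api/generate"

          def classify_command(text: str) -> dict:
              """Ask the local model to turn free-form speech into a structured action."""
              prompt = (
                  "Convert this smart-home command into JSON with keys 'action' "
                  "(one of 'lights_on', 'lights_off', 'none') and 'room'.\n"
                  f"Command: {text}\nJSON:"
              )
              resp = requests.post(
                  OLLAMA_URL,
                  json={"model": "llama3", "prompt": prompt, "stream": False},
                  timeout=60,
              )
              resp.raise_for_status()
              # The model's text reply is expected to be a JSON object.
              return json.loads(resp.json()["response"])

          def set_light(room: str, on: bool) -> None:
              """Placeholder for whatever home-automation integration is in use."""
              print(f"{'Turning on' if on else 'Turning off'} the lights in {room}")

          if __name__ == "__main__":
              action = classify_command("turn off the bedroom lights")
              if action.get("action") == "lights_on":
                  set_light(action.get("room", "living room"), True)
              elif action.get("action") == "lights_off":
                  set_light(action.get("room", "bedroom"), False)

          The speech-to-text step (whisper.cpp or similar) would feed into classify_command(); the point is just that all of it stays on my own hardware.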

          • Rampsquatch@sh.itjust.works
            23 days ago

            I can run a small LLM locally that I can talk to by voice to turn certain lights on and off, set reminders for me, play music, etc.

            Neat trick, but it’s not worth the headache of setup when you can do all that by getting off your chair and pushing buttons. Hell, you don’t even have to get off your chair! A cellphone can do all that already, and you don’t even need voice commands to do it.

            Are you able to give any actual examples of a good use of an LLM?

            • Ek-Hou-Van-Braai@piefed.socialOP
              23 days ago

              Like it or not, that is an actual example.

              I can lie in my bed and turn off the lights or turn on certain music without touching my phone.

              I could ask if I remembered to lock the front door etc.

              But okay, I’ll play your game, let’s pretend that doesn’t count.

              I can use my local AI to draft documents or emails, speeding up the process a lot.

              Or I can use it to translate.

              • Rampsquatch@sh.itjust.works
                23 days ago

                If you want to live your life like that, go for it, that’s your choice. But I don’t think those applications are worth the cost of running an LLM. To be honest I find it frivolous.

                I’m not against LLMs as a concept, but the way they get shoved into everything without thought and without an “AI” free option is absurd. There are good reasons why people have a knee-jerk anti-AI reaction, even if they can’t articulate it themselves.

                • Ek-Hou-Van-Braai@piefed.socialOP
                  23 days ago

                  It’s not expensive for me to run a local LLM; I just use the hardware I’m already using for gaming. Electricity is cheap, and most people with a gaming PC probably use more electricity gaming than they would running their own LLM and asking it some questions.

                  I’m also against shoving AI into everything without making it opt-in. I’m also worried about privacy and concentration of power, etc.

                  But just outright saying LLMs are bad is ridiculous.

                  And saying there is no good reason to use them is ridiculous. Can we stop doing that?

          • RushLana@lemmy.blahaj.zone
            23 days ago

            But we could do voice assistants well before LLMs (look at Siri), and without setting everything on fire.

            And seriously, I asked for something that’s worth all the downsides and you bring up Clippy 2.0???

            Where are the MANY examples? Why are LLM/gen-AI companies burning money? Where are the companies making use of the supposedly many uses?

            I genuinely want to understand.

            • Ek-Hou-Van-Braai@piefed.socialOP
              23 days ago

              You asked for one example, I gave you one.

              It’s not just voice; I can ask it complex questions and it can understand context and turn on lights or close blinds based on that context.

              I find it very useful, with no real drawbacks.

              • JcbAzPx@lemmy.world
                22 days ago

                The fact that that was the best you could come up with is far more damning than not even having one.

              • RushLana@lemmy.blahaj.zone
                23 days ago

                I asked for an example that makes up for the downsides everyone has to pay.

                So, no! A better shutter-puller or a maybe marginally better voice assistant is not gonna cut it. And again, that’s stuff Siri and home-automation tools were able to do since 2014 at a minimum.

                • Ek-Hou-Van-Braai@piefed.socialOP
                  23 days ago

                  Siri has privacy issues, and only works when connected to the internet.

                  What are the downsides of me running my own local LLM? I’ve named many benefits, privacy being one of them.

          • PeriodicallyPedantic@lemmy.ca
            23 days ago

            That’s like saying “asbestos has some good uses, so we should just give every household a big pile of it without any training or PPE”

            It doesn’t matter that it has some good uses and that later we went “oops, maybe let’s only give it to experts to use”. The harm has already been done by eager supporters, intentional or not.

              • PeriodicallyPedantic@lemmy.ca
                22 days ago

                It’s not a strawman, it’s hyperbole.

                There are serious known harms and we suspect that there are more.
                There are known ethical issues, and there may be more.
                There are few known benefits, but we suspect that there are more.

                Do we just knowingly subject untrained people to harm just to see if there are a few more positive use cases, and to make shareholders a bit more money?
                How does their argument differ from that?

      • PeriodicallyPedantic@lemmy.ca
        23 days ago

        Of those, only the internet was turned loose on an unsuspecting public, and they had decades of the faucet slowly being opened, to prepare.

        Can you imagine if, after WW2, Wernher von Braun came to the USA and then just, like… gave every man, woman, and child a rocket, with no training? Good and evil wouldn’t even come into it; it’d be chaos and destruction.

        Imagine if every household got a nuclear reactor to power it, but none of the people in the household got any training in how to care for it.

        It’s not a matter of good and evil, it’s a matter of harm.

        • Ek-Hou-Van-Braai@piefed.socialOP
          23 days ago

          The Internet kind of was turned loose on an unsuspecting public. Social media has caused, and is still causing, a lot of harm.

          Did you really compare every household having a nuclear reactor with people having access to AI?

          How is that even remotely a fair comparison?

          To me the Internet being released on people and AI being released on people is more of a fair comparison.

          Both can do lots of harm and good, both will probably cost a lot of people their jobs etc.

          • PeriodicallyPedantic@lemmy.ca
            23 days ago

            You know that the public got trickle-fed the internet for decades before it was ubiquitous in everyone’s house, and then another decade before it was ubiquitous in everyone’s pocket. People had literal decades to learn how to protect themselves and for the job market to adjust. During that time, there was lots of research and information on how to protect yourself, and although regulation mostly failed to do anything, the learning material was adapted for all ages and was promoted.

            Meanwhile, LLMs are at least as impactful as the internet, and were released to the public almost without notice. Research on their effects is only being done now that it’s already too late, and the public doesn’t have any tools to protect itself. What meager material on appropriate use exists hasn’t been well researched or adapted to all ages, when it isn’t being presented as “the insane thoughts of doomer Luddites, not to be taken seriously” by the AI supporters.

            The point is that people are being handed this catastrophically dangerous tool, without any training or even research into what the training should be. And we expect everything to be fine just because the tool is easy to use and convenient?

            These companies are being allowed to bulldoze not just the economy but the mental resilience of entire generations, for the sake of a bit of shareholder profit.

    • Ek-Hou-Van-Braai@piefed.socialOP
      24 days ago

      He’s made the world wake up to the fact that they can’t trust the US, so that can be seen as good?

      AI isn’t that black and white, just like any big technology it can be used for good or bad.

      Just like Airplanes

        • Ek-Hou-Van-Braai@piefed.socialOP
          24 days ago

          I used that comparison a total of two times (and might use it more); how about refuting my argument instead of getting mad at me for using a good comparison twice?

          Airplanes emit SHITLOADS of carbon into the atmosphere, and they have directly caused the deaths of tens of thousands of people. Airplanes are heavily used in war and to spy on people. Airplanes are literally used to spray pesticides and other chemicals into the air, etc. They can mostly just be used by the rich, etc.

          Just like with AI, there are many reasons airplanes are bad, that doesn’t mean we should get rid of them.

          • Ifera@lemmy.world
            23 days ago

            A based point of view. Bravo, my dear. Do you know how rare that is? People in here love to think of themselves as free thinkers, when a lot of them are, in reality, reactionary at best.

            Same for citing renting, landlords, and AI. They are disgustingly evil when used for profit, but they also have their uses. In another comment that I’m sure will be downvoted to hell, if not outright buried, I mention the uses of GenAI for translation, text simplification, summarization, and studying, yet people treat “AI = BAD” as a thought-terminating cliché.

    • frog@feddit.uk
      24 days ago

      Yeah. I hate the naming of it too. It’s not AI in the sense that science fiction saw it. History repeats itself in the name of marketing. I’m still very annoyed with these marketers destroying the term “hoverboard”.

        • frog@feddit.uk
          23 days ago

          There is a distinction between video game AI and computer science AI. People know that video game AI isn’t really AI. The way LLMs are marketed, with terms like “super intelligence”, is deception.

          • atopi@piefed.blahaj.zone
            23 days ago

            The broadest definition of AI I have found includes both video game AI and machine learning.

            The point I was trying to make was that LLMs are still AI, even if their marketing is misleading.

  • Tetsuo@jlai.lu
    24 days ago

    An AI could be demonstrably 30 times more accurate than a human at diagnosing a cancer on a scan, and Lemmy would still shit on it because it’s an AI :D.

    On Reddit I knew that the subject of gun control was not allowed to be talked about. Now I’ve embraced Lemmy and I can’t talk about AI no matter what. It’s just a taboo subject. Apparently some people want to reject the tech entirely and think it will somehow just magically stay out of their lives. A very naive dream.

    So yeah Lemmy. Refuse the conversation, look away, I’m sure it will be fine.

    • PeriodicallyPedantic@lemmy.ca
      23 days ago

      Think about your argument for a minute.

      I know you think this will harm you and everyone you know, but it’ll be much better if you just stay quiet instead of vocally opposing it

      When has that ever been good advice?

      • Tetsuo@jlai.lu
        23 days ago

        So everything related to AI is negative?

        If so, do you understand why we can’t have any conversation on the subject?

        • PeriodicallyPedantic@lemmy.ca
          23 days ago

          Did I say that?
          Show me the place where I said that. Show it to me.
          Come on. Show me the place where I said everything related to AI is negative. Show me even a place where you could reasonably construe that’s what I meant.

          If you’re talking about why we can’t have a conversation, take a long hard look in the fucking mirror you goddamn hypocrite.

          • Tetsuo@jlai.lu
            22 days ago

            First you should chill a bit.

            I know you think this will harm you and everyone you know,

            So this thing will harm you. But you are not describing AI as only negative?

            You just say AI will harm you and then expect people to assume you have positive things to say about AI.

            Where is the nuance toward AI in your comment? Please show it to me.

            • PeriodicallyPedantic@lemmy.ca
              22 days ago

              You know that things can both harm and benefit you, right? That’s the whole idea behind the idiom “the pros outweigh the cons”.

              If someone is making an argument about the cons of a thing, it’s insane to expect them to also list off a bunch of unrelated pros, and likewise it’s unreasonable to assume from that that they don’t believe in the existence of any pros.

              I think that LLMs cause significant harm, and we don’t have any harm mitigation in place to protect us. In light of the serious potential for widespread harm, the pros (of which there are some) don’t really matter until we make serious progress in reducing the potential for harm.

              I shouldn’t need this degree of nuance. People shouldn’t need to get warnings in the form of a short novel full of couched language. I’m not the only person in this conversation; the proponents are already presenting the pros. And people should be able to understand that.

              When people were fighting against leaded gasoline, they shouldn’t have needed to say “yes, it makes cars more fuel efficient and prevents potentially damaging engine knock, thereby reducing average maintenance costs” every time they spoke about the harms. It is unreasonable to say that they were harming discourse by not acknowledging the benefits every time they cautioned against its use.

              I don’t believe that you’re making a genuine argument; I believe you’re trying to stifle criticism by shifting the responsibility for nuance from its rightful place, in the hands of the people selling and supporting a product with the potential for harm, onto the critics.

              • porksnort@slrpnk.net
                18 days ago

                I have to agree here. Injecting ‘nuance’ is an easy way to derail a discussion so that the obvious harms of a thing get obscured. The discussion devolves into emotional reactions to some aspect of the ‘nuance’ and the original point is lost. And nothing changes, which suits the powers that be just fine.

                Nuance is a powerful tool for maintaining the status quo by disrupting the conversation. Leave the nuance to the academics.

                Effective messaging campaigns require message discipline and dead simple provocative points repeated endlessly for a generation or two to effect change, usually.

              • Tetsuo@jlai.lu
                22 days ago

                It’s just as insane for you to expect us to read between the lines of a four-line comment that only presents AI as harmful.

                If you want to say that AI is both good and bad, that’s fine, but then say that. It doesn’t have to be a book about it. You could have just said AI can be harmful or good, and that’s fine. Don’t act as if I’m asking for something unreasonable. You said one negative thing in a four-line comment and feel personally attacked that we are unable to guess what positive things you see in AI.

          • chunes@lemmy.world
            22 days ago

            lol, you literally put words in the original commenter’s mouth:

            I know you think this will harm you and everyone you know, but it’ll be much better if you just stay quiet instead of vocally opposing it

            and now you’re incredulous about something similar being done to you? lame

    • Sentient Loom@sh.itjust.works
      24 days ago

      An AI could be demonstrably 30 times more accurate than a human at diagnosing a cancer on a scan, and Lemmy would still shit on it because it’s an AI

      I think this is an exaggeration.

    • mechoman444@lemmy.world
      24 days ago

      That’s because platforms like Lemmy and Reddit utilize the bandwagon effect. The upvote/downvote system is inherently flawed because there is no accountability as to why one votes the way they do.

      In this particular case people are just ignorant as to how these new technologies function; for example, they continue to call them AI when they’re not AI, they’re LLMs… They have no clue how the technology functions or how it should function, and simply go by whatever they read in their feed, which on Lemmy, as you know, is nothing good.

      • occultist8128@infosec.pub
        24 days ago

        In this particular case people are just ignorant as to how these new technologies function; for example, they continue to call them AI when they’re not AI, they’re LLMs

        You’re my people 👏

    • grrgyle@slrpnk.net
      24 days ago

      Legitimately useful applications, like in the medical field, are actually brought up as examples of the “right kind” of use case for this technology.

      Over and over again.

      It’s kind of annoying, because both the haters of commercial LLMs in All The Things and defenders of the same will bring up these exact same use cases as examples of good AI use.

      • Tetsuo@jlai.lu
        24 days ago

        May I ask for a link? I’ve never seen that in the communities I follow. Never. Or at least not above 5 downvotes.

        • grrgyle@slrpnk.net
          24 days ago

          I’ll keep an eye out, but I don’t have votes visible, so I can only really tell sentiment from comments.

          As an aside, I highly recommend hiding vote counts. They’re even more pointless here than they were on Reddit. They’re meaningless noise on the frontend.

    • grrgyle@slrpnk.net
      24 days ago

      I prefer the fine vintage of a M$ = bad post, myself.

      Or perhaps even a spicy little Ubuntu = bad post.