I’ve found that AI has done literally nothing to improve my life in any way and has really just caused endless frustrations. From the enshittification of journalism to ruining pretty much all tech support and customer service, what is the point of this shit?

I work on the Salesforce platform, and now I have their dumbass account managers harassing my team to buy into their stupid AI customer service agents. Really, the only AI highlight I have seen is the guy who made the tool to spam job applications to combat worthless AI job recruiters and HR tools.

  • finitebanjo@lemmy.world

    I abhor it and I think anybody who does actually like it is using it unethically: for art (which they intend to profit off of), for writing papers or articles, and for writing bad code.

    • Zozano@lemy.lol

      I use it when I get stoned with my mates and think of funny shit to generate.

        • Zozano@lemy.lol

          I meant to reply to you, to illustrate that it’s not always unethical; that’s a point you raised, not OP.

          • finitebanjo@lemmy.world

            It is always unethical and I made it very clear that people like you are people I dislike, so stop trolling and ragebait elsewhere.

            • Zozano@lemy.lol

              I’m not trolling. Why is it unethical for me and my buddies to generate images of a duck with the head of an elephant?

                • Zozano@lemy.lol

                  For the sake of brevity, I’m just going to agree that I’m sea-lioning.

                  Now, explain how what I’m doing is unethical.

                  Do you think it’s unethical because it uses stolen art?

                  If so, I don’t think there’s an issue because I’m not publishing anything I generate.

                  Do you think it’s unethical because of the electricity usage?

                  If so, you could make the same argument about any frivolous activity that uses electricity.

                  I really don’t know of a compelling reason besides these two that would raise a red flag for you.

          • finitebanjo@lemmy.world

            Is this discussion? I added my answer onto the pile for OP’s question. I said I dislike a thing and everyone who uses it, never at any point expressing any uncertainty or confusion on the matter. Then user b said they use it and explained how. If anything they seemed to want to be insulted, and in that sense I was quite nice about it.

            • drake@lemmy.sdf.org

              Clearly, their intent was to provide an example of a relatively harmless use of AI as a way of demonstrating to you that your position may have been a bit reductive.

              Your reaction, of behaving like, let’s be honest, a bit of an asshole, wasn’t really warranted.

              • finitebanjo@lemmy.world

                I tried really hard not to engage with this obvious bait but I guess you really want it.

                If you require AI to be amused while high, then congratulations, you’re the most disgruntled creature on the face of this earth. Too bad you’re still paying for a business that samples works without permission and without crediting authors.

                Shit counterpoint by a shit person.

                • drake@lemmy.sdf.org

                  First of all, most AI tools have some free tier. I doubt the other commenter paid a penny.

                  Also, just because they did it doesn’t mean they “required” it… I’ve laughed at cat videos before; that doesn’t mean I require cat videos to be amused.

    • drake@lemmy.sdf.org

      I think that you’re right: with the way that our society is structured, it is unethical. It’s essentially the world’s most advanced plagiarism tool.

      However, being realistic, even if no private individual ever used it, it would still exist and would be used by corporations for profit maximising.

      In my opinion, telling people that they’re bad people for using something which is made unethically isn’t really helpful. For example, smartphones aren’t made ethically, but the way to get that to change isn’t to change consumer habits - because we know that just doesn’t work - it’s to get organised, as a collective working class, and take action into our own hands.

      • finitebanjo@lemmy.world

        Corpos are currently shooting themselves in the foot by trying to sell an essentially useless product which only lowers the quality of everything it touches.

        I’m sure someday it will replace the press-a-number phone menus, at the cost of accessibility, but otherwise I cannot imagine it “maximising profits”.

        • drake@lemmy.sdf.org

          Can you seriously not imagine how a corporation could benefit from generative AI, or are you just being obstinate and saying it’s useless because you think it’s unethical and you hope that by saying it’s useless that you can effectively manifest that?

          Because there are plenty of use-cases for generative AI. None of them have to be good, or even products. Your phone machine example is a good one - it’s not a product, really, it’s taking the role of a human to fulfil some obligation, or to intentionally make it harder for people to add to the company’s support burden.

          I think there are some useful applications for generative AI, but I do agree that the incarnations we have are unethical. And again, I really don’t think that simply telling people that they’re bad people for using it is going to win them over to your side.

      • ClamDrinker@lemmy.world

        Totally second the latter part - it’s the self destructive nature of being blindly anti-AI. Pretty much everyone would support giving more rights and benefits to people displaced by AI, but only a fraction of that group would support an anti-AI mentality. If you want to work against the negative effects of AI in a way that can actually change things, the solution is not to push against the wall closing in on you, but to find the escape.

  • AdrianTheFrog@lemmy.world

    I find ChatGPT useful for getting my server to work (since I’m pretty new to Linux).

    Other than that, I check in on how local image models are doing around once every couple of months. I would say you can achieve some cool stuff with it, but not really any unusual stuff.
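
    For anyone curious what “checking in on local image models” looks like in practice, here’s a minimal sketch using the Hugging Face diffusers library (the model name and prompt are just examples, and you’d need a GPU with enough VRAM):

        # Minimal local text-to-image sketch using diffusers.
        # Assumes: pip install diffusers transformers torch, plus a CUDA-capable GPU.
        import torch
        from diffusers import StableDiffusionPipeline

        # Example checkpoint; any locally downloaded model works the same way.
        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",
            torch_dtype=torch.float16,
        )
        pipe = pipe.to("cuda")

        # Generate one image from a text prompt and save it to disk.
        image = pipe("an abstract watercolor landscape at dusk").images[0]
        image.save("output.png")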

  • Björn Tantau@swg-empire.de

    AI is used extensively in science to sift through gigantic data sets. Mechanical turk programs like Galaxy Zoo are used to train the algorithm. And scientists can use it to look at everything in more detail.

    Apart from that AI is just plain fun to play around with. And with the rapid advancements it will probably keep getting more fun.

    Personally I hope to one day have an easy and quick way to sort all the images I have taken over the years. I probably only need a GPU in my server for that one.
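
    For the image-sorting idea, something like zero-shot tagging with CLIP would probably get most of the way there. A rough sketch (the model name, labels, and file name are just examples, not a finished tool):

        # Rough sketch: pick the best-matching zero-shot label for a photo using CLIP,
        # so photos could be sorted into folders. Assumes: pip install transformers pillow torch.
        from PIL import Image
        from transformers import CLIPModel, CLIPProcessor

        model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

        # Candidate categories to sort into; purely illustrative.
        labels = ["a photo of people", "a landscape photo", "a photo of food", "a screenshot"]

        def best_label(path: str) -> str:
            image = Image.open(path)
            inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
            # logits_per_image scores the image against each label; softmax turns them into probabilities.
            probs = model(**inputs).logits_per_image.softmax(dim=1)
            return labels[probs.argmax().item()]

        print(best_label("IMG_0042.jpg"))  # hypothetical file name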

  • JohnDClay@sh.itjust.works

    I thought it was pretty fun to play around with making lyrics and rap battles with friends, but I haven’t found a particularly useful use case.

    • grubbyweasel@sh.itjust.works

      I like asking ChatGPT for movie recommendations. Sometimes it makes some shit up, but it usually comes through; I’ve already watched a few flicks I really like that I never would’ve heard of otherwise.

    • Mac@mander.xyz

      ChatGPT enabled me to automate a small portion of my former job. So that was nice.

    • ccp@lemy.lol

      I tried to give it a fair shake at this, but it didn’t quite cut it for my purposes. I might be pushing it out of its wheelhouse though. My problem is that, while it can rhyme more or less adequately, it seems to have trouble with meter, and when I do this kind of thing, it revolves around rhyme/meter perfectionism.

      Of course, if I were trying to actually get something done with it instead of just seeing if it’ll come up with something accidentally cool, it would be reasonable to take what it manages to do and refine it. I do understand to some extent how LLMs work, in terms of what tokens are and why this means it can’t play Wordle, etc., and I can imagine this also has something to do with why it’s bad at tightly lining up syllable counts and stress patterns.

      That said, I’ve had LLMs come up with some pretty dank shit when given the chance: https://vgy.me/album/EJ3yPvM0

      Most of it is either the LLMs shitting themselves or GPT doing that masturbatory optimism thing. Da Vinci’s “Suspicious mind…” in the second image is a little bit heavyish though. And those last two (“Gangsterland” and “My name is B-Rabbit, I’m down with M.C.s, and I’m on the microphone spittin’ hot shit”) are god damn funny.

  • CarbonIceDragon@pawb.social

    There’s someone I sometimes encounter in a Discord I’m in who makes a hobby of doing stuff with them (from what I gather seeing it, they do more with it than just asking for a prompt and leaving it at that, at least partly because it doesn’t generally give them something they’re happy with initially, and they end up having to ask the thing to edit specific bits of it in different ways over and over until it does). I don’t really understand exactly what this entails, as what they seem to most like making it do is code “shaders” for them that create unrecognizable abstract patterns, but they spend a lot of time talking at length about the technical parameters of various models and what they like and don’t like about them, so I assume the guy must find something enjoyable in it all. That being said, using it as a sort of strange toy isn’t really the most useful use case.

  • Venicon@lemmy.world

    I have found ChatGPT to be better than Google for random questions I have and for asking general advice on a whole bunch of things, though I still know when to go to other sources. I also use it to extrapolate data, come up with scheduling for work (I organise some volunteer shifts), and lots of Excel formulae.

    • lemmylommy@lemmy.world

      Sometimes it’s easier to check ChatGPT’s answers, ask follow up questions, look at the sources it provides and live with the occasional hallucinations than to sift through the garbage pile that google search has become.

  • HubertManne@moist.catsweat.com

    Even before AI, the corps have been following a strategy of understaffing with the idea that software will make up for it, and it hasn’t. It’s beyond the pale, the work I have to do now for almost anything related to the private sector (working as their customer, not as an employee).

    • Catoblepas@lemmy.blahaj.zone

      Might want to rethink the summarization part.

      AI also hasn’t made any huge improvements in machine translation AFAIK. Translators still get hired because AI can’t do the job as well.

      • xep@fedia.io

        Thank you for pointing that out. I don’t use it for anything critical, and it’s been very useful because Kagi’s summarizer works on things like YouTube videos that friends link, which I don’t care enough to watch. I speak the language pair I use DeepL on, but DeepL often writes more natively than I can. In my anecdotal experience, LLMs have greatly improved the quality of machine translation.
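
        DeepL can also be scripted through its official Python client, which is handy for batches of text; a minimal sketch, assuming you have an API key (the key and sentence below are placeholders):

            # Minimal DeepL translation sketch. Assumes: pip install deepl
            # plus a DeepL API key (a free tier exists).
            import deepl

            translator = deepl.Translator("YOUR_AUTH_KEY")  # placeholder key
            result = translator.translate_text(
                "Das ist nur ein Beispielsatz.",  # example sentence
                target_lang="EN-US",
            )
            print(result.text)  # translated text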

      • chicken@lemmy.dbzer0.com

        The AI summaries were judged significantly weaker across all five metrics used by the evaluators, including coherency/consistency, length, and focus on ASIC references. Across the five documents, the AI summaries scored an average total of seven points (on ASIC’s five-category, 15-point scale), compared to 12.2 points for the human summaries.

        The focus on the (now-outdated) Llama2-70B also means that “the results do not necessarily reflect how other models may perform” the authors warn.

        to assess the capability of Generative AI (Gen AI) to summarise a sample of public submissions made to an external Parliamentary Joint Committee inquiry, looking into audit and consultancy firms

        In the final assessment ASIC assessors generally agreed that AI outputs could potentially create more work if used (in current state), due to the need to fact check outputs, or because the original source material actually presented information better. The assessments showed that one of the most significant issues with the model was its limited ability to pick-up the nuance or context required to analyse submissions.

        The duration of the PoC was relatively short and allowed limited time for optimisation of the LLM.

        So basically this study concludes that Llama2-70B with basic prompting is not as good as humans at summarizing documents submitted to the Australian government by businesses, and its summaries are not good enough to be useful for that purpose. But there are some pretty significant caveats here, most notably the relative weakness of the model they used (I like Llama2-70B because I can run it locally on my computer but it’s definitely a lot dumber than ChatGPT), and how summarization of government/business documents is likely a harder and less forgiving task than some other things you might want a generated summary of.
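
        For reference, “running Llama2-70B locally” for a summarization task roughly means something like the sketch below, using llama-cpp-python with a quantized GGUF checkpoint (the file names are just examples, and a 70B model needs a lot of RAM or VRAM):

            # Rough sketch of local document summarization with llama-cpp-python.
            # Assumes: pip install llama-cpp-python and a quantized GGUF model file on disk.
            from llama_cpp import Llama

            llm = Llama(
                model_path="./llama-2-70b-chat.Q4_K_M.gguf",  # example quantized checkpoint
                n_ctx=4096,  # context window; the document has to fit in here
            )

            document = open("submission.txt").read()  # hypothetical input file

            response = llm.create_chat_completion(
                messages=[
                    {"role": "system", "content": "Summarize the document in five bullet points."},
                    {"role": "user", "content": document},
                ],
                max_tokens=512,
            )
            print(response["choices"][0]["message"]["content"])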

      • xep@fedia.io

        The services I use, Kagi’s autosummarizer and DeepL, haven’t done that when I’ve checked. The downside of the summarizer is that it might remove some subtle things sometimes that I’d have liked it to keep. I imagine that would occur if I had a human summarize too, though. DeepL has been very accurate.

        • theunknownmuncher@lemmy.world

          LLMs are especially bad at summarization for the use case of presenting search results. The source is just as critical a piece of information for search as the information itself, and LLMs obfuscate this critical source information and combine results from multiple sources together…

      • theunknownmuncher@lemmy.world

        Downvoters need to read some peer-reviewed studies and not lap up whatever BS comes from OpenAI, who are selling you a bogus product lmao. I too was excited for the summarization use case of AI when LLMs were the new shiny toy, until people actually started testing it and got a big reality check.

  • Fubarberry@sopuli.xyz

    Playing with it on my own computer, locally hosting it and running it offline, has been pretty cool. I find it really impressive when it’s something open source and community driven. I also think there are a lot of useful applications for problems that aren’t solvable with traditional programming.

    However a lot of the pushed corporate AI feels not that useful, and there’s something about it that really rubs me the wrong way.

  • m-p{3}@lemmy.ca

    Personally I use it when I can’t easily find an answer online. I still keep some skepticism about the answers given until I find other sources to corroborate, but in a pinch it works well.

    • empireOfLove2@lemmy.dbzer0.com

      Because of the way it’s trained on internet data, large models like ChatGPT can actually work pretty well as a sort of first-line search engine. My girlfriend uses it like that all the time, especially for obscure stuff in one of her legal classes; it can bring up the right details to point you towards googling the correct document rather than muddling through really shitty library case-page searches.
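
      If you wanted to script that first-line-search pattern instead of using the chat UI, a minimal sketch with the OpenAI Python client might look like this (the model name and question are just examples, and the output is only a set of leads to verify, not an answer):

          # Minimal sketch: use a chat model as a first pass that suggests search leads.
          # Assumes: pip install openai and the OPENAI_API_KEY environment variable set.
          from openai import OpenAI

          client = OpenAI()

          question = "Which cases established the duty of care in negligence? Give names I can look up."  # example query

          response = client.chat.completions.create(
              model="gpt-4o-mini",  # example model name
              messages=[
                  {"role": "system", "content": "Suggest sources and exact search terms; say when you are unsure."},
                  {"role": "user", "content": question},
              ],
          )
          print(response.choices[0].message.content)  # leads to check against real sources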

  • mlegstrong@sh.itjust.works

    I used it a decent amount at my last job to write test reports that had a lot of similar text with minor changes.
    I also use it for D&D to help me quickly make the outlines of side characters & flesh out my world.

  • Xtallll@lemmy.blahaj.zone

    I used to spend one month a year where all I did was write performance reports on the people I supervise. Now I put the facts in, let AI write the first draft, do some editing, and I’m done in a week.

    • mPony@lemmy.world

      I think this speaks more to the usefulness of performance reports than the usefulness of GenAI.

  • tomjuggler@lemmy.world

    Boilerplate code (the stuff you usually have to copy from GitHub anyway) and summarising long, boring articles. That’s the use case for me. Other than that I agree - and having done AI service-agent coding myself for fun, I can seriously say that I would not trust it to run a business service without a human in the loop.