• andallthat@lemmy.world · 30 days ago

    As a paid, captive squirrel, focused on spinning my workout wheel and getting my nuts at the end of the day, I hate that AI is mostly a (very expensive) solution in search of a problem. I am being told “you must use AI, find a way to use it”, but my AI successes are few and mostly non-repeatable (my current AI use case: try it once for non-vital, non-time-sensitive stuff; if at first you don’t succeed, just give up; if you do succeed, you’ve saved some time for more important stuff).

    If I try to think like a CEO or an entrepreneur, though, I sort of see where these people might be coming from. They see AI as the new “internet”: something that, for good or bad, is getting ingrained in everything we do, and that could bankrupt your company if you try too hard to do things “the new way”, but could also let it quickly fade into irrelevance if you keep doing things the old way.

    It’s easy, with the benefit of hindsight, to say “haha, Blockbuster could have bought Netflix for $50 million and now they are out of business”, but all the people who watched that happen see AI as the next disruptive technology that can spell great success or complete doom for their current businesses. All hype? Maybe. But if I were a CEO I’d probably be sweating too (and having a couple of VPs at my company wipe up the sweat with dollar bills).

      • zarkanian@sh.itjust.works · 29 days ago

        There are use cases for AI. There are none for NFTs.

        One use case is whenever you need to produce some inane bullshit that nobody is probably going to read anyway, but it’s still required for some reason. Like cover letters.

        Now, you might argue that we should work towards a society where we don’t have to produce this inane bullshit that nobody’s going to read anyway, and I would agree with you. But as long as we’re here, we might as well offload this pointless labor onto a pointless labor-saving machine.

        • deathbird@mander.xyz · 29 days ago

          Kindly disagree. People actually read cover letters, and a cryptographically secured entry on a public ledger has some conceivable use.

          • GnuLinuxDude@lemmy.ml · 29 days ago

            So much spam… the internet is hardly usable after a decade of SEO, and now with LLMs sprinkled on top.

    • willington@lemmy.dbzer0.com · 29 days ago

      My use case for AI is to get it to tell me water-to-cereal ratios, like for rice, oatmeal, or corn meal. If there is a mistake, I can easily control for it, and it’s a decent enough starting point.

      That said, I am just being lazy by avoiding taking my own notes. I can easily make my own list of water to cereal ratios to hang on the fridge.
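      For what it’s worth, that fridge list doesn’t need an LLM at all. As a rough sketch (the numbers are approximate, taken from typical package directions, and they vary by grain and preferred texture), it could be as simple as:

      ```python
      # Rough water-to-grain ratios: cups of water per 1 cup of dry grain.
      # Approximate values; adjust to taste and to the specific product.
      RATIOS = {
          "white rice": 2.0,
          "rolled oats": 2.0,
          "cornmeal (polenta)": 4.0,
      }

      def water_needed(grain: str, dry_cups: float) -> float:
          """Return cups of water for the given amount of dry grain."""
          return RATIOS[grain] * dry_cups

      print(water_needed("white rice", 1.5))  # 3.0 cups of water
      ```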

      • zarkanian@sh.itjust.works · 29 days ago

        Yeah, so far all of the cooking stuff I’ve gotten from ChatGPT has been something I could have found on my own if I had searched better. It will give you a recipe that is edible. It will have the same 4-6 spices as every other recipe, and it will require a can of tomatoes. These are all savory dishes; I assume there’s a different set of spices and no tomatoes if it’s sweet, but I haven’t tested that theory.

        It gets old quick.

    • Kissaki@feddit.org · 29 days ago

      I’m working in a small software development company. We’re exploring AI. It’s not being pushed without foundation.

      There’s no need to commit when you don’t even know what you’re committing to, never mind the cost and risk. It just doesn’t make sense. We should expect better from CEOs than emotionally chasing a fear of missing out without a reasonable assessment.

  • mctoasterson@reddthat.com · 28 days ago

    “It enabled us to shit out products in 4 days.”

    Glad they incorporated such thorough testing in their process.

  • doctortofu@piefed.social · 30 days ago

    “You have to use AI!” “For what?” “I dunno, figure it out or you’re fired!” <- a genius businessman, apparently…

    This blind lemming-like rush towards AI that so many CEOs seem to suffer from seriously resembles cult behavior or severe drug addiction, my god…

    • 5too@lemmy.world · 29 days ago

      AI will now supplement all interactions with the genius businessman

    • HejMedDig@feddit.dk · 30 days ago

      They are so hung up on replacing employees with AI, but they don’t know how, so they force the employees to use AI in the hope that the employees will teach the AI how to replace them.

    • PattyMcB@lemmy.world · 29 days ago

      Off-topic, but not-so-fun fact: lemmings don’t actually follow each other off cliffs in mass suicide events. The people filming the documentary actually scared and chased them to get them to panic and do that.

      Horrible, I know

      • doctortofu@piefed.social · 29 days ago

        True and completely right, but I lack the vocabulary to replace it with something more accurate and still evocative enough :)

    • justOnePersistentKbinPlease@fedia.io · 30 days ago

      AI always tells them what they want to hear and will make up sources ad infinitum. So unless you step outside of that bubble and search on your own, you would never know.

  • Dekkia@this.doesnotcut.it · 30 days ago

    CEO of enterprise-software powerhouse IgniteTech.

    Can someone tell me what they do? They don’t have a Wikipedia article, and their website is mostly AI slop.

      • chaosCruiser@futurology.today · 30 days ago

        After grilling their silly LLM for a while, I was able to squeeze out what that company is really all about. They don’t really make anything. They just buy miscellaneous software companies and turn those apps into subscription-based cloud cancer. Enterprise software meets maximum enshittification, yeah baby!

          • anomnom@sh.itjust.works · 29 days ago

            I think the only net positive is bankruptcy, as long as they don’t stiff legitimate creditors.

            • NaibofTabr@infosec.pub · 29 days ago

              No don’t you see - fewer employees means there’s less of anything getting done, and this company is just a parasite that produces nothing of value.

    • SkunkWorkz@lemmy.world · 30 days ago

      Enterprise software is basically software tooling that aids in running a large organization. When a company has grown too big to use Excel to manage its information streams, it buys such software.

  • Zwuzelmaus@feddit.org · 30 days ago

    replacing nearly 80% of staff

    So the title seems a little misleading. Maybe even clickbaity? 😉

  • deathbird@mander.xyz · 29 days ago

    “Vaughan was surprised to find it was often the technical staff, not marketing or sales, who dug in their heels.”

    So the people who understood it best were sceptical, and this didn’t give him pause.

    Can someone explain to me why all these empty suits dick ride LLMs so hard?

    • Benaaasaaas@group.lt · 29 days ago

      Because they try the tools, realize that their job is pretty much covered by LLMs and think it’s the same for everyone.

    • MysteriousSophon21@lemmy.world · 29 days ago

      Technical staff were skeptical because they actually know what AI can and can’t do reliably in production environments: it’s good at generating content but terrible at logical reasoning and mission-critical tasks that require consistency.

      • medem@lemmy.wtf · 29 days ago

        …which is why I categorically refuse to use the term “artificial intelligence”.

      • End-Stage-Ligma@lemmy.world · 29 days ago

        it’s good at generating content but terrible at logical reasoning and mission-critical tasks that require consistency.

        Thank goodness nobody is crusading to have AI take over medicine.

    • zarkanian@sh.itjust.works · 29 days ago

      Can someone explain to me why all these empty suits dick ride LLMs so hard?

      $$$$$$$

      AIs are cheaper than humans.

      • Echo Dot@feddit.uk · 29 days ago

        Not really, because they don’t work, and then they have to hire more humans, which is more expensive than just keeping them on for the six months it’ll take the CEOs to realise that.

  • deathbird@mander.xyz · 29 days ago

    One guy is like “Friday is forced AI ‘training’ day” (as if one must ‘train’ to write prompts; using natural language rather than a special language or syntax, and trusting the computer to produce comprehensible, accurate output, is the whole point), and then he has the gall to claim “turns out people hate learning!”

    • Ronno@feddit.nl · 29 days ago

      Writing prompts is definitely a thing users must learn to do properly, to get the right results.

      But anyway, any company that fires people in favor of AI is only digging its own grave. I personally believe AI (of which LLMs are only a small part) can definitely serve as an automation tool that increases output. Great companies will use this tech to give their employees more time to work on things that are meaningful to the company, things the AI cannot do. For instance, a company could free up a couple of hours a week of highly skilled engineers’ time to help with the most complicated service desk issues and increase customer satisfaction. Or the LLM can create more time for sales to have meetings with customers instead of doing the admin they already hate, etc… Use it to grow, not to shrink.

      Besides, if your company can be completely run by AI, then congratulations, you’ve just reached the end goal of open-sourcing your company. Because why the heck wouldn’t anyone else be able to replicate it quickly?

      • Echo Dot@feddit.uk · 29 days ago

        Besides, if your company can be completely run by AI, then congratulations, you’ve just reached the end goal of open-sourcing your company. Because why the heck wouldn’t anyone else be able to replicate it quickly?

        Yeah, that’s the thing these tech bros never seem to understand. It’s obviously not going to work, because if it did work it would already have been done by somebody else; it’s called the Law of Mediocrity. It simply requires the base assumption that you are not the smartest person in the universe, which of course is where it all falls down, because they always assume they are.

  • beemikeoak@lemmynsfw.com · 29 days ago

    One little thing AI can’t do is probably the reason why I also use AI with caution. I use it for all the bullshit emails and communication I have to keep doing just to stay employed. But there’s this one little trick it can’t do. Sure, it can summarize a resume or a book, or give me the equation to calculate the size of Pythagoras’s triangular dick. But the one little thing it really can’t do is think. AI can’t think and come up with original content. It can only mimic and regurgitate old ideas and thoughts, not new ones.

      • Echo Dot@feddit.uk · 29 days ago

        I may not have a lot of respect for some of my co-workers, and frankly a lump of lard would be an improvement, but even the most useless human can out-think an AI when it comes to anything slightly out of the box.

        My nephew got in a bit of trouble at school a while ago because he answered the question “write a sentence containing the word ‘why’” with “Why?” You can ask an AI the same thing a dozen times and it will always just do the obvious thing; it’ll never be original. He’s six and he can out-think an AI.

    • aceshigh@lemmy.world · 29 days ago

      You can ask it to synthesize information. More specifically, you can ask it to compare two pieces of data and then give examples of other things that share the same attributes as those two. Also, if I’m painting I sometimes ask about color, but I just got a color wheel.

  • CaptPretentious@lemmy.world · 29 days ago

    Today, I ran into a bug. We’re being encouraged to use AI more, so I asked Copilot why it failed. I asked without really looking at the code. I tried multiple times, and all the AI could say was ‘yep, it shouldn’t do that’, but it didn’t tell me why. So I gave up on Copilot and looked at the code. It took me less than a minute to find the problem.

    It was a switch statement, and the case condition had (not real values) what basically reads as ‘variable == caseA or caseB’, which will always evaluate to true… which is the bug. I’m stripping a bunch of stuff away, but Copilot couldn’t figure out that the case condition was bad.
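    For anyone curious, here’s a minimal sketch of that class of bug. The original was a switch/case in an unspecified language and these names are made up, but the shape is the same: the second literal gets evaluated as its own (always truthy) expression instead of being compared against the variable.

    ```python
    def handle(variable: str) -> str:
        # Buggy: parsed as (variable == "caseA") or "caseB".
        # "caseB" is a non-empty string, so the condition is always truthy
        # and this branch runs for every input.
        if variable == "caseA" or "caseB":
            return "matched"
        return "no match"

    def handle_fixed(variable: str) -> str:
        # Intended check: the variable must equal one of the two values.
        if variable in ("caseA", "caseB"):
            return "matched"
        return "no match"

    print(handle("something else"))        # matched  <- the bug
    print(handle_fixed("something else"))  # no match
    ```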

    AI is quickly becoming the biggest red flag. Fast slop is still slop.

    • Echo Dot@feddit.uk · 29 days ago

      AI thinks in the same way that ants think: there’s no real intelligence or thought going on, but ants are still able to build complex logistics chains by following simple rules. Although AI works on completely different principles, the effect is the same; it follows a lot of simple rules that lead to something that looks like intelligence.

      The problem is that a lot of people seem to think AIs are genuine simulations of a brain; they think the AI is genuinely cogitating because it kind of looks like it is sometimes. The world is never going to get taken over by a mindless zombie AI. If we ever do get AGI, it won’t be from LLMs, that’s for sure.

    • KumaSudosa@feddit.dk · 29 days ago

      I do find AI useful when I’m debugging a large SQL/Python script, though, and gotta say I make use of it in that case… other than that it’s useless, and relying on it as one’s main tool is idiotic.

  • jordanlund@lemmy.world · 30 days ago

    “The marketing and salespeople were enthused by the possibilities of working with these new tools, he added.”

    https://youtu.be/KHJbSvidohg#t=13s

    I see the same push where I work and I cannot get a good answer to the most basic question:

    “Why?”

    “We want more people using AI.”

    “Why?”

    “. . .”

    • fluxion@lemmy.world · 30 days ago

      Same reason they force people back into the office even though remote work is the solution to a number of serious issues affecting society:

      Investors/banks have tons of money in these markets and are incentivizing/forcing companies to adopt these policies to prop up the markets, whether it is in the companies’ interest or not.

      • jordanlund@lemmy.world · 30 days ago

        Oh, yeah, we have that too… we want people in the office because collaboration! Synergy! etc. etc.

        “How does that work if you want everyone using AI?”

        “. . .”

    • Almacca@aussie.zone · 29 days ago

      “The marketing and salespeople were enthused by the possibilities of working with these new tools, he added.”

      [sigh] Because of course they do. Those people couldn’t find their own arses even if they used both hands.

    • r00ty@kbin.life · 30 days ago

      I usually ignore these kinds of trends. Just meet any required deadlines, etc., but don’t engage too much. The vast majority will just disappear.

      Specifically as a software developer I cannot see a good outcome from engaging with this trend either. It’s going to go one of two ways.

      1: It pans out sooner rather than later that AI wasn’t the panacea they thought it was, and it is either forgotten about or becomes a set of practical tools we use but don’t rely on.

      2: They believe it can replace us all, and so they replace us all with freshly graduated vibe “programmers” and I don’t have a job anyway.

      I don’t really see an upside to engaging with this in any kind of long term plan.

      • AllNewTypeFace@leminal.space · 30 days ago

        2. It’s about breaking the power of tech workers by reducing them from highly skilled specialists to interchangeable low-status workers whose job is to clean up botshit until it compiles. (Given that the machine does the “real work” and they’re just tidying up the output it generates when prompted, they naturally don’t merit high wages or indulgent perks, even if getting 30,000 lines of code regurgitated from the mashed-up contents of GitHub and Stack Overflow working is more cognitively taxing than writing that code from scratch would have been.)

        • Aceticon@lemmy.dbzer0.com · 29 days ago

          It doesn’t matter what they claim if they simply can’t get people to babysit the AI codebase, or the AIs themselves, for less money than the original developers (who didn’t have to deal with AIs and their output) used to cost.

          As a pretty senior dev who has spent a lot of my career as a contractor, mainly coming in to unfuck code-bases seriously fucked up by a couple of cycles under less experienced people, if I were pitched work to unfuck AI output I would demand a premium for my services, purely because it is far more fucked up, in far harder-to-follow ways, than the work done by less experienced humans (who at least are consistent in the mistakes they make and follow a specific pattern in how they work), even without any moral considerations (on principle I would probably just not take a contract with a company that had used AI like that).

          I mean, I can see their strategy working against junior devs, but that kind of reduction in the power of specialized workers was already being used against junior devs via “outsourcing”.

        • jordanlund@lemmy.world · 30 days ago

        My prediction is that it’s just the latest buzzword on the pile of buzzwords and by 2028 a new one will pop up and the only time you hear “AI” will be in the line of “Hey, remember when everyone was talking about AI?”

        Before AI it was “The Cloud”. Before the cloud it was “Virtualization”. They’re saying all the same things about AI that they said about the cloud and virtualization…

        I guess the real money is in inventing the next buzzword that salespeople can say will make your business faster, more agile, and more efficient. :)

          • Honytawk@feddit.nl · 30 days ago

          The Cloud is still a thing, though. As is virtualization.

          And AI (LLMs, media generation, machine learning) is going to stay a thing as well.

            • jj4211@lemmy.world · 29 days ago

            Yeah, there’s generally a kernel of value wrapped up in all sorts of bullshit.

            Same with the .com boom: obviously, here we are with the internet as critical infrastructure, but the 1999 ‘internet’ was a mess of overhype.

          • Aceticon@lemmy.dbzer0.com · 29 days ago

          Every time I see this kind of hype pop up, I think back to when there was this great announcement from Silicon Valley about a “revolution in transportation” and it turned out to be the Segway.

          • Passerby6497@lemmy.world · 30 days ago

          Before AI it was “The Cloud”. Before the cloud it was “Virtualization”. They’re saying all the same things about AI that they said about the cloud and virtualization…

          So you’re saying AI will make a measurable (arguably net-positive) impact and forever change the way we do things day to day, just becoming a standard toolset offered by many providers? Because I’d argue that’s what virtualization was, as well as the cloud to a lesser extent. Hell, I’d be hard pressed to be convinced that virtualization is a bad thing (not so much the cloud, though; that has some solid negative arguments).

          If you’re trying to shit-talk AI, you’d be better off comparing it to blockchain than to cloud/virtualization, since the latter two are an integral part of a large amount of the work we do, and the former is mainly for illicit drugs/activities and stealing money.

            • acosmichippo@lemmy.world · 29 days ago

            agreed, virtualization was one of the best things to happen in IT since the dawn of the internet. i can’t even imagine how much less efficient and reliable datacenters and the entire internet would be without it. Not at all comparable to AI.

            i actually work for a company that does very little virtualization now and it’s fucking awful.

            • jj4211@lemmy.world · 29 days ago

            I think the comparison is apt. It’s not that LLMs are useless; it’s just that, currently, they’re insanely overhyped. Just like the .com era had irrational companies that evaporated even though the underlying tech was useful. Just like in-house servers were considered dead, with everything being cloud-hosted, and now there’s recognition of a trade-off. Just like there was pressure to ship everything as an ‘.ova’, and nowadays that’s not really done as much.

            An appropriate level of LLM use might even be nice, but when it’s fuel for the grifters, it is going to be obnoxious.

        • r00ty@kbin.life · 30 days ago

          I think it’s a real shame because all three of those things you mention are useful. The problem is that once they become a buzzword, then everything needs to be done using that buzzword.

          Cloud has been misused to hell and back, and I have no doubt AI will too.

    • r00ty@kbin.life · 30 days ago

      A lot of the large(ish) corporates are moving in this direction, including where I work. It’s not unusual; I always liken large organizations to insects, just following where the others are going and what they are doing. They don’t ever really put much thought into their actions.

  • krimson@lemmy.world · 30 days ago

    “Doing Our Part to Make the World a Greener Place”

    Clown company. You can’t promote AI and make a claim like that at the same time.

    • Clent@lemmy.dbzer0.com · 29 days ago

      By accelerating the collapse of a human survivable ecosystem we will bring about the end of humanity, resulting in a greener environment for the handful of surviving species.

      • ayyy@sh.itjust.works · 29 days ago

        By ~~accelerating the collapse of~~ pivoting a human survivable ecosystem we will ~~bring about the end~~ accelerate a paradigm shift of humanity, resulting in a greener environment for ~~the handful~~ stable base of ~~surviving species~~ recurring revenue.