• FaceDeer@fedia.io · 8 months ago

      Unless you specify that you want a talented output. A lot of people don’t realize that you need to tell AIs what kind of output you want them to give you, if you don’t then they’ll default to something average. That’s the cause of a lot of disappointment with tools like ChatGPT.

        • kromem@lemmy.world · 8 months ago

          Literally yes. You’ll see that OpenAI’s system prompts say ‘please’ and Anthropic’s mentions that helping users makes the AI happy.

          Which makes complete sense if you understand how the models actually work, as opposed to the common “Markov chain” garbage armchair experts spout. The self-attention mechanism violates the Markov property that characterizes Markov chains in the first place, so if you see people call transformers Markov chains, either they don’t know what they’re talking about or they think you need an oversimplified explanation.
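To make the Markov point concrete: in a toy self-attention layer, the output at the last position depends on every token in the context, not just the current state. A minimal sketch (random, untrained projection weights; the embedding scheme is purely illustrative, not a real model):

```python
import numpy as np

def embed(token: str, d: int = 4) -> np.ndarray:
    # Deterministic toy embedding: seed a generator from the token bytes.
    seed = int.from_bytes(token.encode(), "little") % (2**32)
    return np.random.default_rng(seed).normal(size=d)

def self_attention(tokens: list, d: int = 4) -> np.ndarray:
    """Toy single-head self-attention with random projections:
    every output position mixes information from EVERY input position."""
    X = np.stack([embed(t, d) for t in tokens])
    rng = np.random.default_rng(42)
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)                  # (T, T) attention scores
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over positions
    return weights @ V

# Swapping only the FIRST token changes the output at the LAST position:
# the prediction conditions on the whole context, which a Markov chain
# (where the next state depends only on the current state) cannot express.
out_a = self_attention(["the", "cat", "sat", "down"])
out_b = self_attention(["a", "cat", "sat", "down"])
```

A true Markov chain over tokens could not react at position 3 to a change at position 0 without routing it through each intermediate state.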

      • kromem@lemmy.world · 8 months ago

        I always love watching you comment something that’s literally true regarding LLMs but against the groupthink and get downvoted to hell.

        Clearly people aren’t aware that the pretraining pass is necessarily a regression to the mean, and that you have to bias the model toward excellent outputs using either prompt context or a fine-tuning pass.

        There’s a bit of irony to humans shitting on ChatGPT for spouting nonsense when so many people online happily spout BS that they think they know but don’t actually know.

        Of course a language model trained on the Internet ends up being confidently incorrect. It’s just a mirror of the human tendencies.

        • FaceDeer@fedia.io · 8 months ago

          Yeah, these AIs are literally trying to give us what they “think” we expect them to respond with.

          Which does make me a little worried, given how frequently our fictional AIs end up in “kill all humans!” mode. :)

      • Late2TheParty@lemmy.world · 8 months ago

        Interestingly enough, I’ve started adding something similar to my AI prompts because a Lemmy user had it in theirs and I liked their images.

      • Spuddlesv2@lemmy.ca · 8 months ago

        Ahhh so the secret to using ChatGPT successfully is to tell it to give you good output?

        Like “make sure the code actually works” and “don’t repeat yourself like a fucking idiot” and “don’t hallucinate false information”!

        • KeenFlame@feddit.nu · edited · 8 months ago

          Absolutely, it’s one of the first curious things you discover when using them, like Stable Diffusion’s “masterpiece” tag or the famous system prompt leaks from proprietary LLMs.

          It makes sense given how the models work, but in proprietary products it’s mostly handled for you.

          Finding the right words, and the right amount of them, is a hilarious exercise that provides pretty good insight into the attention mechanics.

          Consider “let’s work step by step.”

          This proved a surprisingly effective way to steer the models: they then structure the output better, and more research followed on why this is so effective at getting the model to check its own work.

          Predictions are closely related to the action systems of our brains as well, so when you think about it, it makes sense that this would help.
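The “step by step” trick really is just appended text. A minimal, hypothetical sketch of how such a prompt might be assembled (the helper name and wording are illustrative, not taken from any particular paper or product):

```python
def build_prompt(question: str, chain_of_thought: bool = True) -> str:
    """Assemble a bare-bones completion prompt. Appending the famous
    "Let's work step by step" nudges the model toward structured,
    self-checking output instead of a one-shot guess."""
    prompt = f"Q: {question}\nA:"
    if chain_of_thought:
        prompt += " Let's work step by step."
    return prompt

print(build_prompt("What is 17 * 24?"))
# Q: What is 17 * 24?
# A: Let's work step by step.
```

The entire intervention is those six extra words; the quality difference comes from how they bias the model's continuation.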

        • Natanael@slrpnk.net · edited · 8 months ago

          Unironically yes, sometimes. Many of the best works its training samples are based on cite the author’s qualifications, and this filters into the model: asking for the right qualifications directly can push it to rely more on high-quality samples when generating its response.

          But it’s still not perfect, obviously. It doesn’t make it stop hallucinating.

          • FaceDeer@fedia.io · 8 months ago

            Yeah, you still need to give an AI’s output an editing and review pass, especially if factual accuracy is important. But though some may mock the term “prompt engineering” there really are a bunch of tactics you can use when talking to an AI to get it to do a much better job. The most amusing one I’ve come across is that some AIs will produce better results if you offer to tip them $100 for a good output, even though there’s no way to physically fulfill such a promise. The theory is that the AI’s training data tended to have better stuff associated with situations where people paid for it, so when you tell the AI you’re willing to pay it’ll effectively go “ah, the user is expecting good quality.”

            You shouldn’t have to worry about the really quirky stuff like that unless you’re an AI power-user, but a simple request for high-quality output can go a long way. Assuming you want high quality output. You could also ask an AI for a “cheesy low-quality high-school essay riddled with malapropisms” on a subject, for example, and that would be a different sort of deviation from “average.”
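As a sketch, those tactics boil down to a few extra lines in the system message. Everything here (the helper name, the wording, the $100 figure) is illustrative, not any vendor's actual prompt:

```python
def quality_biased_messages(task: str, style: str = "high-quality",
                            offer_tip: bool = False) -> list:
    """Build a chat-style message list that explicitly requests a
    quality level, optionally adding the (unfulfillable) tip offer
    some users report improves results."""
    system = f"You are a domain expert. Produce a {style} response."
    if offer_tip:
        system += " I'll tip $100 for an excellent answer."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

msgs = quality_biased_messages("Summarize this contract.", offer_tip=True)
```

Swapping `style` for “cheesy low-quality high-school essay riddled with malapropisms” would bias the output in the opposite direction, exactly as described above.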

        • kromem@lemmy.world · 8 months ago

          Literally yes.

          For example, about a year ago, one of the multi-step prompting papers that improved results had the model guess, in a first pass, which expert would be best equipped to answer the question, then asked it to answer as that expert in a second pass, and it did a better job than answering directly.

          The pretraining is a regression towards the mean, so you need to bias it back towards excellence with either fine tuning or in context learning.
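The two-pass flow described above can be sketched with any text-generation callable. Here `ask_model` and the prompt wording are stand-ins, not the paper's exact protocol, and the stub below takes the place of a real LLM call:

```python
def two_pass_expert(question: str, ask_model) -> str:
    """Pass 1: have the model name the best-suited expert.
    Pass 2: have it answer the question as that expert."""
    expert = ask_model(
        "Which expert would be best equipped to answer this question? "
        f"Reply with a job title only.\n{question}"
    )
    return ask_model(f"You are {expert}. As that expert, answer:\n{question}")

# Stubbed model standing in for a real LLM API call:
def fake_model(prompt: str) -> str:
    if "Which expert" in prompt:
        return "an astrophysicist"
    return "Because the universe is finite in age and expanding."

answer = two_pass_expert("Why is the night sky dark?", fake_model)
```

The second pass is the in-context bias toward excellence: the persona pulls the response distribution away from the average of the pretraining data.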

  • coffinwood@feddit.de · 8 months ago

    Add “, yet” to the headline and come back in a year or two.

    Currently AI may fail to produce a video game, but the same was true for images, videos, and text only a few years ago.

    Failure is a good thing, because it’s preceded by an attempt.

    • JackGreenEarth@lemm.ee · 8 months ago

      Yeah. Just because it can’t do it now doesn’t mean it won’t ever. Also, see my other comment on why this is a bad study: they didn’t provide any details on the game itself, let alone release it. But anyone can run a similar experiment at home, since the AI tools are free to use!

  • JackGreenEarth@lemm.ee · 8 months ago

    The game will not be released to the public as it was just a research project, and Keywords didn’t provide any additional information about what type of 2D game it created.

    So we just have to trust them on this? Yeah, no.

  • yarr@feddit.nl · 8 months ago

    This is a quote that should end in ‘yet’. I am very confident in saying there will be an AAA game released that is designed and implemented 95%+ by a machine. I am less confident in providing a timeline. If you consider that the history of machine learning is ~70 years old (in one sense; one can argue other dates) and you plot the advances from tic-tac-toe to what machines can do today (chess being a prime example), it doesn’t take much vision to see that it’s only a matter of time before this is a real thing.

    • Trollception@lemmy.world · 8 months ago

      Sure, it may produce a game, but much of what makes a game good is making it fun and memorable. If we can eventually create a general AI, then absolutely, I think such a thing is possible. Otherwise it will be a copy-paste mishmash, and a cohesive, fluent design is a huge if.

  • Damage@feddit.it · 8 months ago

    “House made entirely of cement is a failure because you still need doors and windows and stuff.”

    • erwan@lemmy.ml · 8 months ago

      Just like self driving! In 2010 it was almost there, just needed a few more years…

      • Thorny_Insight@lemm.ee · 8 months ago

        Go watch videos of how well FSD V12 performs and you’re in for a surprise. Full self-driving sucks until it doesn’t. AIDRIVR puts up good content if you want recommendations.

      • realharo@lemm.ee · 8 months ago

        Yes actually (except more than a few years).

        Waymo is already operating a robotaxi service in 3 cities, now they just need to expand and find a way to make it not lose money.

          • realharo@lemm.ee · 8 months ago

            Until what? 100% replacement of human-driven cars? Being rolled out for areas covering 50% of the population? Where is the goal line here?

            We are already at the stage of commercial operation, with rides available to the general public - even though only in a few locations.

            Sure, it’s far from being everywhere, but why pretend that progress has stalled, when it clearly hasn’t?

            • owen@lemmy.ca · 8 months ago

              My point is that the “give it a few more years” mantra gets repeated for decades.

      • KeenFlame@feddit.nu · 8 months ago

        I really don’t think there are more examples of optimistic predictions than there are pessimistic ones.

        The discoveries made in recent years definitely point to an emergent, incredibly useful set of tools, and it would be remiss to pretend they won’t eventually replace junior developers in various disciplines. It’s just that without juniors there will never be any seniors, and someone needs to babysit those juniors. So what we get is not something that can replace an entire workforce for a long, long while, even if top brass would love that.

  • Kissaki@feddit.de · 8 months ago

    I am astonished by an established, commercial website having good structure.

    It looks like a documentation website. Sidebar with clear categories and navigation. I really like it.

  • Kissaki@feddit.de · 8 months ago

    The article doesn’t say much, so I checked the source for more information. It doesn’t say much more, but IMO presents it much better. In the sourced financial report, it’s the intro, two concise paragraphs:

    An example R&D initiative, sponsored by the Innovation team was Project Ava, where a team, initially from Electric Square Malta, attempted to create a 2D game solely using Gen AI. Over the six-month process, the team shared their findings across the Group, highlighting where Gen AI has the potential to augment the game development process, and where it lags behind. Whilst the project team started small, it identified over 400 tools, evaluating and utilising those with the best potential. Despite this, we ultimately utilised bench resource from seven different game development studios as part of the project, as the tooling was unable to replace talent.

    One of the key learnings was that whilst Gen AI may simplify or accelerate certain processes, the best results and quality needed can only be achieved by experts in their field utilising Gen AI as a new, powerful tool in their creative process. As a research project, the game will not be released to the public, but has been an excellent initiative to rapidly spread tangible learnings across the Group, provide insights to clients and it demonstrates the power and level of cross-studio collaboration that currently exists. Alongside Project Ava, the team is undertaking a range of Gen AI R&D projects, including around 3D assets, to ensure that we are able to provide current insights in an ever-evolving part of the market.


    The central quote and conclusion being:

    One of the key learnings was that whilst Gen AI may simplify or accelerate certain processes, the best results and quality needed can only be achieved by experts in their field utilising Gen AI as a new, powerful tool in their creative process.

    Which is obvious and expected for anyone familiar with the technology. Of course, experiments and confirming expectations have value too. And I’m certain that actually using the tools, and finding out which ones they can use where, is very useful to them specifically.

    • FiniteBanjo@lemmy.today · 8 months ago

      Honestly, saying the best results can be achieved by experts with GenAI sounds extremely generous. In my opinion, the best results can be achieved without it entirely.

    • 0xD@infosec.pub · 8 months ago

      The overall point may be relatively obvious, but the details are not.

      Which steps of which processes is it good at, and which not? What can be easily integrated into existing tooling? Where is it best skipped entirely?

  • systemglitch@lemmy.world · 8 months ago

    I look forward to the day it can make a fully functioning game. The best games will mostly be AI-created eventually.

        • stratoscaster@lemmy.world · 8 months ago

          The reason that your favorite games are your favorite is because they aren’t soulless cash grabs. They’re made by people with imagination, passion, and ingenuity. AI simply can’t create something brand new from existing parts; it can only give them a fresh coat of paint.

          Furthermore, AI will always work like this, because that’s how the models are trained. I don’t think we’ll have a model that learns to create on its own within any of our lifetimes, if ever.

        • Andy@slrpnk.net · edited · 8 months ago

          I don’t doubt that AI tools can be used to make great games, but I think part of the reason so many people disagree with you is because:

          1. You claim “The best games will mostly be AI created eventually”, and I think most people question on what basis you think AI will produce overall better quality. If you said that it’s faster, or that it can allow indie studios to compete with AAA, that makes sense. Attributing quality to it – at this stage – seems odd.
          2. It’s unlikely, imo, that the best games will be created by AI as opposed to with AI.

          I think using AI throughout the process so that one person can achieve the productivity of a whole team is a credible vision. But to say that games will be created “by AI” implies that a generative AI engine will generate the code de novo for a complete game. I think that is already possible, but it will be very, very hard for such a system to innovate new kinds of games. Currently, these tools rely on replicating features from their training data, so their ability to create quests that match a new genre, or to generate dialogue that is funny in the context of the story, is going to be very impaired.

          By and large, I think current evidence shows that Human-AI cooperation almost always improves upon AI performance alone, and this is particularly the case when creating things for humans to enjoy.

  • IsThisAnAI@lemmy.world · 8 months ago

    Folks really didn’t understand how AI will work. It’s not going to be some big “we’re dropping 1,000 people” moment.

    It’s going to reduce demand over time.

    • Pyr_Pressure@lemmy.ca · 8 months ago

      It will start with going from 5 writers to 3, or going from 10 animators to 6.

      Then 10 years from now as it gets more advanced we will be down to maybe 1 writer and 2 animators.

      • QuaternionsRock@lemmy.world · 8 months ago

        going from 10 animators to 6

        It’s still crazy to me that like half of Across the Spider-Verse was AI generated

    • dariusj18@lemmy.world · 8 months ago

      I’ve heard it as “No one is losing their job to AI, but they will lose their jobs to someone who is using AI.”

      • Semi-Hemi-Lemmygod@lemmy.world · 8 months ago

        Case in point: I’m using ChatGPT to help me write cover letters. I make sure to proofread them and sometimes it hallucinates my expertise, but it makes it a lot faster.

        • ObsidianZed@lemmy.world · 8 months ago

          I mean that’s already happening at some big companies now.

          Will it last? My guess is no, but they’ll enjoy saving the money they would have paid human beings in the meantime.

          My hope is just that they’ll suffer losses due to a drop in product quality and start struggling, but let’s face it, the big tech companies are almost never the ones that are actually hurt by their decisions.

      • smackjack@lemmy.world · 8 months ago

        Think of AI like computers and spreadsheet software in the early 80s. I bet a lot of accountants were pretty freaked out about what this new technology was going to mean for their jobs.

        Did technology replace those accountants? No, but companies probably didn’t need as many accountants as they did before. AI will likely reduce the number of programmers that a company needs, but it won’t eliminate them.

    • deur@feddit.nl · edited · 8 months ago

      Folks really don’t understand how AI will work. It’s not going to be some big “we’re dropping 1,000 people” moment.

    • mods_are_assholes@lemmy.world · 8 months ago

      But that’s not how corporations view it, because the people making the decisions aren’t tech people but bean counters.

      Some slick but ignorant C-suite gets the bright idea that AI is The Way and makes the call to lay off a bunch of people.

      I BET that is what Hasbro is thinking for DnD, and I am absolutely certain some of their recent content is AI, and that’s why they canned most of the real people involved.

      • IsThisAnAI@lemmy.world · 8 months ago

        You’ve just said all corporations view it this way. Think about what you said. You have taken articles from a few businesses and applied that across the board.

        • mods_are_assholes@lemmy.world · 8 months ago

          Think about what you said.

          Fuck you and your sanctimonious assumptions, I know what I wrote.

          You have taken articles from a few businesses and applied that across the board.

          No, I was there last year in august when they were CAUGHT using AI art in books, and more recently in their online ads.

          If I could write to you like I want, I would get banned, but you will NEVER be called out for the slander you so love to engage in.

    • kameecoding@lemmy.world · 8 months ago

      And in that regard it’s no different from any other productivity tool or automation. I have seen software purchases that immediately eliminated 80-odd jobs.

  • Rentlar@lemmy.ca · 8 months ago

    “Replacing talent” is not what AI is meant for, yet it seems to be every penny-pinching, bean-counting studio’s long-term goal for it.

        • FiniteBanjo@lemmy.today · edited · 8 months ago

          Not really, no. All of the current models built at their intended scale are being sold as products, especially by OpenAI, Microsoft, and Google. They were built with a purpose, and that purpose was to replace expensive human assets.

          • KeenFlame@feddit.nu · 8 months ago

            Yes, it was. Like with all scientific discoveries, several corporations started building proprietary products. You are wrong that it was built with that purpose.

      • 9488fcea02a9@sh.itjust.works · 8 months ago

        I’m not a developer, but I use AI tools at work (mostly LLMs).

        You need to treat AI like a junior intern: you give it a task, but you still need to check the output and use critical thinking. You can’t just take some work from an intern, blindly incorporate it into your presentation, and then blame the intern if the work is shoddy.

        AI should be a time saver for certain tasks. It cannot (currently) replace a good worker.

        • fidodo@lemmy.world · 8 months ago

          I am a developer and that’s exactly how I see it too. I think AI will be able to write PRs for simple stories but it will need a human to review those stories to give approval or feedback for it to fix it, or manually intervene to tweak the output.

        • Rickety Thudds@lemmy.ca · 8 months ago

          It’s clutch for boring emails with several tedious document summaries. Sometimes I get a day’s work done in 4 hours.

          Automation can be great, when it comes from the bottom-up.

          • isles@lemmy.world · 8 months ago

            Honestly, that’s been my favorite - bringing in automation tech to help me in low-tech industries (almost all corporate-type office jobs). When I started my current role, I was working consistently 50 hours a week. I slowly automated almost all the processes and now usually work about 2-3 hours a day with the same outputs. The trick is to not increase outputs or that becomes the new baseline expectation.

        • Lmaydev@programming.dev · edited · 8 months ago

          As a developer I use it mainly for learning.

          What used to be a Google search followed by skimming a few articles or docs pages is now a question.

          It pulls the specific info I need, sources it and allows follow up questions.

          I’ve noticed the new juniors can get up to speed on new tech very quickly nowadays.

          As for code I don’t trust it beyond snippets I can use as a base.

          • FiniteBanjo@lemmy.today · edited · 8 months ago

            JFC they’ve certainly got the unethical shills out in full force today. Language Models do not and will never amount to proper human work. It’s almost always a net negative everywhere it is used, final products considered.

              • FiniteBanjo@lemmy.today · 8 months ago

                Its intended use is to replace human work in exchange for lower accuracy. There is no ethical use case scenario.

                • Lmaydev@programming.dev · 8 months ago

                  It’s intended to show case its ability to generate text. How people use it is up to them.

                  As I said it’s great for learning as it’s very accurate when summarising articles / docs. It even sources it so you can read up more if needed.

        • time_fo_that@lemmy.world · 8 months ago

          I saw this the other day and I’m like, well fuck, might as well go to trade school before it gets saturated, like what happened with tech in the last couple of years.

          • Defaced@lemmy.world · 8 months ago

            Yeah, the sad thing about Devin AI is that they’re clearly doing it for the money; they have absolutely no intention of bettering humanity. They just want to build this up and sell it off for that fat entrepreneur paycheck. If they really cared about bettering humanity they would open it up to everyone, but they’re only accepting inquiries from businesses.

      • assassinatedbyCIA@lemmy.world · 8 months ago

        The problem is that the crazy valuations of AI companies are based on it replacing talent, and soon. Supplementing talent is far less exciting and far less profitable.

      • Rentlar@lemmy.ca · 8 months ago

        I do think that, given time, AI can improve to the level where it can do nearly all of the same things junior-level people in many different sectors can.

        The problem, and the unfortunate thing for companies, I foresee is that it can’t turn juniors into seniors if the AI “replaces” the juniors: a company will run out of seniors through retirement, or will have to pay piles and piles of cash just to hire the few non-AI people left with industry knowledge to babysit the AIs.

      • Thorny_Insight@lemm.ee · 8 months ago

        Current AI*

        I don’t see any reason to expect this to be the case indefinitely. It has been getting better all the time, and lately at quite a rapid pace. In my view it’s just a matter of time until it surpasses human capabilities; it already does so in specific narrow fields. Once we reach AGI, all bets are off.

        • thundermoose@lemmy.world · 8 months ago

          Maybe this comment will age poorly, but I think AGI is a long way off. LLMs are a dead-end, IMO. They are easy to improve with the tech we have today and they can be very useful, so there’s a ton of hype around them. They’re also easy to build tools around, so everyone in tech is trying to get their piece of AI now.

          However, LLMs are chat interfaces to searching a large dataset, and that’s about it. Even the image generators are doing this, the dataset just happens to be visual. All of the results you get from a prompt are just queries into that data, even when you get a result that makes it seem intelligent. The model is finding a best-fit response based on billions of parameters, like a hyperdimensional regression analysis. In other words, it’s pattern-matching.

          A lot of people will say that’s intelligence, but it’s different; the LLM isn’t capable of understanding anything new, it can only generate a response from something in its training set. More parameters, better training, and larger context windows just refine the search results, they don’t make the LLM smarter.

          AGI needs something new, we aren’t going to get there with any of the approaches used today. RemindMe! 5 years to see if this aged like wine or milk.

          • KeenFlame@feddit.nu · 8 months ago

            How does this amazing prediction-engine discovery, which basically works the way our brain does, not fit into a larger solution?

            The emergent world simulation found in the larger models definitely points to this being a cornerstone, as it provides functional value in both image and text recall.

            Never mind that tools like MemGPT don’t yet satisfy long-term memory, and context windows don’t properly satisfy attention functions; I need a much harder sell on LLM technology not proving an important piece of AGI.

            • thundermoose@lemmy.world · edited · 8 months ago

              I didn’t say it wasn’t amazing, nor that it couldn’t be a component in a larger solution, but I don’t think LLMs work like our brains, and I think the current trend of more tokens/parameters/training is a dead end. They’re simulating the language area of human brains, sure, but there’s no reasoning or understanding in an LLM.

              In most cases, the responses from well-trained models are great, but you can pretty easily see the cracks when you spend extended time with them on a topic. You’ll start to get oddly inconsistent answers the longer the conversation goes and the more branches you take. The best-fit line (it’s a crude metaphor, but I don’t think it’s wrong) starts fitting less and less well until the conversation completely falls apart. That’s generally called “hallucination,” but I’m not a fan of that term because it implies a lot about the model that isn’t really true.

              You may have already read this, but if you haven’t: Stephen Wolfram wrote a great overview of how GPT works that isn’t too technical. There’s also a great sci-fi novel from 2006 called Blindsight that explores the way facsimiles of intelligence can be had without consciousness or even understanding, and I’ve found it to be a really interesting way to think about LLMs.

              It’s possible to build a really good Chinese room that can pass the Turing test, and I think LLMs are exactly that. More tokens/parameters/training aren’t going to change that, they’ll just make them better Chinese rooms.

              • KeenFlame@feddit.nu · 8 months ago

                Thanks, I’ll check those out. The entire point of your comment was that LLMs are a dead end. The branching, as you call it, is just more parameters, which in lower-token models approach a collapse; that is why more tokens and larger contexts do improve accuracy, and why it makes sense to increase them. LLMs have also, in some cases, shown what you (and many others) call reason, though that is not a good word for it.

                Larger models provide a way to simulate the world, which in turn gives us access to the sensing mechanism of our brain: simulate, then attend to disparities between the simulation and the actual. This in turn gives access to action, which unfortunately is not very well understood. Simulation, or prediction, is what our brains constantly do in order to react and adapt to the world without massive timing failures and massive energy costs. Consider driving, where you focus on unusual sensory input and let action be an extension of purpose: constant prediction means your muscles have already prepared to commit precise movements, thanks to enough practice with your “model” of how wheel and pedal apply to the vehicle.

          • Thorny_Insight@lemm.ee · 8 months ago

            Yeah, LLMs might very well be a dead end when it comes to AGI, but just as ChatGPT seemingly came out of nowhere and took the world by surprise, the same might happen with an actual AGI. My comment doesn’t really make any claims about the timescale; it just tries to point out the inevitability of it.

      • Altima NEO@lemmy.zip · 8 months ago

        Not even that; it’s a tool, the same way Photoshop or 3ds Max are tools. You still need the talent to use the tools.

    • gravitas_deficiency@sh.itjust.works · edited · 8 months ago
      sed "s/studio's/tech industry c-suite's/"
      

      As an engineer, the amount of non-engineering idiots in tech corporate leadership trying to apply inappropriate technical solutions to something because it became a buzzword is just absurdly high.