I think AI is neat.
So… LLMs are… teenagers?
If an LLM is just regurgitating information in a learned pattern and therefore it isn’t real intelligence, I have really bad news for ~80% of people.
Been destroyed for this opinion here. Not many practitioners here, just laymen and mostly techbros in this field… But maybe I haven’t found the right node?
I’m into local diffusion models and open source llms only, not into the megacorp stuff
All the stuff on the dbzero instance is pro open source and pro piracy so fairly anti corpo and not tech illiterate
Thanks, I’ll join in
If anything people really need to start experimenting beyond talking to it like its human or in a few years we will end up with a huge ai-illiterate population.
I’ve had someone stubbornly fight me, calling local LLMs “an overhyped downloadable chatbot app” and saying the people on fossai are just a bunch of AI-worshipping fools.
I was like, tell me you know absolutely nothing about what you’re talking about by pretending to know everything.
But the thing is, it’s really fun and exciting to work with. The open source community is extremely nice and helpful, one of the least toxic fields I have dabbled in! It’s very fun to test parameters and tools and write code chains to try different stuff, and it’s come a long way. It’s rewarding too, because you get really fun responses.
Aren’t the open source LLMs still censored though? I read someone make an off-hand comment that one of the big ones (OLLAMA or something?) was censored past version 1 so you couldn’t ask it to tell you how to make meth?
I don’t wanna make meth but if OSS LLMs are being censored already it makes having a local one pretty fucking pointless, no? You may as well just use ChatGPT. Pray tell me your thoughts?
No there are many uncensored ones as well
Could be legal issues: if an LLM tells you how to make meth but gets a step or two wrong and it results in your death, the family might have a case to sue.
But i also don’t know what all you mean when you say censorship.
It was literally just that. The commenter I saw said something like “it’s censored after ver 1 so don’t expect it to tell you how to cook meth.”
But when I hear the word “censored” I think of all the stuff ChatGPT refuses to talk about. It won’t write jokes about protected groups, and VAST swathes of stuff around that. Even asking it to define “fag-got” can make it cough and refuse, even though it’s a British foodstuff.
Blocking anything sexual - so no romantic/erotica novel writing.
The latest complaint about ChatGPT is its laziness, which I can’t help feeling is due to over-zealous censorship. Censorship doesn’t just block the specific things but entirely innocent things too (see fag-got above).
Want help writing a book about Hitler being seduced by a Jewish woman, with BDSM scenes? No chance. No talking about Hitler, sex, Jewish people or BDSM. That’s censorship.
I’m using these as examples - I’ve no real interest in these but I am affected by annoyances and having to reword requests because they’ve been mis-interpreted as touching on censored subjects.
Just take a look at r/ChatGPT and you’ll see endless posts by people complaining they triggered its censorship with asinine prompts.
Oh ok, then yea that’s a problem, any censorship that’s not directly related to liability issues should be nipped in the bud.
Depends on who made the model and how. Llama is a Meta product and it’s genuinely really powerful (I wonder where Zuckerberg gets all the data for it).
Because it’s powerful, you see many people use it as a starting point to develop their own AI ideas and systems. But it’s not the only decent open source model, and the innovations that work for one model often work for all the others, so it doesn’t matter in the end.
Every single model used now will be completely outdated and forgotten in a year or two. Even GPT-4 and Gemini.
Holy crap, didn’t expect him to admit it this soon:
Zuck Brags About How Much of Your Facebook, Instagram Posts Will Power His AI
Have you ever considered you might be, you know, wrong?
No sorry you’re definitely 100% correct. You hold a well-reasoned, evidenced scientific opinion, you just haven’t found the right node yet.
Perhaps a mental gymnastics node would suit sir better? One without all us laymen and tech bros clogging up the place.
Or you could create your own instance populated by AIs where you can debate them about the origins of consciousness until androids dream of electric sheep?
Do you even understand my viewpoint?
Why only personal attacks and nothing else?
You obviously have hate issues, which is exactly why I have a problem with techbros explaining why llms suck.
They haven’t researched them or understood how they work.
It’s a fucking incredibly fast developing new science.
Nobody understands how it works.
It’s so silly to pretend to know how badly it works when people working with these models daily keep discovering new ways the technology surprises us. Idiotic to be pessimistic about such a field.
You obviously have hate issues
Says the person who starts chucking out insults the second they get downvoted.
From what I gather, anyone that disagrees with you is a tech bro with issues, which is quite pathetic to the point that it barely warrants a response but here goes…
I think I understand your viewpoint. You like playing around with AI models and have bought into the hype so much that you’ve completely failed to consider their limitations.
People do understand how they work; it’s clever mathematics. The tech is amazing and will no doubt bring numerous positive applications for humanity, but there’s no need to go around making outlandish claims like they understand or reason in the same way living beings do.
You consider intelligence to be nothing more than parroting which is, quite frankly, dangerous thinking and says a lot about your reductionist worldview.
You may redefine the word “understanding” and attribute it to an algorithm if you wish, but myself and others are allowed to disagree. No rigorous evidence currently exists that we can replicate any aspect of consciousness using a neural network alone.
You say pessimistic, I say realistic.
Haha it’s pure nonsense. Just do a little digging instead of doing the exact guesstimation I am talking about. You obviously don’t understand the field
Once again not offering any sort of valid retort, just claiming anyone that disagrees with you doesn’t understand the field.
I suggest you take a cursory look at how to argue in good faith, learn some maths and maybe look into how neural networks are developed. Then study some neuroscience and how much we comprehend the brain and maybe then we can resume the discussion.
You attacked my viewpoint but misunderstood it. I corrected you. Now you tell me I’m wrong (I’m not, btw) and go down the idiotic path of bad-faith conversation, strawmanning your own bad-faith accusation, only because you’re butthurt that you didn’t understand. Childish approach.
You don’t understand, because no expert currently understands these things completely. It’s pure nonsense defecation coming out of your mouth
You don’t really have one lol. You’ve read too many pop-sci articles from AI proponents and haven’t understood any of the underlying tech.
All your retorts boil down to copying my arguments because you seem to be incapable of original thought. Therefore it’s not surprising you believe neural networks are approaching sentience and consider imitation to be the same as intelligence.
You seem to think there’s something mystical about neural networks but there is not, just layers of complexity that are difficult for humans to unpick.
You argue like a religious nutjob or Trump supporter. At this point it seems you don’t understand basic logic or how the scientific method works.
I once ran an LLM locally using Kobold AI. The thing has an option to show the alternative tokens for each token it puts out, and what their probability of being chosen was. Seeing this shattered the illusion for me that these things are really intelligent. There’s at least one more thing we need to figure out before we can build an AI that is actually intelligent.
It’s cool what statistics can do, though.
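That token view is essentially the model’s probability distribution over candidate next tokens. As a toy sketch (the tokens and logit values here are invented, not from any real model), this is roughly how raw scores become the probabilities a sampler chooses from:

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) into a probability distribution."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores for the next token after "The cat sat on the"
logits = {"mat": 4.0, "floor": 2.5, "roof": 1.0, "piano": -1.0}
probs = softmax(logits)
for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok:>6}: {p:.3f}")
```

A sampler then picks one token from that distribution (or the top one, if you sample greedily), which is exactly the list of alternatives Kobold shows.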
That’s actually pretty neat. I tried Kobold AI a few months ago but the novelty wore off quickly. You made me curious, I’m going to check out that option once I get home. Is it just a toggleable option or do you have to mess with some hidden settings?
Just as I was about to give up, it somehow worked: https://imgchest.com/p/9p4ne9m9m4n I didn’t really do anything different this time around, so no idea why it didn’t work at first.
They’re predicting the next word without any concept of right or wrong, there is no intelligence there. And it shows the second they start hallucinating.
…yeah dude. Hence ARTIFICIAL intelligence.
There aren’t any cherries in artificial cherry flavoring either 🤷♀️ and nobody is claiming there is
They are a bit like taking just the creative-writing center of a human brain. So they are like one part of a human mind, without sentience or understanding or long-term memory. Just the creative part, even though they are mediocre at being creative atm. But it’s shocking, because we kind of expected that to be the last part of the human mind we’d be able to replicate.
Put enough of these “parts” of a human mind together and you might get a proper sentient mind sooner than later.
Exactly. I’m not saying it’s not impressive or even not useful, but one should understand the limitations. For example, you can’t reason with an LLM in the sense that you could convince it of your reasoning. It will only respond how most people in the training dataset would have responded (obviously simplified).
You repeat your point, but there was already agreement that this is how AI works now.
I fear you may have glossed over the second part, where he states that once we simulate other parts of the brain, things start to look different very quickly.
There do seem to be two kinds of opinions on AI:
- those that look at AI in the present, compared to a present-day human. This seems to be the majority of people overall.
- those that look at AI like a statistic: where it was in the past, what improved it, and project within reason how it will look soon enough. This is the majority of people who work in the AI industry.
For me, the present day is simply practice for what is yet to come. Because if we don’t nuke ourselves back to the stone age, something, currently undefinable, is coming.
I didn’t, I just focused on how it is today. I think it can become very big and threatening but also helpful, but that’s just pure speculation at this point :)
What I fear is AI being used with malicious intent. Corporations that use it for collecting data, for example. Or governments just jailing everyone an AI tells them to.
I’d expect governments to use it to craft public-relations strategies. An extension of what they do now by hiring the smartest sociopaths on the planet. Not sure if this would work, but I think so. Basically you train an AI on previous messaging and the results from polls or voting, and then you train it to suggest strategies that maximize support for X. A kind of dumbification of the masses. Of course it’s only going to get shittier from there on out.
…or you might not.
It’s fun to think about, but we don’t understand the brain well enough to extrapolate AIs in their current form to sentience. Even the “parts” of the mind you mention are not clearly defined.
There are so many potential hidden variables. Sometimes I think people need reminding that the brain is the most complex thing in the universe. We don’t fully understand it yet, and neural networks are just loosely based on the structure of neurons, not an exact replica.
True, it’s speculation. But before GPT-3 I never imagined AI achieving creativity. I had no idea how you would do it and would have said it’s a hard problem, like magic, and poof, now it’s a reality. A huge leap in quality driven just by quantity of data and compute. Which was shocking: that it’s “so simple”, at least in this case.
So that should tell us something. We don’t understand the brain, but maybe there isn’t much to understand. It’s relatively clear how the biocomputing hardware works, and it’s all made out of the same stuff. So it stands to reason that the other parts or functions of a brain might also be replicated in similar ways.
Or maybe not. Or we might need a completely different way to organize and train other functions of a mind. Or it might take a much larger increase in speed and memory.
You say maybe there’s not much to understand about the brain, but I entirely disagree: it’s the most complex object in the known universe and we haven’t discovered all of its secrets yet.
Generating pictures from a vast database of training material is nowhere near comparable.
Ok, again I’m just speculating, so I’m not trying to argue. But it’s possible that there are no “mysteries of the brain”, that it’s just irreducible complexity. That it’s all down to the functionality of the synapses and the organization of the sheer number of connections and weights in the brain. Then the brain is like a computer you put a program in. The magic happens with how it’s organized.
And yeah we don’t know how that exactly works for the human brain, but maybe it’s fundamentally unknowable. Maybe there is never going to be a language to describe human consciousness because it’s entirely born out of the complexity of a shit ton of simple things and there is no “rhyme or reason” if you try to understand it. Maybe the closest we get are the models psychology creates.
Then there is fundamentally no difference between painting based on a “vast database of training material” in a human mind and in a computer AI. Currently AI-generated images are a bit limited in creativity and mediocre, but it’s there.
Then it would logically follow that all the other functions of a human brain are similarly “possible” if we train it right and add enough computing power and memory. Without ever knowing the secrets of the human brain. I’d expect the truth somewhere in the middle of those two perspectives.
Another argument in favor of this would be that the human brain evolved through evolution, through random change that was filtered (at least if you do not believe in intelligent design). That means there is no clever organizational structure or something underlying the brain. Just change, test, filter, reproduce. The worst, most complex spaghetti code in the universe. Code written by a moron that can’t be understood. But that means it should also be reproducible by similar means.
Possible, yes. It’s also entirely possible there’s interactions we are yet to discover.
I wouldn’t claim it’s unknowable. Just that there’s little evidence so far to suggest any form of sentience could arise from current machine learning models.
That hypothesis is not verifiable at present as we don’t know the ins and outs of how consciousness arises.
Then it would logically follow that all the other functions of a human brain are similarly “possible” if we train it right and add enough computing power and memory. Without ever knowing the secrets of the human brain. I’d expect the truth somewhere in the middle of those two perspectives.
Lots of things are possible, we use the scientific method to test them not speculative logical arguments.
Functions of the brain
These would need to be defined.
But that means it should also be reproducible by similar means.
Can’t be sure of this… For example, what if quantum interactions are involved in brain activity? How does the grey matter in the brain affect the functioning of neurons? How do the heart/gut affect things? Do cells which aren’t neurons provide any input? Does some aspect of consciousness arise from the very material the brain is made of?
As far as I know all the above are open questions and I’m sure there are many more. But the point is we can’t suggest there is actually rudimentary consciousness in neural networks until we have pinned it down in living things first.
I have a silly little model I made for creating Vogon poetry. One of the models is fed Shakespeare. The system works by predicting the next letter rather than the next word (and whitespace is just another letter as far as it’s concerned). Here’s one from the Shakespeare generation:
KING RICHARD II:
Exetery in thine eyes spoke of aid.
Burkey, good my lord, good morrow now: my mother’s said
This is silly nonsense, of course, and for its purpose, that’s fine. That being said, as far as I can tell, “Exetery” is not an English word. Not even one of those made-up English words that Shakespeare created all the time. It’s certainly not in the training dataset. However, it does sound like it might be something Shakespeare pulled out of his ass and expected his audience to understand through context, and that’s interesting.
Wow, sounds amazing, big props to you! Are you planning on releasing the model? Would be interested tbh :D
Nothing special about it, really. I only followed this TensorFlow tutorial:
https://www.tensorflow.org/text/tutorials/text_generation
The Shakespeare dataset is on there. I also have another model that uses entries from the Joyce Kilmer Memorial Bad Poetry Contest, and also some of the works of William Topaz McGonagall (who is basically the Tommy Wiseau of 19th-century English poetry). The code is the same between them, however.
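If you just want to see the “predict the next letter” idea in miniature, here’s a toy character-bigram sampler. To be clear, this is not the tutorial’s TensorFlow RNN, just a hand-rolled illustration of the same principle, where whitespace is treated as just another character:

```python
import random
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count which character tends to follow each character."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, seed_char, length, rng):
    """Sample each next character proportionally to observed counts."""
    out = [seed_char]
    for _ in range(length):
        follow = counts.get(out[-1])
        if not follow:
            break  # dead end: this character was never followed by anything
        chars, weights = zip(*follow.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "to be or not to be that is the question"
model = train_bigrams(corpus)
print(generate(model, "t", 40, random.Random(0)))
```

The real model uses a neural network looking at a much longer context than one character, which is why it produces plausible pseudo-words like “Exetery” instead of pure letter soup.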
Nice, thx
Alternatively we could call things what they are. You know, cause if we ever have actual AI we kind of need the term to be intact and not watered down by years of marketing bullshit or whatever else.
There are specific terms for what you’re talking about already. AI covers all the ML algorithms we are integrating into daily life, and AGI is human-level AI able to create its own subjective experience.
EXACTLY. There is no problem solving either (except calculating the most probable text).
Even worse, some of my friends say that Alexa is A.I.
Nobody is claiming there is problem solving in LLMs, and you don’t need problem solving skills to be artificially intelligent. The same way a knife doesn’t have to be a Swiss army knife to be called a “knife.”
I mean, people generally don’t have problem solving skills, yet we call them “intelligent” and “sentient” so…
There’s a lot more to intelligence and sentience than just problem solving. One of them is recalling data and effectively communicating it.
Recalling data, communication. Two things humans are notoriously bad at…
I just realized I interpreted your comment backwards the first time lol. When I wrote that I had “people don’t have issues with problem solving” in my head
Alexa is AI. She’s artificially intelligent. Moreso than an ant or a pigeon, and I’d call those animals pretty smart.
… Alexa literally is AI? You mean to say that Alexa isn’t AGI. AI is the taking of inputs and outputting something rational. The first AIs were just large if-else constructions called First Order Logic. Later AI utilized approximate or brute-force state calculations such as probabilistic trees or minimax search. AI controls how people’s lines are drawn in popular art programs such as Clip Studio when they use the assist functions. But none of these AIs could tell me something new, only what they were designed to compute.
The term AI is a lot more broad than you think.
The term AI being used by corporations isn’t some protected and explicit categorization. Any software company alive today, selling what they call AI, isn’t being honest about it. It’s a marketing gimmick. The same shit we fall for all the time. “Grass fed” meat products aren’t actually 100% grass fed at all. “Healthy: Fat Free!” foods just replace the fat with sugar and/or corn syrup. Women’s dress sizes are universally inconsistent across all clothing brands in existence.
If you trust a corporation to tell you that their product is exactly what they market it as, you’re only gullible. It’s forgivable. But calling something AI when it’s clearly not, as if the term is so broad it can apply to any old if-else chain of logic, is proof that their marketing worked exactly as intended.
I think there’s a difference in definition between us… I would define a proper AI as intelligent, meaning it has the ability to problem-solve.
I still don’t follow your logic. You say that GPT has no ability to problem solve, yet it clearly has the ability to solve problems? Of course it isn’t infallible, but neither is anything else with the ability to solve problems. Can you explain what you mean here in a little more detail.
One of the most difficult problems that AI attempts to solve in the Alexa pipeline is, “What is the desired intent of the received command?” To give an example of the purpose of this question, as well as how Alexa may fail to answer it correctly: I have a smart bulb in a fixture, and I gave it a human name. When I say, “Alexa, make Mr. Smith white,” one of two things will happen, depending on the current context (probably including previous commands, tone, etc.):
- It will change the color of the smart bulb to white
- It will refuse to answer, assuming that I’m asking it to make a person named Mr. Smith… white.
It’s an amusing situation, but also a necessary one: there will always exist contexts in which always selecting one response over the other would be incorrect.
See, that’s hard to define. What I mean is things like reasoning and understanding. Let’s take your example as an… example. Obviously you can’t turn a person white, so they probably mean the LED. Now, you could ask if they meant the LED, but it’s not critical, so just do it and the person will complain if it’s wrong. Thing is, yes, you can train an AI to act like this, but in the end it doesn’t understand what it’s doing, only (maybe) whether it did it right or wrong. Like, ChatGPT doesn’t understand what it’s saying. It cannot grasp concepts; it can only try to emulate understanding, although it doesn’t know how, or even what understanding is. In the end it’s just a question of the complexity of the algorithm (since we are just algorithms too), and I wouldn’t consider current “AI” to be complex enough to be called intelligent.
(Sorry if this is a bit on the low-quality side in terms of readability and grammar, but it was hastily written under a bit of time pressure.)
Obviously you can’t turn a person white so they probably mean the led.
This is true, but it still has to distinguish between facetious remarks and genuine commands. If you say, “Alexa, go fuck yourself,” it needs to be able to discern that it should not attempt to act on the input.
Intelligence is a spectrum, not a binary classification. It is roughly proportional to the complexity of the task and the accuracy with which the solution completes the task correctly. It is difficult to quantify these metrics with respect to the task of useful language generation, but at the very least we can say that the complexity is remarkable. It also feels prudent to point out that humans do not know why they do what they do unless they consciously decide to record their decision-making process and act according to the result. In other words, when given the prompt “solve x^2-1=0 for x”, I can instinctively answer “x = {+1, -1}”, but I cannot tell you why I answered this way, as I did not use the quadratic formula in my head. Any attempt to explain my decision process later would be no more than an educated guess, susceptible to similar false justifications and hallucinations that GPT experiences. I haven’t watched it yet, but I think this video may explain what I mean.
Hmm, it seems like we have different perspectives. For example, I cannot do something I don’t understand, meaning if I do a calculation in my head I can tell you exactly how I got there, because I have to think through every step of the process. This starts at something as simple as 9 + 3, where I have to actively think about the calculation. It goes like this in my head: 9 + 3… take 1 from 3, add it to 9 = 10 + 2 = 12. This also applies to more complex things, which on one hand means I am regularly slower than my peers, but I understand more stuff than they do.
So I think that because of our different… thinking (?), we both lack a critical part in understanding each other’s viewpoint.
Anyhow, back to AI.
Intelligence is a spectrum, not a binary classification
Yeah, that’s the problem: where does the spectrum start? Like, I wouldn’t call a virus, bacterium, or single cell intelligent, yet somehow a bunch of them is arguing about what intelligence is. I think this is just a case of how you define intelligence, which would vary from person to person. Also, I agree that LLMs are unfathomably complex. However, I wouldn’t classify them as intelligent, yet. In any case it was an interesting and fun conversation to have, but I will end it here and go to sleep. Thanks for having an actual formal disagreement and not just immediately going for insults. Have a great day/night.
The term AI is a lot more broad than you think.
That is precisely what I dislike. It’s kinda like calling those crappy scooter thingies “hoverboards”. It’s just a marketing term. I simply oppose the use of “AI” for the weak kinds of AI we have right now and I’d prefer “AI” to only refer to strong AI. Though that is of course not within my power to force upon people and most people seem to not care one bit, so eh 🤷🏼♂️
Is that Summer from Rick and Morty?
AI: “Keep Summer safe”
So, super-informed OP, tell me how they work. Technically, not in CEO press-release speak. Explain the theory.
I’m not OP, and frankly I don’t really disagree with the characterization of ChatGPT as “fancy autocomplete”. But…
I’m still in the process of reading this cover-to-cover, but Chapter 12.2 of Deep Learning: Foundations and Concepts by Bishop and Bishop explains how natural language transformers work, and then has a short section about LLMs. All of this is in the context of a detailed explanation of the fundamentals of deep learning. The book cites the original papers from which it is derived, most of which are on ArXiv. There’s a nice copy on Library Genesis. It requires some multi-variable probability and statistics, and an assload of linear algebra, reviews of which are included.
So obviously when the CEO explains their product they’re going to say anything to make the public accept it. Therefore, their word should not be trusted. However, I think that when AI researchers talk simply about their work, they’re trying to shield people from the mathematical details. Fact of the matter is that behind even a basic AI is a shitload of complicated math.
At least from personal experience, people tend to get really aggressive when I try to explain math concepts to them. So they’re probably assuming based on their experience that you would be better served by some clumsy heuristic explanation.
IMO it is super important for tech-inclined people interested in making the world a better place to learn the fundamentals and limitations of machine learning (what we typically call “AI”) and bring their benefits to the common people. Clearly, these technologies are a boon for the wealthy and powerful, and like always, have been used to fuck over everyone else.
IMO, as it is, AI as a technology has inherent patterns that induce centralization of power, particularly with respect to the requirement of massive datasets, particularly for LLMs, and the requirement to understand mathematical fundamentals that only the wealthy can afford to go to school long enough to learn. However, I still think that we can leverage AI technologies for the common good, particularly by developing open-source alternatives, encouraging the use of open and ethically sourced datasets, and distributing the computing load so that people who can’t afford a fancy TPU can still use AI somehow.
I wrote all this because I think that people dismiss AI because it is “needlessly” complex and therefore bullshit. In my view, it is necessarily complex because of the transformative potential it has. If and only if you can spare the time, then I encourage you to learn about machine learning, particularly deep learning and LLMs.
That’s my point. OP doesn’t know the maths, has probably never implemented any sort of ML, and is smugly confident that people pointing out the flaws in a system generating one token at a time are just parroting some line.
These tools are excellent at manipulating text where the user controls both input and output (factoring in the biases they have; I wouldn’t recommend trying to use one for internal communications in a multinational corporation, for example, as they’ll clobber non-Euro-derived culture).
Help me summarise my report, draft an abstract for my paper, remove jargon from my email, rewrite my email in the form of a numbered question list, analyse my tone here, write 5 similar versions of this action scene I drafted to help me refine it. All excellent.
Teach me something I don’t know (e.g. summarise an article, answer a question)? Disaster!
They can summarize articles fairly well
No, they can summarise articles very convincingly! Big difference.
They have no model of what’s important, or truth. Most of the time they probably do ok but unless you go read the article you’ll never know if they left out something critical, hallucinated details, or inverted the truth or falsity of something.
That’s the problem, they’re not an intern they don’t have a human mind. They recognise patterns in articles and patterns in summaries, they non deterministically adjust the patterns in the article towards the patterns in summaries of articles. Do you see the problem? They produce stuff that looks very much like an article summary but do not summarise, there is no intent, no guarantee of truth, in fact no concern for truth at all except what incidentally falls out of the statistical probability wells.
That’s a good way of explaining it. I suppose you’re using a stricter definition of summary than I was.
I think it’s really important to keep in mind the separation between doing a task and producing something which looks like the output of a task when talking about these things. The reason being that their output is tremendously convincing regardless of its accuracy, and given that writing text is something we only see human minds do, it’s easy to ascribe an intent behind the model’s output that we have no reason to believe is there.
Amazingly, it turns out that merely producing something which looks like the output of a task often accidentally accomplishes the task along the way. I have no idea why merely predicting the next plausible word can mean the model emits something similar to what I would write if I tried to summarise an article! That’s fascinating! But because it isn’t actually setting out to do that, there’s no guarantee it did, and if I don’t check, the output will be indistinguishable to me, because that’s what these models are built to do above all else.
So I think that’s why we need to keep them in closed loops with person -> model -> person, and explaining why, and intuiting whether a particular application is potentially dangerous or not, is hard if we don’t maintain a clear separation between the different processes driving human vs. LLM text output.
You are so extremely outdated in your understanding, for someone who attacks others for not implementing their own LLM.
They are so far beyond the point you are discussing atm. Look at AutoGen and MemGPT approaches, the way agent networks can solve and develop way beyond the point we were at years ago.
It really does not matter if you implement your own llm
Then stay out of the loop for half a year
It turned out that it’s quite useless to debate the parrot catchphrase, because all intelligence is parroting
It’s just not useful to pretend they only “guess” what a summary of an article is
They don’t. It’s not how they work and you should know that if you made one
Fact of the matter is that behind even a basic AI is a shitload of complicated math.
Depending on how simple something can be and still count as AI, the math is surprisingly simple compared to what an average person might expect. The theory behind it took a good amount of effort to develop, but to make something like a basic image categorizer (e.g. optical character recognition) you really just need some matrix multiplication and calculating derivatives: non-math-major college math type stuff.
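As a rough sketch of what “matrix multiplication and derivatives” buys you: below is a one-neuron classifier trained with a dot product and a hand-derived gradient, in plain Python. The data and labels are made up for illustration; a real image categorizer would use the same ingredients, just with bigger matrices.

```python
import math

# Toy data: four 2-feature points with made-up binary labels.
X = [[0.0, 0.2], [0.1, 0.4], [0.9, 0.8], [1.0, 0.6]]
y = [0, 0, 1, 1]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.5        # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(2000):
    for xi, yi in zip(X, y):
        # The "matrix multiplication": here just a dot product w·x + b.
        z = sum(wj * xj for wj, xj in zip(w, xi)) + b
        p = sigmoid(z)
        # The "derivative": for cross-entropy loss, d(loss)/dz = p - y.
        g = p - yi
        w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
        b -= lr * g

pred = [int(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) > 0.5)
        for xi in X]
print(pred)  # matches y after training
```

That really is most of the core math: multiply, take a derivative, nudge the weights, repeat.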
Come on… not being aware of where the bar sits for most people isn’t impressive. No, it’s not complex math, but you’re debating people who read only headlines and then fill in the rest with their imagination.
you really just need some matrix multiplication and calculating derivatives: non-math-major college math type stuff.
Well, sure, you don’t need a math degree for that, but most people really would need to put some time into those topics. That is, that kind of math is complex enough to constitute a barrier to entry into the field, particularly for people with no free time to self-study or money for school.
Said differently: matrix math and basic calculus are hard, just not for you and me.
Point taken
Keep seething, OpenAI’s LLMs will never achieve AGI that will replace people
Buddy, nobody ever said it would
Keep seething
Keep projecting
Next you’ll tell me that the enemies I face in video games aren’t real AI either!
That was never the goal… You might as well say that a bowling ball will never be effectively used to play golf.
I agree, but it’s so annoying when you work in IT and your non-IT boss thinks AI is the solution to every problem.
At my previous work I had to explain to my boss at least once a month why we can’t have AI diagnosing patients (at a dental clinic) or reading scans or proposing dental plans… It was maddening.
I find that these LLMs are great tools for a professional. So no, you still need the professional, but it’s handy if an AI can say “please check these places.” A tool, not a replacement.
That was never the goal…
Most CEOs seem to not have got the memo…
Unfortunately the majority of people are idiots who do just this in real life, parroting populist ideology without understanding anything more than the proper catchphrase du jour. And there are many employed professionals who are paid to read a script, or output mundane marketing content, or any old “content”. And for that, LLMs are great.
It’s the elevator operator of technology as applied to creative writers. Instead of “hey intern, write the next article about 25 things these idiots need to buy, and make sure 90% of them are from our sponsors”, it goes to AI. The writer was never going to purchase a few different products in each category, blindly test them, and write a real article. They are just shilling crap they are paid to shill, making it look “organic” because many humans are too credulous to realize it’s one giant paid-for ad.
Reminds me of this meme I saw somewhere around here the other week
I love that this is clearly from a computer science course.
Well, college is already dry enough as it is, you gotta appreciate it when your instructor has a sense of humor.
It’s from when we used genetic algorithms
You’ve just described most people…
P-Zombies, all of them. I happen to be the only one to actually exist. What are the odds, right? But it’s true.
It figures you’d say that, it’s probably your algorithm trying to mess with my mind!
Knowing that LLMs are just “parroting” is one of the first steps to implementing them in safe, effective ways where they can actually provide value.
The next step is to understand much more and not get stuck on the most popular semantic trap
Then you can begin your journey man
There are so, so many LLM chains that do way more than parrot. It’s just the latest popular talking point.
I think a better way to view it is as a search engine that works at the word level of granularity. When library indexing systems were invented they allowed us to look up knowledge at the book level. Search engines allowed lookups at the document level. LLMs allow lookups at the word level, meaning all previously transcribed human knowledge can be synthesized into a response. That’s huge, and where it becomes extra huge is that it can also pull on programming knowledge, allowing it to metaprogram and perform complex tasks accurately. You can also hook them up with external APIs so they can do more tasks. What we have is basically a program that can write itself based on the entire corpus of human knowledge, and that will have a tremendous impact.
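The “hook them up with external APIs” part can be sketched as a simple loop. Everything here is a stand-in: `fake_llm` hard-codes what a real model would generate, and the `TOOLS` registry is the kind of thing agent frameworks build around the model, not a real library API.

```python
# Hypothetical sketch of an LLM-plus-tools loop; all names are made up.
def fake_llm(prompt):
    # A real model would generate this text; we hard-code it here.
    if "add 2 and 3" in prompt:
        return "TOOL:add:2,3"
    return f"FINAL:{prompt}"

TOOLS = {"add": lambda a, b: a + b}

def run_chain(prompt):
    reply = fake_llm(prompt)
    if reply.startswith("TOOL:"):
        # Parse the requested tool call, run it, and feed the
        # result back to the model for a final answer.
        _, name, args = reply.split(":")
        result = TOOLS[name](*[int(x) for x in args.split(",")])
        reply = fake_llm(f"the answer is {result}")
    return reply.removeprefix("FINAL:")

print(run_chain("add 2 and 3"))  # -> "the answer is 5"
```

Real agent stacks add retries, multiple tools, and structured output formats, but the model-proposes / program-executes loop is the same shape.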
LLMs definitely provide value; it’s just debatable whether they’re real AI or not. I believe they’re going to be shoved into a round hole regardless.
I just love this meme-template… 😅