Last week, Nvidia revealed another massive increa...
Nah, but once the "put AI on everything" bubble bursts, companies will have a sour taste in their mouths and won't be as interested in investing in it.
I believe we'll get a lot of good improvements out of it, but in people's minds AI will be that weird thing that never worked quite right. It'll be another meme like Cortana on Windows, so it won't drive stock prices at all unless you're doing something really cutting edge.
How has it helped you personally in everyday life?
And if it’s doing some of your job with prompts that anybody could write, should you be paid less, or should you be replaced by someone juggling several positions?
Your question sounds like a trap but I found a bunch of uses for it.
Rewriting emails
Learning quickly how to get popular business software to do stuff
Wherever I used to use a search engine
Set up study sessions on a topic I knew very little about: I scan the text, read it, give it to the AI/LLM, discuss the text, have it quiz me, then move to the next page.
Used it at a poorly documented art collection to track down pieces.
Basically everything I know about baking. If you are curious my posts document the last 7 months or so of my progress.
Built a software driver (a task I hate) almost completely by giving it the documentation
Set it up so it can make practice tests for my daughter's schoolwork (a rough sketch of that step is below, after this list)
Explored a wide range of topics
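For what it's worth, the "have it quiz me" and practice-test steps above are easy to script. Here is a minimal sketch, assuming the OpenAI Python SDK, an API key in the environment, and a placeholder model name; any chat-style LLM API would work the same way:

```python
# Rough sketch of the "quiz me on this page" / practice-test step.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY
# in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def make_practice_quiz(passage: str, num_questions: int = 5) -> str:
    """Ask the model to turn one page of text into practice questions."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder, use whatever model you have access to
        messages=[
            {"role": "system",
             "content": "You write short practice quizzes for students."},
            {"role": "user",
             "content": f"Write {num_questions} quiz questions with answers, "
                        f"covering only this passage:\n\n{passage}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # One scanned page at a time, matching the page-by-page loop described above.
    with open("chapter_page.txt", encoding="utf-8") as f:
        print(make_practice_quiz(f.read()))
```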
Now go ahead and point out that I could have done all this myself with just Google, the way we did back in the day. That's the thing about this stuff. You can always make an argument that some new thing is bad by pointing out it is solving problems that were already solved or solving problems no one cares about. Whenever I get yelled at, or hear people complain about opposite things, I know that they just want to be angry and they have no argument. It's just rage-fueled throwing things at the wall to see what sticks.
You can always make an argument that some new thing is bad by pointing out it is solving problems that were already solved or solving problems no one cares about.
That's not the issue. I'm not a Luddite. The issue is that you can't rely on its answers. The accuracy varies wildly, and if you trust it implicitly there's no way of telling what you end up with. The human learning process normally involves comparing information to previous information, some process of vetting, during which your brain "muscles" are exercised so they become better at it all the time. It's like being fed in bed and never getting out to do anything by yourself, and to top it off you don't even know if you're being fed correct information.
Cough… Wikipedia…cough. You remember being told how Wikipedia wasn’t accurate and the only true sources were books made by private companies that no one could correct?
The human learning process normally involves comparing information to previous information, some process of vetting, during which your brain "muscles" are exercised so they become better at it all the time.
Argument from weakness. Classic Luddite move. I am old enough to remember the fears that internet search engines would do this.
In any case, no one is forcing you to use it. I am sure if you called up Britannica and told them to send you a set, they would be happy to.
I'm using LLMs to parse and organize information in my file directory: turning bank receipts into JSON files, automatically renaming downloaded movies into a more legible format I prefer, and summarizing clickbaity YouTube videos. I use Copilot in VS Code to code much faster, and ChatGPT all the time to discover new libraries and cut quickly through boilerplate. I also have a personal assistant that has access to a lot of metrics about my life (meditation streak, when I exercise, the status of my system, etc.) and helps me make decisions… (a rough sketch of the receipt-to-JSON step is at the end of this comment)
I don’t know about you but I feel like I’m living in an age of wonder
I'm not sure what to say about the prompts. I feel like I'm integrating AI into my systems to automate mundane stuff and oversee more information, and I think one should be paid for the work and value produced.
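In case anyone wants to try the receipt part, here is a minimal sketch of turning one receipt's text into a JSON file. It assumes the OpenAI Python SDK, an API key in the environment, a placeholder model name, and made-up field names; swap in whatever model and fields you actually use:

```python
# Rough sketch of the "bank receipt -> JSON file" step mentioned above.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
# the model name and the output fields are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI()

def receipt_to_json(receipt_text: str) -> dict:
    """Extract a few fields from raw receipt text into a dict."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder, use whatever model you have access to
        response_format={"type": "json_object"},  # ask for machine-readable output
        messages=[
            {"role": "system",
             "content": "Extract data from receipts. Reply with JSON only, "
                        "using the keys: date, merchant, total, currency."},
            {"role": "user", "content": receipt_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    with open("receipt_scan.txt", encoding="utf-8") as f:
        parsed = receipt_to_json(f.read())
    with open("receipt_scan.json", "w", encoding="utf-8") as f:
        json.dump(parsed, f, indent=2)
```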
Improving but to what end? If it’s not something that the public will ultimately perceive as useful it will tank no matter how hard it’s pushed.
I saw a quote that went something like, “I want AI to do my laundry so I can have time for my art, not to do art while I keep doing laundry”.
Art vs laundry is an extreme example but the gist of it is that it should focus on practical applications of the mundane sort. It’s interesting that it can make passable art but ultimately it’s mediocre and meaningless.
We called the dotcom bubble a bubble, but that didn't mean the web went away. It just meant that companies randomly tried stuff and had money thrown at them because the investors had no idea either.
Same here: it's an AI bubble because it's being attempted at random, without any particular vision, with lots and lots of money, not because the technology is fundamentally a bust.
Some of it is a fad that will go away. Like you indicated, we’re in the “Marketing throws everything at the wall” phase. Soon we’ll be in the “see what sticks” phase. That stuff will hang around and improve, but until we get there we get AI in all conceivable forms whether they’re a worthwhile use of technology or not.
it’s a fad in terms of the hype and the superstition.
it won’t go away. it will just become boring and mostly a business to business concern that is invisible to the end consumer. just like every other big fad of the past 20 years. ‘big data’, ‘crypto’, etc.
5 years ago everyone was suddenly a ‘data scientist’. where are they now? yeah… exactly.
Yeah, right now the loudest voices are either "AI is ready to do everything right now or in a few months" or "this AI thing is worthless garbage" (both in practice refer to LLMs specifically, even though they just say "AI"; the rest of the AI field is pretty "boringly" accepted right now). There's not a whole lot of attention given to more nuanced takes on what it realistically can and will be able to do or not do, with proponents glossing over the limitations and detractors pretending that every single use of an LLM is telling people to eat rocks and glue.
Yeah, I'm super salty about the hype, because if I had to pick one side or the other I'd be on team "AI is worthless". But that's just because I'd rather try convincing a bunch of skeptics that, when used wisely, AI/ML can be super useful than try to talk some sense into the AI fanatics. It's a shame though, because I feel like the longer the bubble takes to pop, the more damage actual AI research will take.
AI isn't the bubble; that will keep on improving, although probably not at this rate.
The hype bubble is companies adding AI to their products where it offers very little, if any, added value, which is incredibly tedious.
The latter bubble can burst, and we’ll all be better for it. But generative AI isn’t going anywhere.
And good luck competing with the tech giants
This
AI is actually providing value and advancing at a huge rate; I don't know how people can dismiss that so easily.
That's a good way to put it in perspective, yeah. The number of people who think AI is just a fad that will go away is staggering.