Been using Perplexity AI quite a bit lately for random queries, like travel suggestions.
So I started wondering what random things people are using it for to help with daily tasks. Do you use it more than Google/etc?
Also, is anyone paying for Pro versions? Wondering whether it’s worth paying for Perplexity AI Pro or not.
I’ll typically only use it for language and coding problems.
Synonyms, word for xyz, how can I make this sentence more clear.
But if I can’t find anything on Google I’ll ask it other questions.
ChatGPT Plus and GitHub Copilot… but less every day. They just don’t keep up with current APIs and are often confused and unable to actually provide useful solutions.
I mostly use ChatGPT Plus as a Google replacement nowadays. And Copilot as a, sadly, mostly useless autocomplete.
- Proofread/rewrite emails and messages
- Recipes
- Find specs for computers, gadgets, cars etc.
- Compare products
- Troubleshoot software issues
- Find meaning of idioms
- Video game guide/walkthrough/reviews
- Summarise articles
- Find out if a website is legit (and ownership of the sites)
I don’t see any need for Pro versions. ChatGPT 4 is already available for free via Bing. I simply use multiple AI tools and compare the results. (Copilot / Gemini / Claude / Perplexity)
Replaced forums like Stack Overflow for me. Both could give me incorrect information, but only one doesn’t care how dumb my questions are.
My job pays for premium, and it’s been useful for clearing up certain issues I’ve had with tutorials for the current language I’m learning. In an IDE, Copilot can get a bit in the way and its suggestions aren’t as good as they once were, but I’ve got the settings down to where it’s a fancy spell check and synergises well with Vim motions to bang out some lines.
It’s only replaced the basic interactions I would have had on forums, minus waiting for responses or having a thread ignored.
Nothing. I’m a software developer, but don’t use any AI tools with any regularity. I think I only asked ChatGPT or similar something once about programming because the documentation was awful, but I do remember that as having been helpful.
The only thing that might be close, though not directly, is translation software (kanji be hard).
Well that’s the dirty little open secret, isn’t it? These “AI” programs are just beefier versions of the same kinds of translation, predictive text, “smart” image editing, and chatbot software we’ve had for a while. Significantly more sophisticated and more powerful, but not exactly new. That’s why “AI” is suddenly appearing everywhere: in many cases, a less sophisticated predecessor of it was already there, they just didn’t use the marketing language OpenAI popularized.
I legit had a spelling and grammar checking add-on that rebranded itself to “AI”, and it did absolutely nothing differently from what it already did.
And the whole point is that absolutely none of this is “AI” in any meaningful way. It’s like when that company tried to brand their new skateboard/segway things from a few years ago as “hoverboards”. You didn’t achieve the thing, you’re just reducing what the term means to make it apply to your new thing.
I stopped using Perplexity; I only used it briefly. ChatGPT? OpenAI specifically?
Lots of things.
To generate backstories for conversational AI “friend” characters, because sometimes they have to be long and include lots of info.
To summarize Reddit posts asking for advice. You know, sometimes ppl make them longer than they need to be. It summarizes them for me when I’m lazy.
Reframe verbiage
Just a few
I tried to use it to find me a decent phone under $500 and half of the listed options were $900+ so uhh… Not too useful.
Tbf, I think ChatGPT’s internet dump is a few years old. So maybe it recommends a banger of a phone from 2020 or so and the pricing data is now garbage.
iPhone 13 was listed, and while it’s a good phone, it’s uhh… not $500.
There’s now a privacy-respecting AI offering on DDG; use the !ai bang to get to it.
To answer your question, any “natural language” query of modest importance, where asking a question like “will there be any more movies in that series by this director?” is easier than checking the usual movie websites.
How does that work? You just type that before your query and it gives you ai answers?
Yup, there are many of them. I use DDG all the time, but this feature probably works in other search engines too - I just don’t know.
From any search bar configured to use DDG, just type !ai followed by your query
You can do this from the DDG website too of course. Other useful “bangs” include !w for Wikipedia, or !aw for the Arch Linux wiki, or even just !img for image search.
DDG is just using ChatGPT
I don’t like the idea of wasting energy on inefficient things so I don’t use “AI”.
A.I. use is directly responsible for carbon emissions from non-renewable electricity and for the consumption of millions of gallons of fresh water, and it indirectly boosts impacts from building and maintaining the power-hungry equipment on which A.I. runs.
As Use of A.I. Soars, So Does the Energy and Water It Requires
I’m using local models. Why pay somebody else or hand them my data?
- Sometimes you need to search for something and it’s impossible because of SEO, however you word it. An LLM won’t necessarily give you a useful answer, but it’ll at least take your query at face value, and usually tell you some context around your question that’ll make web search easier, should you decide to look further.
- Sometimes you need to troubleshoot something unobvious, and using a local LLM is the most straightforward option.
- Using an LLM in scripts adds a semantic layer to whatever you’re trying to automate: you can process a large number of small files in a way that’s hard to script, as it depends on what’s inside.
- Some put together an LLM, a speech-to-text model, a text-to-speech model and function calling to make an assistant that can do things you tell it without you touching your computer. Sounds like plenty of work to make it all play together, but I may try that later.
- Some use RAG to query large amounts of information. I think it’s a hopeless struggle, and the real solution is an architecture other than a variation of Transformer/SSM: it should address real-time learning, long-term memory and agency properly.
- Some use LLMs as editor-integrated coding assistants. Never tried anything like that yet (I do ask coding questions sometimes though), but I’m going to at some point. The 8B version of LLaMA 3 should be good and quick enough.
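The scripting point above can be sketched in a few lines. This is a hypothetical example, not from the thread: it assumes a local Ollama server on its default port and a model tagged `llama3`; the topic labels and helper names are my own illustration, so adjust them for your setup.

```python
# Hypothetical sketch: sorting small text files by topic with a local LLM.
# Assumes an Ollama server at localhost:11434 serving a model tagged "llama3".
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3"
TOPICS = ["invoice", "personal", "technical", "other"]

def build_prompt(text):
    """Ask for exactly one label so the reply is trivial to parse."""
    return (
        "Classify the following note as one of: "
        + ", ".join(TOPICS)
        + ". Reply with the label only.\n\n"
        + text
    )

def classify(text):
    """One blocking call to the local model; nothing leaves the machine."""
    payload = json.dumps({
        "model": MODEL,
        "prompt": build_prompt(text),
        "stream": False,  # ask for a single JSON object, not a token stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip().lower()
```

From there it’s a loop over your files, moving each into a folder named after the label - the “semantic layer” that plain shell scripting can’t give you.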
Got any links teaching how to run a self hosted RAG LLM?
Never ran RAG, so unfortunately no. But there are quite a few projects doing the necessary handling already - I’d expect them to have manuals.
Got any links to those please?
- https://github.com/neuml/txtai
- https://docs.llamaindex.ai/en/stable/
- https://docs.gpt4all.io/gpt4all_chat.html#localdocs-plugin-chat-with-your-data
- https://github.com/atisharma/llama_farm
- https://github.com/jonfairbanks/local-rag
- https://docs.useanything.com/
- https://python.langchain.com/docs/get_started/introduction
- https://docs.neum.ai/get-started/introduction
Thank you. The “jonfairbanks” github repo is exactly what I was looking for, because FUCK sending any of my data to an AI company using their APIs for them to ingest my information to sell off to others.
You are the best!
I’ve only used ChatGPT and it’s mostly good for language-related tasks. I use it for finding tip-of-my-tongue words or completing/paraphrasing sentences. Basically fancy autocorrect. It’s also good at debugging stuff sometimes when the language itself doesn’t give useful errors (looking at you, SQL). Other than that, any time I’ve asked for factual information it’s been wrong in some way or simply not helpful.
I’ve tried paid versions of ChatGPT, Claude, and Gemini. I am currently using Gemini, and it is working reasonably well for me.
I mostly use it to replace searches. I haven’t used Google in years, but mainly relied on DuckDuckGo until SEO made it less useful. My secondary use case is for programming. I tend to jump around to a lot of different languages and frameworks, and it’s hugely helpful to get sample code describing what I want to do when I don’t know the syntax.
Once in a great while, I will have it rewrite something for me. That is mostly for inspiration if I want to change the tone of something I wrote (then I’ll edit). I think that all of the LLMs suck at writing.
I don’t.
Yeah I’ve used it occasionally to goof around with and try to get silly answers. And I’ve occasionally used it when stuck on an idea to try to get something useful out of it…the latter wasn’t too successful.
Quite frankly I don’t at all understand how anyone could possibly be using this stuff daily. The average consumer doesn’t have a need imo.
Daily? Only speech-to-text.
I don’t use it for daily tasks. I’ve been tinkering around with local LLMs for recreation. Roleplay, being my dungeon master in a text adventure. Telling it to be my “waifu”. Or generating amateur short stories. At some time I’d like to practice my foreign language skills with it.
I haven’t had good success with tasks that rely on “correctness” or factual information. However sometimes I have it draft an email for me or come up with an argumentation for a text that I’m writing. That happens every other week, not daily. And I generously edit and restructure it afterwards or just incorporate some of the paragraphs into my final result.
D&D related things actually seems like a decent use case. For most other things I don’t understand how people find it useful enough to find use cases to do daily tasks with it.
Agree. I’ve tried some of the use-cases that other people mentioned here. Like summarization, “online” search, tech troubleshooting, recipes, … And all I’ve had were sub-par results and things that needed extensive fact-checking and reworking. So I can’t really relate to those experiences. I wouldn’t use AI as of now for tasks like that.
And this is how I ended up with fiction and roleplay. Seems to be better suited for that. And somehow AI can do small coding tasks. Like writing boiler-plate code and help with some of the more tedious tasks. At some point I need to feed another of my real-life problems to the current version of ChatGPT but I don’t think it’ll do it for me. And it can come up with nice ideas for stories. Unguided storywriting will get dull in my experience. I guess the roleplaying is nice, though.