New Instance, Same Grenfur

  • 0 Posts
  • 4 Comments
Joined 2 months ago
Cake day: June 5th, 2025

  • No, you literally can see the code; that’s why it’s open source. YOU may not look at it, but people do. Random people, complete strangers, unpaid and un-vested in the project. The alternative is a company that pays people to say “Yeah, it’s totally safe.” That conflict of interest is problematic. Also, depending on what it’s written in, yes, I do sometimes take the time. Perhaps not for every single thing I run, but any time I run across a niche project, I read the code first. To claim that someone can’t understand it is wild. That’s a stranger on the internet; your knowledge of their expertise is 0.

    In practice, 1,000 random people on the internet with no reason to “trust you, bro” being able to audit every change you make to your code is far more trustworthy than a handful of people paid by the company they represent. What’s worse is that if Microsoft were to have a breach, then maybe 10 people on the planet would know about it. 10 people with jobs, mortgages, and families tied to that knowledge. They won’t say shit, because they can’t lose that paycheck. Compare that to, say, the XZ backdoor, where the source is available and the issue gets announced, so people know exactly who, what, and where in order to resolve it.


  • Most of the options mentioned in this thread won’t act independently of your input. You’d need some kind of automation software. n8n has a community edition that you can host locally in a Docker container. You can link it to an LLM API, emails, Excel sheets, etc. As for doing “online jobs,” I’m not sure what that means, but at the point where you’re trying to get a single AI to interact with the web and make choices on its own, you’re basically left coding it all yourself in Python (rough sketch below).
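
    To give a feel for the “code it yourself” route, here’s a minimal sketch of a script that asks a locally hosted LLM to pick one of a fixed set of actions and then runs it. It assumes Ollama serving on its default port (11434); the model name, the allowed actions, and the example item are all placeholders, not anything from the thread.

    ```python
    import requests

    def ask_llm(prompt: str, model: str = "llama3") -> str:
        """Query a locally running Ollama server (default port 11434)."""
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        r.raise_for_status()
        return r.json()["response"]

    # Placeholder "actions" the script is allowed to take on the LLM's behalf.
    ACTIONS = {
        "archive": lambda item: print(f"archiving: {item}"),
        "reply":   lambda item: print(f"drafting reply for: {item}"),
        "ignore":  lambda item: print(f"skipping: {item}"),
    }

    def handle(item: str) -> None:
        """Ask the model to pick one allowed action, then dispatch it."""
        choice = ask_llm(
            f"Pick exactly one of {list(ACTIONS)} for this item and answer "
            f"with that single word only:\n\n{item}"
        ).strip().lower()
        ACTIONS.get(choice, ACTIONS["ignore"])(item)

    if __name__ == "__main__":
        handle("Newsletter: 50% off hosting this week only")
    ```

    Anything heavier than that (logins, browsing, multi-step jobs) is exactly the glue work n8n is meant to save you from writing.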



  • Not entirely sure what you mean by “Limitation Free”, but here goes.

    First thing you need is a way to actually run an LLM. I’ve used both Ollama and Koboldcpp.

    Ollama is really easy to set up and has its own library of models to pull from. It’s a CLI interface, but if all you want is a locally hosted AI to ask silly questions, that’s the one. Something of note for any locally hosted LLM: they’re all dated, so none of them can tell you about things like local events. Their data is current as of when the model was trained, generally a year or longer ago. If you want up-to-date info, you could use something like DDGS and write a Python script that calls Ollama (rough sketch below). At any rate.
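
    A rough sketch of that idea, assuming DDGS means the duckduckgo_search Python package (its DDGS class does the web search) and Ollama running locally with a model already pulled; the model name and the query are placeholders.

    ```python
    import requests
    from duckduckgo_search import DDGS  # pip install duckduckgo-search

    def search_web(query: str, n: int = 5) -> str:
        """Grab a few DuckDuckGo results and flatten them into plain text."""
        with DDGS() as ddgs:
            hits = ddgs.text(query, max_results=n)
        return "\n".join(f"- {h['title']}: {h['body']}" for h in hits)

    def ask_ollama(prompt: str, model: str = "llama3") -> str:
        """Send a prompt to the local Ollama server and return its reply."""
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    if __name__ == "__main__":
        context = search_web("local events this weekend")
        print(ask_ollama(f"Summarize these search results:\n{context}"))
    ```

    That’s the whole trick: the model itself stays frozen in time, you just hand it fresh text at prompt time.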

    Koboldcpp. If your “limitation free” means spicier roleplay, this is the better option. It’s a bit more work to get going, but it has tons of options to let you tweak how your models run. You can find .gguf models on Hugging Face, load ’em up, and off you go. Kobold’s UI is kinda mid, and though it’s more granular than Ollama’s, if you’re really looking to dive into some kind of roleplay or fantasy-trope-laden adventure, SillyTavern has a great UI for that and makes managing character cards easier. Note that ST is just a front end and still needs Koboldcpp (or another back end) running for it to work (see the sketch below for what that split means in practice).
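
    To make the front end / back end split concrete, here’s a minimal sketch of talking to a running Koboldcpp instance directly over HTTP, which is roughly what SillyTavern does under the hood. The port (5001) and the /api/v1/generate endpoint are Koboldcpp’s defaults as far as I know, and the prompt is a placeholder.

    ```python
    import requests

    KOBOLD_URL = "http://localhost:5001/api/v1/generate"  # Koboldcpp's default local API

    def generate(prompt: str, max_length: int = 200) -> str:
        """Ask a running Koboldcpp back end for a completion, no front end required."""
        resp = requests.post(
            KOBOLD_URL,
            json={"prompt": prompt, "max_length": max_length},
            timeout=300,
        )
        resp.raise_for_status()
        return resp.json()["results"][0]["text"]

    if __name__ == "__main__":
        print(generate("You stand at the mouth of a torch-lit cave. You"))
    ```

    SillyTavern is just a nicer way of building those prompts (character cards, chat history) before they hit the same back end.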

    Models. Your “processing power” is almost irrelevant for LLMs; it’s your GPU’s VRAM that matters. A general rule of thumb is to pick a model whose download size is 2-4GB smaller than your available VRAM. If you’ve got 24GB of VRAM, you can probably run a model that’s a 22GB download (roughly a 32B model, depending on the quant).
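
    Purely as a quick way to apply that rule of thumb (the 2-4GB headroom is the rough figure from above, not a hard limit), something like:

    ```python
    def fits_in_vram(model_size_gb: float, vram_gb: float, headroom_gb: float = 2.0) -> bool:
        """Rule of thumb: leave a few GB of VRAM free for context and overhead."""
        return model_size_gb <= vram_gb - headroom_gb

    # 22GB download on a 24GB card: fits with 2GB headroom, but not with 4GB.
    print(fits_in_vram(22, 24))                 # True
    print(fits_in_vram(22, 24, headroom_gb=4))  # False
    ```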

    Final notes: I could have misunderstood and this whole question was about image gen, hah. InvokeAI is good for that. Models can be found on CivitAI (careful, it’s… wild). I’ve also heard good things about ComfyUI but have never used it.

    GL out there.