Hey everyone, I’m interested in using a local LLM (large language model) on a Linux system to create a long story, but I’m not sure where to start. Does anyone have experience with this, or know of any resources that could help me get started? I’d love to hear your tips and suggestions. Thanks!

  • Dojan@lemmy.world · 9 months ago

    The open-source LLMs are really capable; I think the method used to feed in the plot might be the more important part of making this work.
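
    One way to read that: generate the story beat by beat and carry a rolling summary forward, so the model never has to hold the whole book in its context window. A minimal sketch in Python, where generate() is a hypothetical stand-in for whatever local backend you end up using (Ollama, llama.cpp, GPT4All, …):

    ```python
    # Sketch: feed the plot one beat at a time, carrying a rolling summary
    # so the story stays coherent beyond the model's context window.

    def generate(prompt: str) -> str:
        """Stand-in for a call to your local LLM backend."""
        raise NotImplementedError("wire this up to Ollama/llama.cpp/GPT4All")

    outline = [
        "Chapter 1: the protagonist finds the map",
        "Chapter 2: the journey begins",
    ]

    summary = ""
    chapters = []
    for beat in outline:
        chapter = generate(
            f"Story so far (summary): {summary}\n"
            f"Write the next chapter. Plot beat: {beat}"
        )
        chapters.append(chapter)
        summary = generate(f"Summarize in three sentences:\n{summary}\n{chapter}")

    print("\n\n".join(chapters))
    ```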

  • kby@feddit.de · 9 months ago (edited)

    You can try setting up Ollama on your RPi, then use a highly quantized variant of the Mistral model (or quantize it yourself with llama.cpp’s GGUF tooling). You can go very heavy on the quantization (2-bit), which will increase the error rate, but if you only plan to use the generated text as a starting point, it might still be useful. Also see: https://github.com/ollama/ollama/blob/main/docs/import.md#importing-pytorch--safetensors

    Here are some pre-quantized variants of Mistral 7B: https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF

    (All the tools and models I have mentioned in this comment are free and open source, and beyond that require no internet connection during operation.)
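
    Once a model is pulled or imported, Ollama exposes a local REST API (default port 11434) that you can script against. A minimal sketch, assuming the model was registered under the name “mistral”:

    ```python
    # Sketch: ask a locally running Ollama server for text over its REST API.
    # Assumes `ollama pull mistral` (or a GGUF import) has been done already.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "mistral",  # or the name of your imported quantized variant
            "prompt": "Write the opening paragraph of a mystery novel.",
            "stream": False,     # one JSON object instead of a token stream
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```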

  • Ziggurat@sh.itjust.works · 9 months ago

    Have you tried GPT4All (https://gpt4all.io/index.html)? It runs on the CPU, so it’s a bit slow, but it’s a plug-and-play, easy-to-use way to run various LLMs locally. That said, LLMs are huge, and they perform better on a GPU, provided the GPU is big enough. And that’s the trap: how much do you want to spend on a GPU?
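
    GPT4All also ships a Python binding (pip install gpt4all) if you’d rather script the generation than click through the GUI. A rough sketch; the model filename here is just an example, substitute whichever GGUF model you actually downloaded:

    ```python
    # Sketch: CPU-only text generation through the gpt4all Python package.
    from gpt4all import GPT4All

    # Example filename; use any GGUF model from the GPT4All catalog.
    model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")

    with model.chat_session():
        print(model.generate("Outline a three-act fantasy story.", max_tokens=400))
    ```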

    • PeterPoopshit@lemmy.world · 9 months ago (edited)

      If you get just the right GGUF model (read the description when you download them to pick the right K-quantization, or whatever it’s called) and actually use multithreading (llama.cpp supports multithreading, so in theory GPT4All should too), then it’s reasonably fast. I’ve achieved roughly half the speed of ChatGPT on just an 8-core AMD FX with DDR3 RAM.
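
      For reference, here is roughly what the threading knob looks like through the llama-cpp-python binding; the model path and thread count are placeholders for your own setup:

      ```python
      # Sketch: set llama.cpp's thread count via the llama-cpp-python binding.
      from llama_cpp import Llama

      llm = Llama(
          model_path="./mistral-7b-v0.1.Q4_K_M.gguf",  # a K-quant GGUF file
          n_threads=8,  # match your physical core count
      )
      out = llm("Write a short scene set in a lighthouse.", max_tokens=256)
      print(out["choices"][0]["text"])
      ```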

    • kindenough@kbin.social · 9 months ago

      On GPU it is okay: a GTX 1080 with a Ryzen 3700X.

      It has just written a 24-page tourist info booklet about the town I live in, and a fair amount of it is inaccurate or outdated on the places to go. Fun and impressive anyway, and it took only a few minutes.

  • Thavron@lemmy.ca · 9 months ago

    Are you looking to make an easy buck by generating novels and self publishing them on Amazon?