Hey everyone, I’m interested in using a local LLM (Large Language Model) on a Linux system to create a long story, but I’m not sure where to start. Does anyone have experience with this or know of any resources that could help me get started? I’d love to hear your tips and suggestions. Thanks!
Open-source LLMs are really capable; I think the method you use to feed in the plot may be the more important part of making this work.
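One common way to "feed the plot" is to go beat by beat, carrying a rolling summary so the model keeps coherence without an enormous context. A minimal sketch of that loop, where `generate` and `summarize` are placeholder callables for whatever local backend you end up using (Ollama's API, llama-cpp-python, GPT4All, etc.):

```python
# Sketch: drive long-story generation from a plot outline, one beat at a
# time, carrying a running summary of the story so far. The `generate`
# and `summarize` functions are stand-ins for your actual model calls.

def build_prompt(summary: str, beat: str) -> str:
    """Combine the story-so-far summary with the next plot beat."""
    return (
        "Story so far (summary):\n" + summary + "\n\n"
        "Write the next scene covering this plot beat:\n" + beat + "\n"
    )

def write_story(beats, generate, summarize):
    """Generate one scene per plot beat, updating the summary as we go."""
    summary = "(nothing yet)"
    scenes = []
    for beat in beats:
        scene = generate(build_prompt(summary, beat))
        scenes.append(scene)
        summary = summarize(summary, scene)  # keep the context small
    return "\n\n".join(scenes)
```

The summary step is the important part: it is what lets a small-context local model stay consistent across a story much longer than its context window.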
You can try setting up Ollama on your RPi, then use a highly quantized variant of the Mistral model (or quantize it yourself with llama.cpp's GGUF tooling). Very heavy quantization (2-bit) increases the error rate, but if you only plan to use the generated text as a starting point, it might be useful nevertheless. Also see: https://github.com/ollama/ollama/blob/main/docs/import.md#importing-pytorch--safetensors
Here are some pre-quantized variants of Mistral 7B: https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF
(all the tools and models I have mentioned in my comment are free and open-source, and beyond that, require no uplink during operation)
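For reference, importing a downloaded (or self-quantized) GGUF file into Ollama only needs a one-line Modelfile along these lines (the filename here matches one of the Q2_K variants from the TheBloke repo above; substitute whichever file you actually downloaded):

```
# Modelfile -- point Ollama at a local GGUF file
FROM ./mistral-7b-v0.1.Q2_K.gguf
```

Then `ollama create mistral-q2 -f Modelfile` registers it under an arbitrary name of your choosing, and `ollama run mistral-q2` starts chatting with it.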
I found this blog post where the author tries to use ChatGPT to generate a theatre script/narrative. It’s based on the paper “Co-Writing Screenplays and Theatre Scripts with Language Models: An Evaluation by Industry Professionals”. In the blog post they outline their narrative-generation procedure in this chart:
I also found this GitHub repo with links to more resources on this topic.
Have you tried GPT4All https://gpt4all.io/index.html ? It runs on CPU so it’s a bit slow, but it’s a plug-and-play, easy-to-use way to run various LLMs locally. That said, LLMs are huge and perform much better on a GPU, provided the GPU is big enough. That’s the trap: how much do you want to spend on a GPU?
If you get just the right GGUF model (read the description when you download them to pick the right K-quant variant) and actually use multithreading (llama.cpp supports multithreading, so in theory GPT4All should too), then it’s reasonably fast. I’ve achieved roughly half the speed of ChatGPT just on an 8-core AMD FX with DDR3 RAM.
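On the multithreading point: llama.cpp takes an explicit thread count (the `-t` flag, or `n_threads` in llama-cpp-python), and a common rule of thumb is one thread per physical core rather than per logical core, since token generation is memory-bound and SMT siblings tend not to help. A tiny helper sketching that rule, assuming a 2-way SMT CPU (the function name is mine, not part of any library):

```python
# Sketch: pick a thread count for llama.cpp-style CPU inference.
# Rule of thumb: one thread per *physical* core. Python's os.cpu_count()
# reports logical cores, so on a 2-way SMT machine we halve it.

def pick_n_threads(logical_cores: int, smt: bool = True) -> int:
    """Return roughly one thread per physical core, at least 1."""
    physical = logical_cores // 2 if smt else logical_cores
    return max(1, physical)
```

You would pass the result as `n_threads=` when constructing the model; benchmarking a few values around it is still worthwhile, since the sweet spot varies by CPU.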
On a GPU it’s okay: GTX 1080 with an R5 3700X here.
It just wrote a 24-page tourist-info booklet about the town I live in, and a bunch of it is very inaccurate or outdated on the places to go. Fun and impressive anyway, and it only took a few minutes.
Are you looking to make an easy buck by generating novels and self publishing them on Amazon?