For context, Alibaba is behind Qwen 2.5, the go-to series of LLMs for desktop/self-hosting use. Most of the series is Apache licensed, free to use, and they're what DeepSeek based their smaller distillations on. Their 32B/72B models, especially finetunes of them, can run circles around OpenAI's cheaper models, which you'd need a $100,000+ fire-breathing server to run… if OpenAI actually released anything for public use.
I have Qwen 2.5 Coder or another derivative loaded on my desktop pretty much every day.
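For a concrete picture, here's a minimal sketch of loading a quantized Qwen 2.5 Coder model via the llama-cpp-python bindings (one of many ways to run it locally; the GGUF filename is a hypothetical example of a downloaded quantization):

```python
# Minimal sketch: chatting with a local quantized Qwen 2.5 Coder model
# using llama-cpp-python. The .gguf filename is hypothetical -- point it
# at whatever quantization you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-coder-7b-instruct-q4_k_m.gguf",  # hypothetical local file
    n_ctx=8192,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python one-liner to reverse a string."}],
    max_tokens=256,
)
print(resp["choices"][0]["message"]["content"])
```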
So… yeah, if I were Apple, I would've picked Qwen/Alibaba too. At the moment, they're the undisputed king of models that would fit in an iDevice's memory pool, and they do it for a fraction of the cost/energy usage of the US companies.
How do you Apache license an LLM? Do they just treat the weights as code?
It’s software, so yeah, I suppose. See for yourself: https://huggingface.co/Qwen/QwQ-32B-Preview
Deepseek chose MIT: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
For whatever reason, the Chinese companies tend to go with very permissive licensing, while Cohere, Mistral, Meta (Llama), and some others add really weird commercial restrictions (though this trend may be reversing). IBM Granite is actually Apache 2.0 and way more “open” and documented than even the Chinese tech companies, but unfortunately their models are “small” (3B) and not very cutting edge.
Another note: Hugging Face Transformers (also Apache 2.0, and from a US company) is the reference code for running open models, but there are many other implementations to choose from (exllama, mlc-llm, vllm, llama.cpp, internlm, lorax, Text Generation Inference, Apple MLX, just to name a few).
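For reference, a minimal sketch of that Transformers path, using the QwQ repo linked above (the prompt and generation settings are just illustrative, and device_map="auto" needs the accelerate package installed):

```python
# A sketch of the "reference" Transformers route, using the QwQ repo
# linked above. A 32B model needs serious VRAM -- swap in a smaller
# Qwen checkpoint if you're on typical desktop hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```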
DeepSeek R1 is currently the self-hosting model to use.
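If you want to pull the R1 distill linked above for local serving, a minimal sketch with huggingface_hub (the local_dir is just an example path):

```python
# Minimal sketch: fetching the R1 Qwen distill linked above for local
# serving with any of the runners mentioned. local_dir is an example path.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    local_dir="./DeepSeek-R1-Distill-Qwen-32B",
)
```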