- cross-posted to:
- technology@beehaw.org
- technology@lemmy.ml
cross-posted from: https://lemm.ee/post/55428692
I’m no expert, but with Musk likely attempting a hostile takeover of OpenAI, I’m down with this.
Eh, no, still hard no.
Just because one evil becomes a bigger evil doesn’t mean we should just use the other evil.
True that. It’s the lesser evil.
> True that. It’s the lesser evil.

That would probably even be the case if it wasn’t just a deal for the Chinese market.
Ohh wow… Will Sam Altman ask President Musk to sanction the Chinese terrorists?
Musk might do it himself, since it technically competes with xAI.
Grok is a laughing stock in LLM land, though; all those H100s he bought are completely wasted. And he straight up lies about their open source approach.
For context, Alibaba is behind Qwen 2.5, the go-to series of LLMs for desktop/self-hosting use. Most of the series is Apache licensed, free to use, and they’re what Deepseek based their smaller distillations on. Their 32B/72B models, especially finetunes of them, can run circles around the cheaper OpenAI models you’d need a $100,000+ fire-breathing server to run… if OpenAI actually released anything for public use.
I have Qwen 2.5 Coder or another derivative loaded on my desktop pretty much every day.
So… yeah, if I were Apple, I would’ve picked Qwen/Alibaba too. They’re currently the undisputed king of models that fit in an iDevice’s memory pool, and they do it for a fraction of the cost/energy usage of US companies.
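For anyone wondering what “loaded on my desktop” looks like in practice, here’s a minimal sketch using Hugging Face Transformers. The model id and generation settings are just examples; pick whatever size actually fits your VRAM.

```python
# Minimal sketch: running a small Qwen 2.5 model locally with Hugging Face Transformers.
# The repo id, dtype, and token budget are illustrative, not a recommendation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # Apache-2.0; swap for a larger/smaller variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # or torch.float16 on older GPUs
    device_map="auto",           # spread layers across available GPU/CPU memory
)

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Rough rule of thumb: a 7B model in bf16 wants around 16 GB of VRAM, while quantized builds of the same models fit in far less, which is the usual route on consumer hardware.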
Deepseek R1 is currently the self-hosting model to use.
How do you Apache license an LLM? Do they just treat the weights as code?
It’s software, so yeah, I suppose. See for yourself: https://huggingface.co/Qwen/QwQ-32B-Preview
Deepseek chose MIT: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
For whatever reason, the Chinese companies tend to go with very permissive licensing, while Cohere, Mistral, Meta (Llama), and some others add really weird commercial restrictions (though this trend may be reversing). IBM Granite is actually Apache 2.0 and way more “open” and documented than even the Chinese tech companies, but unfortunately their models are “small” (3B) and not very cutting edge.
Another note: Hugging Face Transformers (also Apache 2.0, and from a US company) is the reference code for running open models, but there are many other implementations to choose from (exllama, mlc-llm, vllm, llama.cpp, internlm, lorax, Text Generation Inference, Apple MLX, just to name a few).
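As a rough sketch of how interchangeable these stacks are, here’s the same kind of call through llama-cpp-python (the common Python binding for llama.cpp). The GGUF filename is a placeholder for whatever quant you’ve actually downloaded.

```python
# Rough sketch with llama-cpp-python, assuming you've already downloaded a GGUF quant
# of a Qwen 2.5 model; the model_path below is a hypothetical local file.
from llama_cpp import Llama

llm = Llama(
    model_path="./qwen2.5-coder-7b-instruct-q4_k_m.gguf",  # placeholder path
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU if it fits; 0 for CPU-only
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```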
A match made in Hell.
Apple’s outsourced just about every other aspect of its business to Chinese tech companies. I don’t see why this would be different.
Here’s a novel idea… how about no AI in a phone?
I didn’t need one in 1999. I didn’t need one in 2012. I didn’t need one in 2024.
I don’t need one now.
I will NEVER need AI built into my phone.
Since people don’t click through and read articles, it should be pointed out that this is specifically for the Chinese market, where Apple is trying to win back market share after falling behind Huawei and others.
Or they could just lower the prices
It’s not prices so much as the fact that Chinese services are huge for Chinese citizens, and not having them is a detriment.
The Bloomberg article was paywalled for me, and I didn’t see that in OP’s link, which is like 3 sentences long.
But honestly, they should’ve used Alibaba/Qwen for their other markets, too.
> they should’ve used Alibaba/Qwen for their other markets, too.
Which would get them banned from all government use. Smart call.
Which is so stupid, as it’s more secure if Apple (or the government) just hosts it themselves, assuming it’s not run on-device.
Reminder, Apple was always going to use a Chinese service for China. ChatGPT is banned in China.
Third-party model integration is pretty dumb in Apple Intelligence. When Apple’s private model hits a dead end, it asks if you want to throw the prompt into a bigger model.
Eventually, like with search, people will be able to select the default model that they integrate with.