It’s not an LLM, it’s a much smaller model (~3B), which is closer to what Microsoft labels an SLM (Small Language Model, e.g. MS Phi-3 Mini).
https://machinelearning.apple.com/research/introducing-apple-foundation-models
Microsoft’s penchant for making up names for things that already have names is neither here nor there. It is an LLM; in fact, it’s already twice as large as GPT-2 (1.5B params).
I do think it’s a useful distinction, considering open models can exceed 100B params nowadays and GPT-4 is rumored to be 1.7T params. Plus, this class of model is far more likely to run on-device.