size of foundational models

Asked on 06/11/2025


The foundation models discussed at Apple's WWDC are designed to run on device and have roughly 3 billion parameters. That is far smaller than server-based large language models (LLMs) such as ChatGPT, which have hundreds of billions of parameters. The on-device model is optimized for focused tasks such as summarization, extraction, and classification; it is not designed for tasks that require extensive world knowledge or advanced reasoning. The smaller size lets the model run efficiently on devices like iPhone and iPad while preserving privacy and supporting offline use.
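Since the Foundation Models framework exposes this on-device model directly to apps, a minimal Swift sketch of prompting it for one of those focused tasks (here, summarization) might look like the following. This is based on the API shown in the WWDC session; the prompt text and the availability handling are illustrative, and exact behavior may differ by OS version and device.

```swift
import FoundationModels

// Confirm the on-device model is usable before prompting it.
// It can be unavailable if, for example, Apple Intelligence is
// disabled or the device doesn't support it.
let model = SystemLanguageModel.default
guard case .available = model.availability else {
    print("On-device model unavailable")
    return
}

// Start a session with the ~3B-parameter on-device model and ask
// for a summary — the kind of focused task it is designed for.
let session = LanguageModelSession()
let response = try await session.respond(
    to: "Summarize this in one sentence: On-device foundation models trade broad world knowledge for efficiency, privacy, and offline operation."
)
print(response.content)
```

Because the model runs locally, calls like this incur no network round trip and no data leaves the device, which is why the smaller parameter count is an acceptable trade-off for these task types.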

For more details, see the sessions Explore prompt design & safety for on-device foundation models (03:01) and Meet the Foundation Models framework (02:57).