Large models (大模型)
Asked on 03/04/2025
It seems you're interested in large models ("大模型" in Chinese). At WWDC 2024, Apple discussed techniques for bringing large machine learning and AI models to Apple silicon, focusing on model compression and optimization so they can run on devices like iPhones and iPads.
One of the sessions, Bring your machine learning and AI models to Apple silicon, covers techniques such as quantization and pruning that shrink a model while maintaining performance. For instance, it demonstrated compressing a large model from 13 GB to less than 4 GB using quantization.
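The 13 GB → under 4 GB figure is consistent with moving from 16-bit floating-point weights to roughly 4-bit quantized ones. As a rough illustration (a toy pure-Python sketch of linear quantization, not Apple's Core ML tooling), each weight is mapped to a small integer plus a shared scale factor:

```python
# Toy sketch of symmetric linear weight quantization -- the general
# technique behind the compression figures, NOT the coremltools API.

def quantize_4bit(weights):
    """Quantize floats to 4-bit signed integers in [-8, 7] with one scale."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized integers."""
    return [qi * scale for qi in q]

weights = [0.12, -0.53, 0.98, -0.04, 0.33]
q, scale = quantize_4bit(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))

# Storage arithmetic behind "13 GB -> under 4 GB":
# a 13 GB model at 16 bits per weight holds ~6.5 billion weights;
# the same weights at 4 bits take about a quarter of the space.
params = 13 * 2**30 * 8 / 16        # number of 16-bit weights in 13 GB
int4_gb = params * 4 / 8 / 2**30    # size of those weights at 4 bits
print(q, int4_gb)                   # int4_gb is 3.25
```

In practice, production tools quantize per-block or per-channel with many scale factors (and small lookup tables for palettization), which keeps the error far lower than this single-scale toy suggests.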
Additionally, the Platforms State of the Union session introduced the concept of on-device foundation models, which are large language models optimized to run efficiently on Apple devices.
If you have more specific questions about these techniques or sessions, feel free to ask!

Platforms State of the Union
Discover the newest advancements on Apple platforms.

Deploy machine learning and AI models on-device with Core ML
Learn new ways to optimize speed and memory performance when you convert and run machine learning and AI models through Core ML. We'll cover new options for model representations, performance insights, execution, and model stitching, which can be used together to create compelling and private on-device experiences.

Explore object tracking for visionOS
Find out how you can use object tracking to turn real-world objects into virtual anchors in your visionOS app. Learn how to build spatial experiences with object tracking from start to finish: create a reference object using machine learning in Create ML, then attach content relative to your target object with the Reality Composer Pro, RealityKit, or ARKit APIs.