Any updates in Core ML?
Asked on 06/12/2025
At WWDC 2024, several updates were introduced for Core ML to enhance the deployment and performance of machine learning models on Apple devices. Here are some of the key updates:
- Stateful Models: Core ML now supports stateful models, which automatically manage state tensors and allow in-place updates for better performance. This is particularly useful for models that need to maintain information across predictions, such as a transformer's key-value cache; a Swift sketch appears below. You can learn more in the session Bring your machine learning and AI models to Apple silicon.
- MLTensor: A new MLTensor type simplifies stitching models together by providing the computational glue code between them (see the sketch below). More details can be found in the session Deploy machine learning and AI models on-device with Core ML.
- Multifunction Models: Core ML now supports multifunction models, allowing you to merge multiple models into one while deduplicating shared weights. This is useful when several models share a common feature extractor; a loading sketch appears below. This feature is discussed in the session Bring your machine learning and AI models to Apple silicon.
- Performance Tools: New performance tools provide insight into the cost of each operation in your model, helping you optimize and debug more effectively (a compute-plan sketch follows below). This is covered in the session Deploy machine learning and AI models on-device with Core ML.
These updates aim to make it easier to deploy and run machine learning models efficiently on Apple devices, leveraging the capabilities of Apple silicon.
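For the stateful-model item, here is a minimal Swift sketch of a stateful prediction on iOS 18 / macOS 15. The model path, feature name, and input value are hypothetical placeholders; the real names depend on the state and features you declared when converting the model.

```swift
import Foundation
import CoreML

/// A minimal sketch of running a stateful Core ML model (iOS 18 / macOS 15).
func runStatefulPrediction() throws {
    // Placeholder path; substitute your own compiled stateful model.
    let modelURL = URL(fileURLWithPath: "LanguageModel.mlmodelc")
    let model = try MLModel(contentsOf: modelURL)

    // Create the state once. Core ML allocates the state tensors that were
    // declared at conversion time and updates them in place on each call.
    let state = model.makeState()

    // Hypothetical single-token input; feature names depend on your model.
    let input = try MLDictionaryFeatureProvider(
        dictionary: ["tokenID": MLFeatureValue(int64: 42)]
    )

    // Passing the same MLState across predictions lets the model carry
    // context (for example a transformer key-value cache) between calls.
    let output = try model.prediction(from: input, using: state)
    print(output.featureNames)
}
```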
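The MLTensor item refers to the Swift MLTensor type. Below is a minimal sketch of using it as glue code between two model outputs; the shapes and values are made up, and it assumes an async context on iOS 18 / macOS 15.

```swift
import CoreML

/// A minimal sketch of using MLTensor to stitch intermediate results together.
func combineOutputs() async {
    // Hypothetical intermediate outputs from two models.
    let featureValues: [Float] = [1, 2, 3, 4, 5, 6]
    let weightValues: [Float] = [1, 0, 0, 1, 1, 1]
    let features = MLTensor(shape: [2, 3], scalars: featureValues, scalarType: Float.self)
    let weights = MLTensor(shape: [3, 2], scalars: weightValues, scalarType: Float.self)

    // Tensor operations are dispatched to Apple silicon for you.
    let logits = features.matmul(weights)            // shape [2, 2]
    let probabilities = logits.softmax(alongAxis: -1)

    // Materialize the result when a concrete value is needed.
    let result = await probabilities.shapedArray(of: Float.self)
    print(result)
}
```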
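For the multifunction item, the merging itself happens ahead of time with coremltools' multifunction utilities; at runtime a Swift app selects one function by name. A minimal sketch, assuming a hypothetical merged package containing a function named "adapter_math":

```swift
import Foundation
import CoreML

/// A minimal sketch of loading one function from a multifunction model.
func loadAdapter() throws -> MLModel {
    // Placeholder path; substitute the merged model you produced with coremltools.
    let modelURL = URL(fileURLWithPath: "MergedAdapters.mlmodelc")

    let configuration = MLModelConfiguration()
    // "adapter_math" is a hypothetical function name; the available names
    // are the ones you assigned when merging the models.
    configuration.functionName = "adapter_math"

    // The package stores shared weights only once; this loads the selected function.
    return try MLModel(contentsOf: modelURL, configuration: configuration)
}
```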
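The performance-tools item is largely about the Xcode performance report, but there is also a programmatic route. The sketch below assumes the MLComputePlan API discussed in the Deploy session and a placeholder model path; treat the exact property names as an approximation.

```swift
import Foundation
import CoreML

/// A minimal sketch of inspecting per-operation cost with MLComputePlan.
func inspectComputePlan() async throws {
    // Placeholder path; substitute your own compiled ML Program model.
    let modelURL = URL(fileURLWithPath: "MyModel.mlmodelc")

    let plan = try await MLComputePlan.load(
        contentsOf: modelURL,
        configuration: MLModelConfiguration()
    )

    // ML Program models expose their operations through the model structure.
    guard case .program(let program) = plan.modelStructure,
          let mainFunction = program.functions["main"] else {
        return
    }

    for operation in mainFunction.block.operations {
        // Relative cost estimate and the device each operation is expected to run on.
        let cost = plan.estimatedCost(of: operation)
        let usage = plan.deviceUsage(for: operation)
        print(operation.operatorName, cost?.weight ?? 0, usage?.preferred as Any)
    }
}
```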

Deploy machine learning and AI models on-device with Core ML
Learn new ways to optimize speed and memory performance when you convert and run machine learning and AI models through Core ML. We’ll cover new options for model representations, performance insights, execution, and model stitching which can be used together to create compelling and private on-device experiences.

Bring your machine learning and AI models to Apple silicon
Learn how to optimize your machine learning and AI models to leverage the power of Apple silicon. Review model conversion workflows to prepare your models for on-device deployment. Understand model compression techniques that are compatible with Apple silicon, and at what stages in your model deployment workflow you can apply them. We’ll also explore the tradeoffs between storage size, latency, power usage and accuracy.

Explore machine learning on Apple platforms
Get started with an overview of machine learning frameworks on Apple platforms. Whether you’re implementing your first ML model, or an ML expert, we’ll offer guidance to help you select the right framework for your app’s needs.