How do I load a CoreML model?
Asked on 03/21/2025
To load a CoreML model, you can use the Core ML tools Python API. Here's a general workflow based on the information from the WWDC sessions:
1. **Convert the model:** Convert your model into the Core ML format with the `ct.convert` function from Core ML Tools, which accepts models from frameworks such as PyTorch and TensorFlow.
2. **Optimize the model:** During conversion, optimize the model for Apple hardware using techniques such as quantization and, for large language models, efficient key-value caching.
3. **Load the model:** Once the model is in the Core ML format, load it with the Core ML Tools Python API. For multifunction models, specify a function name to load a particular function within the model.
4. **Run inference:** After loading the model, run predictions as usual with the Core ML framework. Core ML automatically distributes execution across the CPU, GPU, and Neural Engine to maximize performance.
For more detailed information, refer to the session *Bring your machine learning and AI models to Apple silicon* (26:24), which covers multifunction models and their usage.