Was anything new introduced related to CoreML or the CoreML development flow?
Generated on 8/2/2024
Yes, several new features and updates related to CoreML and the CoreML development flow were introduced at WWDC 2024. Here are some of the key highlights:
Create ML Enhancements:
- The Create ML app now includes an object tracking template, which lets you train reference objects to anchor spatial experiences on visionOS.
- New time-series classification and forecasting components are available in the framework for integration in your app.
- Easier exploration and inspection of data annotations prior to training (Explore machine learning on Apple platforms).
CoreML Tools:
- Introduction of new model compression techniques, ability to represent state in models, transformer-specific operations, and multifunction models (Explore machine learning on Apple platforms).
- CoreML automatically segments models across the CPU, GPU, and Neural Engine to maximize hardware utilization (Explore machine learning on Apple platforms).
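To make the compression idea concrete, here is a minimal, illustrative sketch of linear 8-bit weight quantization, one of the kinds of techniques (alongside palettization and pruning) that the coremltools optimization toolkit offers. This is a toy pure-Python version for intuition, not the coremltools API; the function names are hypothetical.

```python
# Illustrative sketch of linear 8-bit quantization: store float weights as
# small integers plus a scale and offset, trading a little accuracy for a
# much smaller model. Function names here are hypothetical, not coremltools.

def quantize_8bit(weights):
    """Map float weights onto 256 integer levels plus scale/offset."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255 or 1.0  # avoid zero scale for constant weights
    q = [round((w - w_min) / scale) for w in weights]
    return q, scale, w_min

def dequantize_8bit(q, scale, w_min):
    """Reconstruct approximate float weights from the quantized form."""
    return [i * scale + w_min for i in q]
```

The real toolkit applies this idea (and richer variants) to whole model weight tensors, letting you trade storage size, latency, and accuracy, as discussed in "Bring your machine learning and AI models to Apple silicon".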
Performance and Integration:
- CoreML delivers the performance critical to a great user experience while simplifying the development workflow through Xcode integration.
- A new ML tensor type designed to simplify the computational glue code that stitches models together.
- State-backed management of key-value caches for efficient decoding of large language models.
- Use of functions to choose a specific style adapter in an image generation model at runtime (Explore machine learning on Apple platforms).
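To see why key-value cache management matters for decoding, here is a toy pure-Python sketch. In a real stateful CoreML model the state holds attention key/value tensors on-device; here plain lists stand in, just to show that each decoding step appends one entry instead of recomputing the whole history. All names are hypothetical.

```python
# Toy sketch of a key-value cache for autoregressive decoding. Real Core ML
# states hold tensors managed by the runtime; this pure-Python stand-in only
# illustrates the append-and-reuse pattern that makes decoding efficient.

class KVCache:
    def __init__(self):
        self.keys = []
        self.values = []

    def append(self, k, v):
        # One new entry per decoding step; prior entries are reused as-is.
        self.keys.append(k)
        self.values.append(v)

    def __len__(self):
        return len(self.keys)

def decode_step(cache, token):
    """Hypothetical step: cache this token's key/value pair, return cache size."""
    cache.append(("k", token), ("v", token))
    return len(cache)
```

Without a persistent state, each step would have to recompute keys and values for every previous token, which is why stateful models speed up large-language-model decoding.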
Model Execution and Optimization:
- The coremltools model optimization toolkit includes enhancements for model compression and optimization for Apple hardware.
- Performance reports have been updated to provide better insights (Explore machine learning on Apple platforms).
Stateful and Multifunction Models:
- Introduction of stateful models and multifunction models to improve the efficiency and flexibility of model deployment (Bring your machine learning and AI models to Apple silicon).
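A multifunction model packages several entry points that share common weights, which is what enables choosing, say, a specific style adapter in an image generation model at runtime. The following toy pure-Python sketch only mimics that dispatch pattern; real multifunction models are authored with coremltools, and every name here is hypothetical.

```python
# Toy sketch of the multifunction-model idea: several callable functions
# share one set of base weights, and the caller picks a function by name at
# runtime (e.g. a style adapter). Names are hypothetical, not coremltools.

SHARED_WEIGHTS = {"base": 1.0}  # stands in for the shared model weights

ADAPTERS = {
    "sketch": lambda x: SHARED_WEIGHTS["base"] * x + 0.1,
    "watercolor": lambda x: SHARED_WEIGHTS["base"] * x + 0.2,
}

def run(function_name, x):
    """Dispatch to one of the packaged functions by name."""
    return ADAPTERS[function_name](x)
```

Because the adapters share the base weights, shipping many styles costs little more than shipping one, which is the efficiency benefit the session describes.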
For more detailed information, you can check out the following sessions:
Platforms State of the Union
Discover the newest advancements on Apple platforms.
What’s new in Create ML
Explore updates to Create ML, including interactive data source previews and a new template for building object tracking models for visionOS apps. We’ll also cover important framework improvements, including new time-series forecasting and classification APIs.
Bring your machine learning and AI models to Apple silicon
Learn how to optimize your machine learning and AI models to leverage the power of Apple silicon. Review model conversion workflows to prepare your models for on-device deployment. Understand model compression techniques that are compatible with Apple silicon, and at what stages in your model deployment workflow you can apply them. We’ll also explore the tradeoffs between storage size, latency, power usage and accuracy.
Explore machine learning on Apple platforms
Get started with an overview of machine learning frameworks on Apple platforms. Whether you’re implementing your first ML model, or an ML expert, we’ll offer guidance to help you select the right framework for your app’s needs.
Deploy machine learning and AI models on-device with Core ML
Learn new ways to optimize speed and memory performance when you convert and run machine learning and AI models through Core ML. We’ll cover new options for model representations, performance insights, execution, and model stitching which can be used together to create compelling and private on-device experiences.