What is Core ML?
Generated on 8/9/2024
Core ML is Apple's framework for deploying machine learning models on Apple devices across iOS, iPadOS, macOS, and the other Apple platforms. It is designed to optimize model performance by automatically distributing workloads across the CPU, GPU, and Neural Engine, maximizing hardware utilization. Core ML simplifies the development workflow by integrating with Xcode and provides a unified API for on-device inference across various model types.
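To make that unified inference API concrete, here is a minimal Python sketch using Core ML Tools on macOS (the Swift Core ML framework exposes the equivalent API on device). The model path, input name, and input shape are placeholders rather than details taken from the answer above:

```python
import numpy as np
import coremltools as ct

# Load a converted Core ML model package (path is a placeholder) and tell
# Core ML which hardware it may use: ALL allows CPU, GPU, and Neural Engine.
model = ct.models.MLModel("MyModel.mlpackage", compute_units=ct.ComputeUnit.ALL)

# Run a prediction (predict() requires macOS); the input name and shape
# depend on how the model was converted and are assumed here.
features = {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
output = model.predict(features)
print(list(output.keys()))
```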
Core ML supports a wide array of models, including those developed in PyTorch, which can be converted into the Core ML format using Core ML Tools. Core ML Tools also offers optimization techniques such as quantization and efficient key-value caching for large language models. In addition, Core ML includes newer features such as the MLTensor type, multifunction models, and updated performance reports to help developers efficiently deploy and run state-of-the-art AI models on Apple devices.
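As a rough sketch of that conversion and quantization workflow, the example below traces a torchvision MobileNetV2 (chosen purely for illustration), converts it with Core ML Tools, and applies 8-bit linear weight quantization; the input name and deployment target are assumptions, not values from the answer above:

```python
import torch
import torchvision
import coremltools as ct
from coremltools.optimize.coreml import (
    OpLinearQuantizerConfig,
    OptimizationConfig,
    linear_quantize_weights,
)

# Trace an example PyTorch model so Core ML Tools can inspect its graph.
torch_model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(torch_model, example_input)

# Convert the traced model to the Core ML format (an ML program package).
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="image", shape=example_input.shape)],
    minimum_deployment_target=ct.target.iOS17,
)

# Apply 8-bit linear weight quantization to reduce model size.
config = OptimizationConfig(
    global_config=OpLinearQuantizerConfig(mode="linear_symmetric")
)
compressed = linear_quantize_weights(mlmodel, config)
compressed.save("MobileNetV2_quantized.mlpackage")
```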
For more detailed information, you can refer to the sessions Explore machine learning on Apple platforms (07:16) and Deploy machine learning and AI models on-device with Core ML (01:07).
Deploy machine learning and AI models on-device with Core ML
Learn new ways to optimize speed and memory performance when you convert and run machine learning and AI models through Core ML. We’ll cover new options for model representations, performance insights, execution, and model stitching, which can be used together to create compelling and private on-device experiences.
Explore machine learning on Apple platforms
Get started with an overview of machine learning frameworks on Apple platforms. Whether you’re implementing your first ML model or you’re an ML expert, we’ll offer guidance to help you select the right framework for your app’s needs.
Platforms State of the Union
Discover the newest advancements on Apple platforms.
What’s new in Create ML
Explore updates to Create ML, including interactive data source previews and a new template for building object tracking models for visionOS apps. We’ll also cover important framework improvements, including new time-series forecasting and classification APIs.
Bring your machine learning and AI models to Apple silicon
Learn how to optimize your machine learning and AI models to leverage the power of Apple silicon. Review model conversion workflows to prepare your models for on-device deployment. Understand model compression techniques that are compatible with Apple silicon, and at what stages in your model deployment workflow you can apply them. We’ll also explore the tradeoffs between storage size, latency, power usage and accuracy.
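As one hedged illustration of such a compression step, the sketch below applies post-training weight palettization with Core ML Tools to an already converted model; the file names are placeholders, and palettization is only one of several techniques (quantization and pruning are others) for trading accuracy against storage size, latency, and power usage:

```python
import coremltools as ct
from coremltools.optimize.coreml import (
    OpPalettizerConfig,
    OptimizationConfig,
    palettize_weights,
)

# Load an already converted Core ML model (path is a placeholder).
mlmodel = ct.models.MLModel("MyModel.mlpackage")

# Post-training palettization: cluster the weights into a small lookup
# table (6-bit here, i.e. 64 entries) so the model takes less storage and
# moves less data through memory on Apple silicon, at some cost in accuracy.
config = OptimizationConfig(global_config=OpPalettizerConfig(mode="kmeans", nbits=6))
palettized = palettize_weights(mlmodel, config)
palettized.save("MyModel_palettized.mlpackage")
```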