Can I use the new ML libraries to make queries to cloud-based secure inference?
Asked on 06/11/2025
The WWDC sessions don't cover any new ML libraries for querying cloud-based secure inference. Their focus is on deploying and optimizing machine learning models on Apple devices, using Core ML and frameworks such as MLX for on-device inference. These tools are designed to leverage Apple Silicon for efficient, low-latency inference on device rather than in the cloud.
If you're interested in learning more about deploying machine learning models on Apple devices, check out the session Deploy machine learning and AI models on-device with Core ML, which covers various features and optimizations for on-device model deployment.
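For reference, a minimal on-device inference call with Core ML looks roughly like the sketch below. This is illustrative, not code from the sessions: the model file name ("Classifier") and the "label" output feature are hypothetical placeholders for whatever your converted model actually defines.

```swift
import CoreML
import Foundation

// Minimal on-device Core ML inference sketch.
// "Classifier.mlmodelc" is a hypothetical compiled model bundled with the
// app; "label" is a placeholder output feature name defined by your model.
func classify(_ input: MLFeatureProvider) throws -> String? {
    guard let url = Bundle.main.url(forResource: "Classifier",
                                    withExtension: "mlmodelc") else {
        return nil
    }
    let config = MLModelConfiguration()
    config.computeUnits = .all  // CPU, GPU, and Neural Engine as available
    let model = try MLModel(contentsOf: url, configuration: config)
    let output = try model.prediction(from: input)
    return output.featureValue(for: "label")?.stringValue
}
```

Everything here executes on device; no network request is involved.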

Explore machine learning on Apple platforms
Get started with an overview of machine learning frameworks on Apple platforms. Whether you're implementing your first ML model or are an ML expert, we'll offer guidance to help you select the right framework for your app's needs.

Deploy machine learning and AI models on-device with Core ML
Learn new ways to optimize speed and memory performance when you convert and run machine learning and AI models through Core ML. We'll cover new options for model representations, performance insights, execution, and model stitching, which can be used together to create compelling and private on-device experiences.
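As a concrete taste of the deployment controls this session covers, Core ML models can be compiled and loaded with an explicit compute-unit configuration. The sketch below is illustrative, assuming a model definition the app fetched at runtime; `downloadedURL` is a hypothetical placeholder.

```swift
import CoreML
import Foundation

// Sketch: compile a model definition fetched at runtime, then load it
// restricted to specific compute units. `downloadedURL` is a hypothetical
// local file URL for a .mlmodel/.mlpackage the app downloaded earlier.
func loadDownloadedModel(at downloadedURL: URL) async throws -> MLModel {
    // Compiling produces an optimized .mlmodelc the device can execute.
    let compiledURL = try await MLModel.compileModel(at: downloadedURL)

    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine  // e.g. leave the GPU free
    return try await MLModel.load(contentsOf: compiledURL,
                                  configuration: config)
}
```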

Meet the Foundation Models framework
Learn how to tap into the on-device large language model behind Apple Intelligence! This high-level overview covers everything from guided generation for generating Swift data structures and streaming for responsive experiences, to tool calling for integrating data sources and sessions for context management. This session has no prerequisites.
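For a flavor of the API, a single request to the on-device model looks roughly like this sketch. It assumes the LanguageModelSession type the session introduces; the instructions and prompt text are purely illustrative.

```swift
import FoundationModels

// Sketch of a single exchange with the on-device model behind Apple
// Intelligence. The instructions and prompt are illustrative placeholders.
func suggestActivities() async throws -> String {
    let session = LanguageModelSession(
        instructions: "You are a concise travel assistant."
    )
    let response = try await session.respond(
        to: "Suggest three things to do in Cupertino."
    )
    return response.content
}
```

In practice you'd also want to check that the model is available on the user's device before creating a session.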