what's the "use model"?
Asked on 06/14/2025
The term "use model" is not explicitly defined in the provided context from the WWDC sessions. However, the context does discuss various types of models in different frameworks and contexts, such as:
- 3D Models in RealityKit: In the session "Better together: SwiftUI and RealityKit," a model refers to a 3D object that can be placed in a scene. It is accessed via the model component, which consists of a mesh resource and materials (see the first sketch after this list).
- Foundation Models Framework: In the session "Meet the Foundation Models framework," a model refers to the on-device large language model used for tasks like summarization, extraction, and classification. These models are optimized for device-scale use cases and can be enhanced with specialized adapters for specific domains (see the second sketch after this list).
- Machine Learning Models: In the session "Deploy machine learning and AI models on-device with Core ML," models are described as functions that take input and produce output, often represented as neural networks in Core ML (see the third sketch after this list).
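For the RealityKit case, here is a minimal sketch of how a mesh resource and materials might be combined into a model component on an entity; the box size and material color are arbitrary example values, not anything from the session:

```swift
import RealityKit

// Minimal sketch: a model component pairs a mesh resource with materials.
// The box size and material color below are arbitrary example values.
let mesh = MeshResource.generateBox(size: 0.2)
let material = SimpleMaterial(color: .blue, isMetallic: false)

let entity = ModelEntity()
entity.components.set(ModelComponent(mesh: mesh, materials: [material]))
```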
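For the Foundation Models case, a minimal sketch of asking the on-device language model for a summary, assuming the LanguageModelSession API introduced in the session; the prompt wording and the helper function name are illustrative assumptions:

```swift
import FoundationModels

// Minimal sketch, assuming the LanguageModelSession API from the
// Foundation Models framework; the prompt text is an arbitrary example.
func summarize(_ text: String) async throws -> String {
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Summarize in one sentence: \(text)")
    return response.content
}
```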
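For the Core ML case, a minimal sketch that treats a compiled model as a function from input features to output features; the model URL and the feature names "input" and "output" are hypothetical placeholders for whatever a real model declares:

```swift
import CoreML

// Minimal sketch: a Core ML model as a function from input features to
// output features. The feature names "input" and "output" are placeholders
// for the names the actual model defines.
func runModel(at compiledModelURL: URL, value: Double) throws -> MLFeatureValue? {
    let model = try MLModel(contentsOf: compiledModelURL)
    let input = try MLDictionaryFeatureProvider(dictionary: ["input": value])
    let output = try model.prediction(from: input)
    return output.featureValue(for: "output")
}
```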
If you are referring to a specific "use model" in a different context, please provide more details or clarify the context.

Better together: SwiftUI and RealityKit
Discover how to seamlessly blend SwiftUI and RealityKit in visionOS 26. We’ll explore enhancements to Model3D, including animation and ConfigurationCatalog support, and demonstrate smooth transitions to RealityView. You’ll learn how to leverage SwiftUI animations to drive RealityKit component changes, implement interactive manipulation, use new SwiftUI components for richer interactions, and observe RealityKit changes from your SwiftUI code. We’ll also cover how to use unified coordinate conversion for cross-framework coordinate transformations.

Meet the Foundation Models framework
Learn how to tap into the on-device large language model behind Apple Intelligence! This high-level overview covers everything from guided generation for generating Swift data structures and streaming for responsive experiences, to tool calling for integrating data sources and sessions for context management. This session has no prerequisites.

Explore prompt design & safety for on-device foundation models
Design generative AI experiences that leverage the strengths of the Foundation Models framework. We’ll start by showing how to design prompts for the on-device large language model at the core of Apple Intelligence. Then, we’ll introduce key ideas around AI safety, and offer concrete strategies to make your generative AI features safe, reliable, and delightful.