What LLMs are you using for this experience?

Generated on 7/30/2024

Apple has not explicitly mentioned which large language models (LLMs) power these experiences in the provided WWDC sessions. Here are the sessions referenced in the context:

  1. Design interactive experiences for visionOS
  2. Enhance the immersion of media viewing in custom environments
  3. Build compelling spatial photo and video experiences
  4. Explore object tracking for visionOS
  5. Enhance your spatial computing app with RealityKit audio
  6. What’s new in Create ML

If you need more details on any of these sessions, let me know!

Enhance the immersion of media viewing in custom environments

Extend your media viewing experience using Reality Composer Pro components like Docking Region, Reverb, and Virtual Environment Probe. Find out how to further enhance immersion using Reflections, Tint Surroundings Effect, SharePlay, and the Immersive Environment Picker.
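
Docking Region, Reverb, and Virtual Environment Probe are authored in Reality Composer Pro rather than in code, but the playback side can be wired up in RealityKit. Below is a minimal sketch using RealityKit's VideoPlayerComponent; the bundled "movie.mp4" asset and the entity setup are illustrative assumptions, not part of the session.

```swift
import AVFoundation
import RealityKit

// Builds an entity that renders video in a RealityKit scene.
// Assumes a "movie.mp4" file is bundled with the app.
func makeVideoEntity() -> Entity {
    let entity = Entity()
    guard let url = Bundle.main.url(forResource: "movie", withExtension: "mp4") else {
        return entity
    }
    let player = AVPlayer(url: url)
    // VideoPlayerComponent draws the player's output on the entity,
    // which a Docking Region in a custom environment can then place.
    entity.components.set(VideoPlayerComponent(avPlayer: player))
    player.play()
    return entity
}
```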

Build compelling spatial photo and video experiences

Learn how to adopt spatial photos and videos in your apps. Explore the different types of stereoscopic media and find out how to capture spatial videos in your iOS app on iPhone 15 Pro. Discover the various ways to detect and present spatial media, including the new QuickLook Preview Application API in visionOS. And take a deep dive into the metadata and stereo concepts that make a photo or video spatial.
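
As a rough sketch of the presentation side, the snippet below hands media off to the QuickLook PreviewApplication API the session mentions; the URLs are assumed to point at readable spatial photos or videos in the app's container.

```swift
import QuickLook

// Opens media items in a separate QuickLook preview scene on visionOS,
// where spatial photos and videos receive their stereoscopic treatment.
// The URLs are assumptions; they should point at readable media files.
func presentSpatialMedia(_ urls: [URL]) {
    let session = PreviewApplication.open(urls: urls, selectedURL: urls.first)
    _ = session // retain the session if you want to close the preview later
}
```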

What’s new in Create ML

Explore updates to Create ML, including interactive data source previews and a new template for building object tracking models for visionOS apps. We’ll also cover important framework improvements, including new time-series forecasting and classification APIs.
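
The time-series forecasting and classification APIs are new in this release, so rather than guess at their names, here is the general train/evaluate/export shape of the Create ML framework on macOS, shown with the long-standing MLImageClassifier API; the directory paths are assumptions.

```swift
import CreateML
import Foundation

// A sketch of the Create ML framework workflow on macOS using the
// long-established MLImageClassifier; paths are illustrative, and the
// "Training"/"Testing" folders are assumed to hold one subfolder per label.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: URL(fileURLWithPath: "Training"))
)

// Evaluate on held-out data, then export a Core ML model file.
let metrics = classifier.evaluation(
    on: .labeledDirectories(at: URL(fileURLWithPath: "Testing"))
)
print("Classification error: \(metrics.classificationError)")
try classifier.write(to: URL(fileURLWithPath: "Classifier.mlmodel"))
```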

Design interactive experiences for visionOS

Learn how you can design a compelling interactive narrative experience for Apple Vision Pro from the designers of Encounter Dinosaurs. Discover how these types of experiences differ from existing apps, media, and games, and explore how to design narratives that bring audiences into new worlds. Find out how you can create stories that adapt to any space and size, provide multiple levels of interaction to make them accessible to all, and use animation, spatial audio, and custom gestures to further immerse people in your experience.
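
The session focuses on design rather than code, but the multiple-levels-of-interaction idea maps onto RealityKit's targeted gestures in SwiftUI. The sketch below is an assumed setup, not from the session: a spatial tap on any entity triggers a behavior authored in Reality Composer Pro.

```swift
import RealityKit
import SwiftUI

// A tap on any entity triggers its Reality Composer Pro behavior.
// Entities need InputTargetComponent and CollisionComponent to receive input.
struct InteractiveSceneView: View {
    var body: some View {
        RealityView { content in
            // Load or build the scene's entities here.
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Fires any OnTap behavior authored on the tapped entity.
                    value.entity.applyTapForBehaviors()
                }
        )
    }
}
```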

Explore object tracking for visionOS

Find out how you can use object tracking to turn real-world objects into virtual anchors in your visionOS app. Learn how you can build spatial experiences with object tracking from start to finish. Find out how to create a reference object using machine learning in Create ML and attach content relative to your target object using Reality Composer Pro, RealityKit, or ARKit APIs.
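
At runtime, that flow might look like the sketch below, which assumes a Create ML-trained "Globe.referenceobject" file bundled with the app and a running immersive space; it uses visionOS's ARKitSession with an ObjectTrackingProvider.

```swift
import ARKit

// Tracks a real-world object from a bundled reference object file and
// streams its pose. The "Globe" asset name is an illustrative assumption.
func trackGlobe() async throws {
    guard let url = Bundle.main.url(forResource: "Globe",
                                    withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])

    let session = ARKitSession()
    try await session.run([provider])

    // Each update carries the tracked object's transform; anchor your
    // RealityKit content relative to this pose.
    for await update in provider.anchorUpdates {
        print("Object pose: \(update.anchor.originFromAnchorTransform)")
    }
}
```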

Enhance your spatial computing app with RealityKit audio

Elevate your spatial computing experience using RealityKit audio. Discover how spatial audio can make your 3D immersive experiences come to life. From ambient audio and reverb to real-time procedural audio that adds character to your 3D content, learn how RealityKit audio APIs can help make your app more engaging.
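
As a minimal sketch of those APIs, the snippet below gives an entity a SpatialAudioComponent and plays a looping file on it; the "ambience.wav" resource and the -6 dB gain are illustrative assumptions.

```swift
import RealityKit

// Plays a looping, spatialized sound from the entity's position.
// Assumes an "ambience.wav" audio file in the app bundle.
@discardableResult
func playAmbience(on entity: Entity) async throws -> AudioPlaybackController {
    // SpatialAudioComponent localizes the sound at the entity and
    // exposes gain, directivity, and reverb controls.
    entity.components.set(SpatialAudioComponent(gain: -6))

    let resource = try await AudioFileResource(
        named: "ambience.wav",
        configuration: .init(shouldLoop: true)
    )
    // Keep the returned controller to pause or stop playback later.
    return entity.playAudio(resource)
}
```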