What new things can I do in Vision Pro regarding interacting with the real world?
Generated on 7/31/2024

In Vision Pro, there are several new features that enhance interaction with the real world:
- Hand Positioning: You can now decide whether the user's hands appear in front of or behind your content, giving you more creative control over how immersive and interactive the experience feels. (Platforms State of the Union)
- Enhanced Scene Understanding: The fidelity of scene-understanding capabilities has been significantly extended. Planes can now be detected in all orientations, so you can anchor objects to surfaces throughout the user's surroundings. (Platforms State of the Union)
- Room Anchors: Room anchors consider the user's surroundings on a per-room basis, so you can detect a user's movement across rooms and build more context-aware applications. (Platforms State of the Union)
- Object Tracking API: A new object tracking API for visionOS lets you attach content to individual objects found around the user, such as virtual instructions pinned to a physical object, adding new dimensions of interactivity. (Platforms State of the Union)
- Main Camera Access: A new API gives apps access to the device's main camera video feed, which can be used for tasks such as anomaly detection on a production line or providing expert guidance remotely. (Introducing enterprise APIs for visionOS)
- Spatial Barcode and QR Code Scanning: Vision Pro can now automatically detect and parse barcodes and QR codes, so your app can act on them. You receive the type of code, its spatial position relative to the user, and the payload content. (Introducing enterprise APIs for visionOS)
These features collectively enhance the ways you can interact with the real world using Vision Pro, making it a powerful tool for creating immersive and context-aware applications.
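The scene-understanding features above are surfaced through ARKit data providers on an ARKitSession. Here is a minimal sketch of wiring up plane detection, room tracking, and object tracking together, assuming a visionOS 2 SDK; the reference-object file name is a placeholder, and the alignment cases shown are a subset of what the system can detect:

```swift
import ARKit

// Hedged sketch: run the scene-understanding providers described above.
// Assumes visionOS 2; "machine.referenceobject" is a hypothetical asset.
@MainActor
func runSceneUnderstanding() async throws {
    let session = ARKitSession()

    // Planes can now be detected in multiple orientations.
    let planes = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

    // Room anchors describe the user's surroundings per room.
    let rooms = RoomTrackingProvider()

    // Object tracking: load a reference object, then track instances of it.
    let url = Bundle.main.url(forResource: "machine",
                              withExtension: "referenceobject")!
    let referenceObject = try await ReferenceObject(from: url)
    let objects = ObjectTrackingProvider(referenceObjects: [referenceObject])

    try await session.run([planes, rooms, objects])

    // React to newly detected or updated planes.
    for await update in planes.anchorUpdates {
        print("Plane \(update.anchor.id): \(update.anchor.alignment)")
    }
}
```

Each provider exposes an `anchorUpdates` async sequence, so you would typically consume the plane, room, and object streams in separate tasks rather than the single loop shown here.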
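The two enterprise capabilities (main camera access and barcode scanning) are also ARKit data providers, but they additionally require an enterprise license file and the matching entitlements in your app. A hedged sketch, with the symbology list purely illustrative:

```swift
import ARKit

// Hedged sketch: enterprise-only providers. Assumes the app ships the
// required enterprise license and entitlements; will fail to run otherwise.
func runEnterpriseProviders() async throws {
    let session = ARKitSession()

    // Main camera access: stream video frames from the device's main camera.
    let cameraFrames = CameraFrameProvider()

    // Spatial barcode and QR code scanning.
    let barcodes = BarcodeDetectionProvider(symbologies: [.qr, .code128])

    try await session.run([cameraFrames, barcodes])

    // Each barcode anchor carries the code type, its spatial transform
    // relative to the user, and the decoded payload.
    for await update in barcodes.anchorUpdates {
        let anchor = update.anchor
        print(anchor.symbology, anchor.payloadString ?? "<binary payload>")
    }
}
```

For the camera feed, you would pick a supported format (for example via `CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])`) and then iterate the provider's frame-update sequence for that format.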

Design interactive experiences for visionOS
Learn how you can design a compelling interactive narrative experience for Apple Vision Pro from the designers of Encounter Dinosaurs. Discover how these types of experiences differ from existing apps, media, and games, and explore how to design narratives that bring audiences into new worlds. Find out how you can create stories that adapt to any space and size, provide multiple levels of interaction to make them accessible to all, and use animation, spatial audio, and custom gestures to further immerse people in your experience.

Platforms State of the Union
Discover the newest advancements on Apple platforms.

Optimize for the spatial web
Discover how to make the most of visionOS capabilities on the web. Explore recent updates like improvements to selection highlighting, and the ability to present spatial photos and panorama images in fullscreen. Learn to take advantage of existing web standards for dictation and text-to-speech with WebSpeech, spatial soundscapes with WebAudio, and immersive experiences with WebXR.

Introducing enterprise APIs for visionOS
Find out how you can use new enterprise APIs for visionOS to create spatial experiences that enhance employee and customer productivity on Apple Vision Pro.

Get started with HealthKit in visionOS
Discover how to use HealthKit to create experiences that take full advantage of the spatial canvas. Learn the capabilities of HealthKit on the platform, find out how to bring an existing iPadOS app to visionOS, and explore the special considerations governing HealthKit during a Guest User session. You’ll also learn ways to use SwiftUI, Swift Charts, and Swift concurrency to craft innovative experiences with HealthKit.