Main new APIs for visionOS

Generated on 7/22/2024

The main new APIs for visionOS introduced at WWDC 2024 include:

  1. Enhanced Sensor Access: This API provides access to the main camera video feed, allowing apps to analyze and interpret the environment around Vision Pro, for example to run anomaly detection on a production line. For more details, check the session Introducing enterprise APIs for visionOS; a minimal code sketch follows this list.

  2. Object Tracking API: This new API lets apps detect individual physical objects around the user and attach virtual content, such as instructions, to them. This was highlighted in the Platforms State of the Union; see the object tracking sketch after this list.

  3. Known Object Tracking: This feature in visionOS 2.0 allows apps to detect and track specific reference objects within their viewing area, and it exposes configurable parameters for tuning and optimizing object tracking. More information can be found in the session Introducing enterprise APIs for visionOS.

  4. New Vision API with Swift Enhancements: This API introduces a streamlined Swift-first syntax, full support for Swift concurrency, and new capabilities such as hand pose detection and body pose requests. For more details, refer to the session Discover Swift enhancements in the Vision framework; a short example appears at the end of this post.
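
Here is a minimal sketch of what Enhanced Sensor Access looks like in code: reading main camera frames through ARKit's CameraFrameProvider, following the shape shown in the enterprise APIs session. It assumes the app carries the enterprise main camera access entitlement and its license file, and exact names may differ slightly in the shipping SDK.

```swift
import ARKit
import CoreVideo

/// Streams frames from Vision Pro's main camera (enterprise entitlement required).
func streamMainCameraFrames() async {
    let session = ARKitSession()
    let provider = CameraFrameProvider()

    // Pick a supported video format for the left main camera.
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    guard let format = formats.first else { return }

    do {
        try await session.run([provider])
    } catch {
        print("ARKit session failed: \(error)")
        return
    }

    guard let frameUpdates = provider.cameraFrameUpdates(for: format) else { return }

    for await frame in frameUpdates {
        guard let sample = frame.sample(for: .left) else { continue }
        let pixelBuffer: CVPixelBuffer = sample.pixelBuffer
        // Feed pixelBuffer into a Core ML / Vision pipeline, e.g. anomaly detection.
        _ = pixelBuffer
    }
}
```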

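For items 2 and 3, the sketch below shows object tracking with ARKit's ObjectTrackingProvider and a Create ML-trained reference object; "Pump.referenceobject" is a placeholder asset name. The enterprise known object tracking variant additionally exposes tuning parameters (detection and tracking rates, instance limits) that are not shown here, and the API shape follows the WWDC24 sessions rather than a verified final SDK listing.

```swift
import ARKit
import Foundation

/// Detects and tracks a known reference object and reports its pose.
func trackReferenceObject() async {
    // "Pump.referenceobject" is a hypothetical asset trained in Create ML.
    guard let url = Bundle.main.url(forResource: "Pump", withExtension: "referenceobject") else { return }

    do {
        // Load the trained reference object and start tracking it.
        let referenceObject = try await ReferenceObject(from: url)
        let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])

        let session = ARKitSession()
        try await session.run([provider])

        for await update in provider.anchorUpdates {
            switch update.event {
            case .added, .updated:
                // The anchor's transform is where you would attach RealityKit
                // content such as labels or step-by-step instructions.
                print("Object \(update.anchor.id) at \(update.anchor.originFromAnchorTransform)")
            case .removed:
                print("Lost object \(update.anchor.id)")
            }
        }
    } catch {
        print("Object tracking failed: \(error)")
    }
}
```
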
These APIs collectively enhance the capabilities of visionOS, enabling developers to create more immersive and interactive spatial experiences.
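
As a taste of the new Swift-first Vision syntax mentioned in item 4, here is a small example of a hand pose request. The value-type request and async perform(on:) call follow the new API introduced in the session; the maximumHandCount property is assumed to carry over from the older VN-prefixed request.

```swift
import Vision
import Foundation

// Detect hand poses in an image using the new Swift Vision API.
func detectHandPoses(in imageURL: URL) async throws {
    var request = DetectHumanHandPoseRequest()
    request.maximumHandCount = 2   // assumed to mirror VNDetectHumanHandPoseRequest

    // perform(on:) is async and returns typed observations directly.
    let hands = try await request.perform(on: imageURL)
    print("Detected \(hands.count) hand(s)")
}
```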