What new APIs are available for Vision Pro?

Generated on 8/5/2024

Apple introduced several new APIs for Vision Pro at WWDC 2024. Here are the key highlights:

  1. Enhanced Sensor Access:

    • Main Camera Access: Apps with the enterprise entitlement can now access the device's main camera video feed to analyze and interpret the wearer's surroundings, for example to detect anomalies on a production line (see the first Swift sketch after this list).
    • Improved Capture and Streaming: Expanded capabilities for capturing and streaming the device's view using the Vision Pro's camera.
  2. Platform Control:

    • Advanced Machine Learning Capabilities: Access to the Apple Neural Engine for demanding on-device machine learning workloads, along with options for performance tuning.
    • Enhanced Object Tracking: A new object tracking API lets apps attach content to specific physical objects detected around the user, enabling experiences such as pinning virtual instructions to real equipment (see the object-tracking sketch after this list).
  3. Known Object Tracking:

    • Parameter Adjustment: Apps can detect and track specific reference objects in their field of view, with configurable parameters for tuning and optimizing tracking behavior.
  4. Scene Understanding:

    • Improved Scene Understanding: Planes can now be detected in all orientations, and room anchors capture the user's surroundings on a per-room basis, including detecting when the user moves between rooms (see the scene-understanding sketch after this list).
  5. Swift Enhancements in Vision Framework:

    • New Vision API: Streamlined syntax designed for Swift, full support for Swift concurrency, and new capabilities such as detecting faces, recognizing text, and generating image descriptions (see the Vision sketch after this list).
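
Below is a minimal Swift sketch of main camera access, following the code shown in Apple's WWDC 2024 enterprise-APIs session. It assumes the app carries the enterprise license file and the camera-access entitlement; treat the exact type and method names (`CameraFrameProvider`, `CameraVideoFormat`, `cameraFrameUpdates(for:)`) as assumptions to verify against the shipping SDK.

```swift
import ARKit
import CoreVideo

// Sketch: stream the main camera feed (visionOS enterprise API).
// Requires the enterprise license and camera-access entitlement.
func streamMainCamera() async throws {
    let session = ARKitSession()
    let cameraProvider = CameraFrameProvider()

    // Pick a supported video format for the left main camera.
    guard let format = CameraVideoFormat
        .supportedVideoFormats(for: .main, cameraPositions: [.left])
        .first else { return }

    try await session.run([cameraProvider])

    // Frames arrive as an async sequence; hand each pixel buffer to
    // downstream analysis (e.g., an anomaly-detection model).
    guard let updates = cameraProvider.cameraFrameUpdates(for: format) else { return }
    for await frame in updates {
        if let sample = frame.sample(for: .left) {
            let pixelBuffer: CVPixelBuffer = sample.pixelBuffer
            _ = pixelBuffer // run inference here
        }
    }
}
```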
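
The object tracking workflow (items 2 and 3) centers on a reference object trained in Create ML and an `ObjectTrackingProvider`. Here is a minimal sketch that loads a `.referenceobject` file and pins a text label to the tracked object; the file name "Machine" and the label text are hypothetical, and enterprise apps would additionally tune the provider's tracking parameters.

```swift
import ARKit
import RealityKit

// Sketch: track a known physical object and attach virtual content.
// "Machine.referenceobject" is a hypothetical Create ML training output.
func trackMachine(content: RealityViewContent) async throws {
    guard let url = Bundle.main.url(forResource: "Machine",
                                    withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)

    let session = ARKitSession()
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await session.run([provider])

    // Place instructional content at the pose of each newly found object.
    for await update in provider.anchorUpdates where update.event == .added {
        let label = ModelEntity(mesh: .generateText("Step 1: open the valve"))
        label.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
        content.add(label)
    }
}
```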
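
For scene understanding, a sketch combining plane detection in all orientations with the new room anchors. The `.slanted` alignment and the `RoomTrackingProvider`/`isCurrentRoom` names follow the WWDC 2024 material and should be treated as assumptions.

```swift
import ARKit

// Sketch: observe planes in any orientation plus per-room anchors.
func observeScene() async throws {
    let session = ARKitSession()
    let planes = PlaneDetectionProvider(alignments: [.horizontal, .vertical, .slanted])
    let rooms = RoomTrackingProvider()
    try await session.run([planes, rooms])

    await withTaskGroup(of: Void.self) { group in
        group.addTask {
            // Planes now arrive in any orientation, including slanted.
            for await update in planes.anchorUpdates {
                print("Plane \(update.anchor.id): \(update.anchor.alignment)")
            }
        }
        group.addTask {
            // isCurrentRoom flips as the user moves between rooms.
            for await update in rooms.anchorUpdates {
                print("Room \(update.anchor.id), current: \(update.anchor.isCurrentRoom)")
            }
        }
    }
}
```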
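
Finally, a sketch of the Swift-first Vision API: requests are value types and run with async/await, with no request-handler boilerplate. Names follow the "Discover Swift enhancements in the Vision framework" session; the exact property and overload names are assumptions.

```swift
import Foundation
import Vision

// Sketch: recognize text in an image with the new async Vision API.
func readText(from imageURL: URL) async throws -> [String] {
    var request = RecognizeTextRequest()
    request.recognitionLevel = .accurate

    // perform(on:) accepts an image source and returns observations directly.
    let observations = try await request.perform(on: imageURL)
    return observations.compactMap { $0.topCandidates(1).first?.string }
}
```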

For more detailed information, you can refer to the relevant WWDC 2024 sessions.