What is a location anchor?
Asked on 07/31/2024
A location anchor, as discussed in Apple's WWDC sessions, is a type of anchor used in augmented reality (AR) to represent a specific position and orientation in the real world. It can be used to track virtual content and place it relative to real-world objects or spaces. Here are some key points about location anchors from the sessions:
- Room Anchors: These are used to model the geometry of a specific confined space, such as a room. They include information about the surrounding walls and floor, which can be useful for placing virtual content accurately within that space. Room anchors can help create experiences where virtual objects come to life when you enter a certain room (Create enhanced spatial computing experiences with ARKit).
- Trackable Anchors: These are entities that the system can track over the course of a session. For example, a person's hand can be a trackable anchor entity. These anchors are used to position rendered content in AR applications (Render Metal with passthrough in visionOS).
- Object Anchors: These allow real-world objects to be used as anchors. For instance, everyday objects can reveal useful information or launch immersive experiences when tracked by the app. This is particularly useful for creating interactive and informative AR experiences (Explore object tracking for visionOS).
These anchors are fundamental in creating immersive and interactive AR experiences by accurately mapping virtual content to the real world.
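The anchor types above can be sketched in code. The following is a minimal sketch, assuming the ARKit data-provider APIs available on visionOS (`ARKitSession`, `RoomTrackingProvider`, `HandTrackingProvider`); it observes room anchors and hand anchors as the system updates them:

```swift
import ARKit

// A minimal sketch of observing room and hand anchors in visionOS,
// assuming the ARKit data-provider APIs (visionOS 2 and later for rooms).
let session = ARKitSession()
let roomTracking = RoomTrackingProvider()
let handTracking = HandTrackingProvider()

func observeAnchors() async throws {
    try await session.run([roomTracking, handTracking])

    // Room anchors model the confined space (walls, floor) around you.
    Task {
        for await update in roomTracking.anchorUpdates {
            let room = update.anchor  // RoomAnchor
            // e.g. reveal room-specific virtual content here
            print("Room anchor updated:", room.id)
        }
    }

    // Hand anchors are trackable anchors: the system keeps updating
    // their transforms over the course of the session.
    for await update in handTracking.anchorUpdates {
        let hand = update.anchor  // HandAnchor
        print(hand.chirality, hand.originFromAnchorTransform)
    }
}
```

Each provider's `anchorUpdates` sequence delivers added, updated, and removed events, so apps typically keep a dictionary of anchor IDs to placed content and reconcile it on each update.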

Build a spatial drawing app with RealityKit
Harness the power of RealityKit through the process of building a spatial drawing app. As you create an eye-catching spatial experience that integrates RealityKit with ARKit and SwiftUI, you’ll explore how resources work in RealityKit and how to use features like low-level mesh and texture APIs to achieve fast updates of the user’s brush strokes.

Render Metal with passthrough in visionOS
Get ready to extend your Metal experiences for visionOS. Learn best practices for integrating your rendered content with people’s physical environments with passthrough. Find out how to position rendered content to match the physical world, reduce latency with trackable anchor prediction, and more.
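The latency-reduction idea mentioned here, trackable anchor prediction, amounts to querying anchor poses at the frame's predicted presentation time rather than the current time. A sketch, assuming the Compositor Services drawable API and a running `WorldTrackingProvider` (the comments mark where app-specific Metal encoding would go):

```swift
import ARKit
import CompositorServices

// A sketch of anchor prediction for Metal passthrough rendering:
// query the device pose at the frame's *predicted presentation time*
// so rendered content lines up with the physical world.
func render(drawable: LayerRenderer.Drawable,
            worldTracking: WorldTrackingProvider) {
    // Convert the predicted presentation time into an ARKit timestamp.
    let presentationTime = drawable.frameTiming.presentationTime
    let timestamp = LayerRenderer.Clock.Instant.epoch
        .duration(to: presentationTime).timeInterval

    // Ask ARKit where the device will be at that moment.
    let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: timestamp)
    drawable.deviceAnchor = deviceAnchor

    // ... encode Metal commands, positioning content with
    // deviceAnchor?.originFromAnchorTransform ...
}
```

Handing the predicted `deviceAnchor` back to the drawable lets the compositor reproject the rendered frame if the prediction drifts before presentation.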

Explore object tracking for visionOS
Find out how you can use object tracking to turn real-world objects into virtual anchors in your visionOS app. Learn how you can build spatial experiences with object tracking from start to finish. Find out how to create a reference object using machine learning in Create ML, and attach content relative to your target object with Reality Composer Pro, RealityKit, or ARKit APIs.
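The ARKit-API path described in this session can be sketched as follows. This assumes a `.referenceobject` file produced by Create ML's object-tracking template is bundled with the app; the resource name "Globe" is a hypothetical placeholder:

```swift
import ARKit

// A sketch of object tracking in visionOS: load a reference object
// trained in Create ML, then anchor content to the tracked real object.
func trackObject() async throws {
    // "Globe" is a hypothetical bundled .referenceobject resource.
    guard let url = Bundle.main.url(forResource: "Globe",
                                    withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)

    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
    let session = ARKitSession()
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        let anchor = update.anchor  // ObjectAnchor
        // Place virtual content relative to the tracked object's pose.
        print(anchor.isTracked, anchor.originFromAnchorTransform)
    }
}
```

The same reference object can instead be dropped into Reality Composer Pro and used with RealityKit's `AnchorEntity`, which avoids hand-rolling the update loop.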