How do I create an AR object?
Generated on 3/10/2025
To create an AR object, you can follow these steps based on the information from the WWDC sessions:
- 3D Model Creation: Start with a 3D model of the object you want to track. The model should be in the USDZ format, which is well suited to AR applications. You can create one with a tool such as Reality Composer Pro, or find a model online.
- Reference Object Creation: Use the Create ML app to generate a reference object from your 3D model. The app's spatial object tracking template trains a machine learning model to recognize the object. This step is what enables object tracking in AR applications.
- Object Tracking: Once you have a reference object, use ARKit to track it. Load the reference object into your ARKit session and configure an object tracking provider, which lets you anchor virtual content to the real-world object.
For more detailed guidance, you can refer to the session Explore object tracking for visionOS (05:07) which covers creating a reference object, and Create enhanced spatial computing experiences with ARKit (06:46) which discusses object tracking.
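The tracking step above might look like the following sketch using ARKit's visionOS APIs. The file name "MyObject.referenceobject" and the exact setup are illustrative assumptions, not part of the sessions' sample code:

```swift
import ARKit

// Minimal sketch of object tracking on visionOS. Assumes a
// "MyObject.referenceobject" file exported from Create ML is
// bundled with the app (the name is hypothetical).
func runObjectTracking() async throws {
    guard let url = Bundle.main.url(forResource: "MyObject",
                                    withExtension: "referenceobject") else { return }

    // Load the reference object produced by Create ML.
    let referenceObject = try await ReferenceObject(from: url)

    // Configure a provider that tracks that object.
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])

    // Run the provider in an ARKit session.
    let session = ARKitSession()
    try await session.run([provider])

    // React to anchor updates and attach virtual content.
    for await update in provider.anchorUpdates {
        switch update.event {
        case .added, .updated:
            // The anchor's transform gives the object's pose in the
            // world; position your RealityKit entities relative to it.
            print("Tracking object at \(update.anchor.originFromAnchorTransform)")
        case .removed:
            print("Lost track of object")
        }
    }
}
```

In practice you would attach entities to the anchor in a RealityKit scene rather than printing, but the load/run/observe shape of the session is the core pattern.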

Render Metal with passthrough in visionOS
Get ready to extend your Metal experiences for visionOS. Learn best practices for integrating your rendered content with people’s physical environments with passthrough. Find out how to position rendered content to match the physical world, reduce latency with trackable anchor prediction, and more.

What’s new in Create ML
Explore updates to Create ML, including interactive data source previews and a new template for building object tracking models for visionOS apps. We’ll also cover important framework improvements, including new time-series forecasting and classification APIs.

Create enhanced spatial computing experiences with ARKit
Learn how to create captivating immersive experiences with ARKit’s latest features. Explore ways to use room tracking and object tracking to further engage with your surroundings. We’ll also share how your app can react to changes in your environment’s lighting on this platform. Discover improvements in hand tracking and plane detection which can make your spatial experiences more intuitive.

Explore object tracking for visionOS
Find out how you can use object tracking to turn real-world objects into virtual anchors in your visionOS app. Learn how you can build spatial experiences with object tracking from start to finish. Find out how to create a reference object using machine learning in Create ML and attach content relative to your target object in Reality Composer Pro, RealityKit or ARKit APIs.

Introducing enterprise APIs for visionOS
Find out how you can use new enterprise APIs for visionOS to create spatial experiences that enhance employee and customer productivity on Apple Vision Pro.

Optimize for the spatial web
Discover how to make the most of visionOS capabilities on the web. Explore recent updates like improvements to selection highlighting, and the ability to present spatial photos and panorama images in fullscreen. Learn to take advantage of existing web standards for dictation and text-to-speech with WebSpeech, spatial soundscapes with WebAudio, and immersive experiences with WebXR.

Optimize your 3D assets for spatial computing
Dive into an end-to-end workflow for optimized 3D asset creation. Discover best practices for optimizing meshes, materials, and textures in your digital content creation tool. Learn how to harness shader graph, baking, and material instances to enhance your 3D scene while optimizing performance. Take advantage of native tools to work more effectively with your assets and improve your app’s performance.