What new features are in Core Audio

Generated on 11/27/2024


The new features in Core Audio presented at WWDC 2024 include several advancements across different sessions:

  1. RealityKit Audio Enhancements: The session "Enhance your spatial computing app with RealityKit audio" introduces new RealityKit audio APIs for spatial computing experiences. These include spatial audio sources with real-time generated audio, collision sounds, reverb presets, and audio mix groups for building immersive soundscapes. The session also covers using audio units and the audio generator controller for real-time audio processing.

  2. Game Audio Frameworks: In the session "Port advanced games to Apple platforms," Apple highlights the PHASE framework (Physical Audio Spatialization Engine) for creating rich, dynamic audio experiences in games, including simulating complex effects such as geometric sound occlusion. The session also covers integrating popular middleware such as Audiokinetic Wwise and FMOD for playing music and sound effects.

  3. Real-time ML Inference with Audio: The session "Support real-time ML inference on the CPU" discusses pairing audio units with machine learning to create or modify audio and MIDI data, for example separating a mix to isolate vocals or applying timbre transfer to change instrument sounds.
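As a rough illustration of the RealityKit audio APIs called out in item 1, here is a minimal sketch. It assumes RealityKit's SpatialAudioComponent, ReverbComponent, and AudioFileResource types; the asset file name is a placeholder, and exact signatures may differ from the shipping API.

```swift
import RealityKit

// Entity that will emit spatialized sound.
let speaker = Entity()

// Spatial audio source: gain is in decibels relative to full volume.
// (SpatialAudioComponent is assumed from the RealityKit audio APIs.)
speaker.components.set(SpatialAudioComponent(gain: -6))

// Reverb preset for the surrounding space (assumed ReverbComponent API).
speaker.components.set(ReverbComponent(reverb: .preset(.largeRoom)))

// Load and play a file-based resource; "ambience.wav" is a placeholder asset.
if let resource = try? AudioFileResource.load(named: "ambience.wav") {
    let controller = speaker.playAudio(resource)
    controller.gain = -6  // playback controllers also expose per-playback gain
}
```

The component-based design means spatialization and reverb travel with the entity: moving the entity in the scene moves its sound source, with no separate audio graph to keep in sync.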

These features collectively enhance the capabilities of Core Audio, providing developers with more tools to create immersive and interactive audio experiences across Apple's platforms.

Discover RealityKit APIs for iOS, macOS and visionOS

Learn how new cross-platform APIs in RealityKit can help you build immersive apps for iOS, macOS, and visionOS. Check out the new hover effects, lights and shadows, and portal crossing features, and view them in action through real examples.

Platforms State of the Union

Discover the newest advancements on Apple platforms.

Port advanced games to Apple platforms

Discover how simple it can be to reach players on Apple platforms worldwide. We’ll show you how to evaluate your Windows executable on Apple silicon, start your game port with code samples, convert your shader code to Metal, and bring your game to Mac, iPhone, and iPad. Explore enhanced Metal tools that understand HLSL shaders to validate, debug, and profile your ported shaders on Metal.
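This session also presents PHASE (Physical Audio Spatialization Engine) for dynamic game audio. A minimal engine-and-listener setup, assuming the PHASE framework's Swift API, might look like the following; the sound file URL and asset identifier are placeholders, and this is a sketch rather than a complete occlusion setup.

```swift
import Foundation
import PHASE
import simd

// Build a PHASE engine with a listener and one registered sound asset.
func makeAudioEngine() throws -> PHASEEngine {
    // .automatic lets PHASE drive its own update loop.
    let engine = PHASEEngine(updateMode: .automatic)

    // The listener represents the player's ears in the scene graph.
    let listener = PHASEListener(engine: engine)
    listener.transform = matrix_identity_float4x4
    try engine.rootObject.addChild(listener)

    // Register a sound file as a memory-resident asset (URL is a placeholder).
    let url = URL(fileURLWithPath: "/path/to/footstep.wav")
    _ = try engine.assetRegistry.registerSoundAsset(
        url: url,
        identifier: "footstep",
        assetType: .resident,
        channelLayout: nil,
        normalizationMode: .dynamic)

    try engine.start()
    return engine
}
```

From here, a game would attach PHASESource objects to in-world emitters and trigger registered assets as sound events; PHASE then spatializes them relative to the listener's transform each frame.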

Support real-time ML inference on the CPU

Discover how you can use BNNSGraph to accelerate the execution of your machine learning model on the CPU. We will show you how to use BNNSGraph to compile and execute a machine learning model on the CPU and share how it provides real-time guarantees such as no runtime memory allocation and single-threaded running for audio or signal processing models.

Enhance your spatial computing app with RealityKit audio

Elevate your spatial computing experience using RealityKit audio. Discover how spatial audio can make your 3D immersive experiences come to life. From ambient audio, reverb, to real-time procedural audio that can add character to your 3D content, learn how RealityKit audio APIs can help make your app more engaging.