Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

All subtopics
Posts under Media Technologies topic

Post

Replies

Boosts

Views

Activity

Apple Music API High Error Rate
I am getting high error rates from the Apple Music API. This has been happening for months now, and it is quite frustrating. It is a mix of 404, 504, and random 500 errors. I hit these endpoints all of the time, so it is not like I am hitting a resource that doesn't exist. Why is this happening? Is this a known issue that is getting worked on?
0
0
97
Jun ’25
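A possible client-side mitigation while the server-side question stays open: retry 500-class responses (including 504) with exponential backoff and treat 404s as permanent. A minimal sketch assuming plain URLSession calls against the API; the function name and attempt counts are illustrative:

import Foundation

// Retry transient Apple Music API failures (500/504) with exponential
// backoff; 404 and other client errors are not retried.
func fetchWithRetry(_ request: URLRequest, attempts: Int = 4) async throws -> Data {
    var delay: UInt64 = 500_000_000 // 0.5 s, in nanoseconds
    for attempt in 1...attempts {
        let (data, response) = try await URLSession.shared.data(for: request)
        let status = (response as? HTTPURLResponse)?.statusCode ?? 0
        switch status {
        case 200..<300:
            return data
        case 500...599 where attempt < attempts:
            try await Task.sleep(nanoseconds: delay) // back off, then retry
            delay *= 2
        default:
            throw URLError(.badServerResponse) // 404 etc.: give up immediately
        }
    }
    throw URLError(.badServerResponse)
}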
scaleTimeRange causes noise in sound
I'm using AVFoundation to build a multi-track editor app that can insert multiple tracks and clips, including scaling some clips to change their speed (I'm also not sure whether AVFoundation is the best choice for me). After scaling with the scaleTimeRange API, there is some short noise in playback. Also, playback of the AVMutableComposition with AVPlayer and an AVPlayerItem is sometimes fine, but after exporting with AVAssetReader, the result file contains some short bursts of noise. Not sure why. Here is an example project, which can be built and run directly: https://github.com/luckysmg/daily_images/raw/refs/heads/main/TestDemo.zip
0
0
138
Jul ’25
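For reference, the scaling step itself is small; one plausible source of clicks is range boundaries that don't land exactly on sample frames, so here is a sketch that keeps every CMTime on the source track's own timescale (makeStretchedComposition is an illustrative name, assuming a single audio track):

import AVFoundation

// Insert a clip into a composition, then stretch it to half speed.
func makeStretchedComposition(from source: AVAssetTrack,
                              duration: CMTime) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    let track = composition.addMutableTrack(withMediaType: .audio,
                                            preferredTrackID: kCMPersistentTrackID_Invalid)!
    let range = CMTimeRange(start: .zero, duration: duration)
    try track.insertTimeRange(range, of: source, at: .zero)
    // Map the original range onto twice its duration (half speed).
    track.scaleTimeRange(range, toDuration: CMTimeMultiplyByFloat64(duration, multiplier: 2.0))
    return composition
}

If the noise comes from the rate conversion itself, choosing a different audioTimePitchAlgorithm (e.g. .spectral) on the AVPlayerItem and on the export may also be worth trying.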
Getting CoreMediaErrorDomain -15628 playback failure in iOS 26 (AVPlayer, HLS stream)
Hi, after updating to iOS 26, our app is experiencing playback failures with AVPlayer. The same code and streams work fine on iOS 18 and earlier. Error: Domain [CoreMediaErrorDomain] Code [-15628] Description [The operation couldn’t be completed.] Underlying Error: Domain [(null)] Code [0] Description [(null)]. Environment: iOS 26; HLS (m3u8) stream with segment (.ts) files. Observed behaviour: we don’t have concrete steps to reproduce the issue, but so far we have observed that this error tends to occur under poor network conditions.
0
5
537
Sep ’25
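When the top-level error is this opaque, the HLS error log usually carries more detail (HTTP status code, URI of the failing segment). A short diagnostic sketch, assuming `item` is the failing AVPlayerItem:

import AVFoundation

// Log each new HLS error-log entry as it is recorded.
let observer = NotificationCenter.default.addObserver(
    forName: .AVPlayerItemNewErrorLogEntry, object: item, queue: .main
) { note in
    guard let item = note.object as? AVPlayerItem,
          let event = item.errorLog()?.events.last else { return }
    print("HLS error:", event.errorStatusCode, event.errorDomain,
          event.errorComment ?? "-", event.uri ?? "-")
}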
coreaudiod display sleep
Hi all, as soon as audio is played in any app, coreaudiod inserts a sleep-prevention assertion for both the system AND the display. Can I somehow stop the insertion of the display-sleep assertion? pid 223(coreaudiod): [0x00004e9e00058dc2] 00:03:18 PreventUserIdleDisplaySleep named: "com.apple.audio.AppleGFXHDAEngineOutputDP:10001:0:{B31A-08C6-00000000}.context.preventuseridledisplaysleep" Created for PID: 4145. Here PID 4145 is Spotify, but it doesn't matter which app is playing the audio. Any help would be appreciated. Thanks
0
0
84
Nov ’25
NonLiDAR Photogrammetry
Hello! In iOS 17.5, photogrammetry sessions cannot be performed on iPhones without LiDAR, but I don't think there is much difference in GPU performance between those with and without LiDAR. For example, the chips in the iPhone 14 Pro and iPhone 15 are the same A16 Bionic, and I think the GPU performance is also the same. Despite this, photogrammetry can be performed on the iPhone 14 Pro but not on the iPhone 15. Why is this? In fact, we have confirmed that if you transfer images taken with an iPhone 16 without LiDAR to an iPhone 16 Pro and run a photogrammetry session using those images, a 3D model can be generated. Also, will photogrammetry be possible on high-performance iPhones without LiDAR in the future?
0
0
93
Apr ’25
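Whatever the rationale for the gating, apps can at least branch on runtime support rather than on device model. A minimal sketch, assuming RealityKit's PhotogrammetrySession.isSupported check (available on iOS 17+, if memory serves):

import RealityKit

if PhotogrammetrySession.isSupported {
    // LiDAR-class device: reconstruct on-device.
} else {
    // e.g. iPhone 15: capture images here and run the session on a
    // supported device or a Mac, as the post describes.
}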
AVAudioUnitSampler Bug with Consolidated Audio Files
Hello, I've discovered a buffer initialization bug in AVAudioUnitSampler that happens when loading presets with multiple zones referencing different regions of the same audio file (the monolith/concatenated-samples approach). Almost all zones output silence (i.e. zeros) at the beginning of playback instead of starting with actual audio data.

The problem
Setup: a single audio file (monolith) containing multiple concatenated samples; multiple zones in an .aupreset, each with different sample start and sample end values pointing to different regions of the same file; all zones load successfully without errors.
Expected behavior: all zones should play their respective audio regions immediately from the first sample.
Actual behavior: the last zone in the zone list works perfectly and plays audio immediately; all other zones output [0, 0, 0, 0, ..., audio_data] instead of [real_audio_data]. The number of zeros varies from event to event for each zone, from a couple of samples (<30) up to several buffers. After the initial zeros the correct audio plays normally, so there is no shift in playback, just missing samples at the beginning.

Minimal reproduction
1. Create a test monolith audio file: a single WAV file with 3 concatenated 1-second samples (44.1 kHz). Sample 1: frames 0-44099 (constant amplitude 0.3). Sample 2: frames 44100-88199 (constant amplitude 0.6). Sample 3: frames 88200-132299 (constant amplitude 0.9).
2. Create a test preset: an .aupreset with 3 zones, all referencing the same file (pseudo code):
<Zone array>
<zone 1> start sample: 0, end sample: 44099, note: 60, waveform: ref_to_monolith.wav;
<zone 2> start sample: 44100, end sample: 88199, note: 62, waveform: ref_to_monolith.wav;
<zone 3> start sample: 88200, end sample: 132299, note: 64, waveform: ref_to_monolith.wav;
</Zone array>
3. Load and test:
// Load the preset into AVAudioUnitSampler
let sampler = AVAudioUnitSampler()
try sampler.loadInstrument(at: presetURL)
// Play each zone (MIDI notes C4=60, D4=62, E4=64)
sampler.startNote(60, withVelocity: 64, onChannel: 0) // Zone 1
sampler.startNote(62, withVelocity: 64, onChannel: 0) // Zone 2
sampler.startNote(64, withVelocity: 64, onChannel: 0) // Zone 3
4. Observed result:
Zone 1 (C4): [0, 0, 0, ..., 0.3, 0.3, 0.3] ❌ zeros at beginning
Zone 2 (D4): [0, 0, 0, ..., 0.6, 0.6, 0.6] ❌ zeros at beginning
Zone 3 (E4): [0.9, 0.9, 0.9, ...] ✅ works correctly (last zone)

What I've extensively tested
What DOES work: separate files per zone, where each zone references its own individual audio file; all zones play correctly without zeros. Problem: not viable for iOS apps with 500+ sample libraries due to file handle limitations.
What DOESN'T work (all tested):
1. Different audio formats: CAF (Float32 PCM, Int16 PCM, both interleaved and non-interleaved), M4A (AAC compressed), WAV (uncompressed), SF2 (SoundFont 2). The bug persists across all formats.
2. CAF region chunks: created CAF files with embedded region chunks defining the zone boundaries and set zones with no sampleStart/sampleEnd in the preset (nil values). AVAudioUnitSampler completely ignores the CAF region metadata; the bug persists.
3. Unique waveform IDs: gave each zone a unique waveform ID (268435456, 268435457, 268435458), each with its own file reference entry (all pointing to the same physical file), hypothesizing this might trigger separate buffer initialization. The bug persists with no improvement.
4. Different sample rates: tested 44.1 kHz, 48 kHz, and 96 kHz; the bug occurs at all sample rates.
5. Mono vs stereo: the bug occurs with both mono and stereo files.

Environment: macOS Sonoma 14.x (tested across multiple minor versions); iOS 17.x (same results); Xcode 16.x; AVFoundation and AudioToolbox. Reproducibility: 100% reproducible with the setup described above.

Impact and use case
This bug severely impacts professional music applications that need: small file sizes (monolith files allow sharing compressed audio data as AAC/M4A); to respect iOS file handle limits (opening 400+ individual sample files is not viable on iOS); performance (single-file loading is much faster than hundreds of individual files); and standard industry practice (monolith/concatenated samples are used by EXS24, Kontakt, and most professional samplers). Current impact: monolith files cannot be used with AVAudioUnitSampler on iOS; you are forced to choose between unusable audio (zeros at start) or hitting iOS file limits; no viable workaround exists.

Root cause hypothesis
The bug appears to be in AVAudioUnitSampler's internal buffer initialization when multiple zones share the same source audio file and each zone specifies different sampleStart/sampleEnd offsets. Key observation: the last zone in the zone array always works correctly. This is NOT related to file permissions or security-scoped resources (separate files work fine), audio codec issues (it happens with uncompressed PCM too), or preset parsing (the preset loads correctly and all zones are valid).

Questions
Is this a known issue? I couldn't find any documentation, bug reports, or discussions about it. Is there ANY workaround that allows monolith files to work with AVAudioUnitSampler? Alternative APIs: is there a different API or approach on iOS that properly supports monolith sample files?
0
0
379
Dec ’25
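For anyone trying to reproduce this, a deterministic way to count the leading zeros is to render the sampler offline rather than listening. A hypothetical harness; the preset path is a placeholder for the repro preset above:

import AVFoundation

// Render zone 1 offline and count zero samples before the first audio.
func measureLeadingZeros() throws -> Int {
    let engine = AVAudioEngine()
    let sampler = AVAudioUnitSampler()
    engine.attach(sampler)
    engine.connect(sampler, to: engine.mainMixerNode, format: nil)

    let format = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)!
    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 4096)
    try engine.start()
    try sampler.loadInstrument(at: URL(fileURLWithPath: "monolith.aupreset")) // placeholder

    sampler.startNote(60, withVelocity: 64, onChannel: 0) // zone 1 (C4)

    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!
    var zeros = 0
    for _ in 0..<32 { // ~3 s of audio in 4096-frame slices
        guard try engine.renderOffline(buffer.frameCapacity, to: buffer) == .success else { break }
        let ch = buffer.floatChannelData![0]
        for i in 0..<Int(buffer.frameLength) {
            if ch[i] != 0 { return zeros } // first non-zero sample found
            zeros += 1
        }
    }
    return zeros
}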
Android MusicKit canSetRadioLikeState and setRadioLikeState
The Android MusicKit documentation documents two functions that are not actually exposed/added to the SDK. https://aninterestingwebsite.com/musickit/android/com/apple/android/music/playback/controller/MediaPlayerController.html#canSetRadioLikeState-- https://aninterestingwebsite.com/musickit/android/com/apple/android/music/playback/controller/MediaPlayerController.html#setRadioLikeState-int- Is the documentation stale or is the SDK out of date?
0
0
107
4d
AVAudioEngine Voice Processing Fails with Mismatched Input/Output Devices: AggregateDevice Channel Count Mismatch
I'm encountering errors while using AVAudioEngine with voice processing enabled (setVoiceProcessingEnabled(true)) in scenarios where the input and output audio devices are not the same. This issue arises specifically with mismatched devices, preventing the application from functioning as expected. Works: Paired devices (e.g., MacBook Pro mic → MacBook Pro speakers) Fails: Mismatched devices (e.g., AirPods mic → MacBook Pro speakers) When using paired input and output devices: The setup works as expected. Example: MacBook Pro microphone → MacBook Pro speakers. When using mismatched devices: AVAudioEngine setup fails during aggregate device construction. Example: AirPods microphone → MacBook Pro speakers. Error logs indicate a channel count mismatch. Here are the partial logs. Due to the content limit, I cannot post the entire logs. AUVPAggregate.cpp:1000 client-side input and output formats do not match (err=-10875) AUVPAggregate.cpp:1036 err=-10875 AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error -10875 AggregateDevice.mm:329 Failed expectation of constructed aggregate (312): mInput.streamChannelCounts == inputStreamChannelCounts AggregateDevice.mm:331 Failed expectation of constructed aggregate (312): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U) AggregateDevice.mm:182 error fetching default pair AggregateDevice.mm:329 Failed expectation of constructed aggregate (336): mInput.streamChannelCounts == inputStreamChannelCounts AggregateDevice.mm:331 Failed expectation of constructed aggregate (336): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U) AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702] AudioHardware-mac-imp.cpp:3484 AudioDeviceSetProperty: no device with given ID AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702] AggregateDevice.mm:182 error fetching default pair AggregateDevice.mm:329 Failed expectation of constructed aggregate (348): mInput.streamChannelCounts == inputStreamChannelCounts AggregateDevice.mm:331 Failed expectation of constructed aggregate (348): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U) Is it possible to use voice processing with different input/output devices? If yes, are there any specific configurations required to handle mismatched devices? How can we resolve channel count mismatch errors during aggregate device construction? Are there settings or API adjustments to enforce compatibility between input/output devices? Are there any workarounds or alternative approaches to achieve voice processing functionality with mismatched devices? For instance, can we force an intermediate channel configuration or downmix input/output formats?
0
0
305
Dec ’25
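Whether mismatched devices are supported at all is exactly the open question, but two details that commonly matter are enabling voice processing while the engine is stopped and using the input node's own hardware format instead of a fixed stereo one. A sketch under those assumptions:

import AVFoundation

let engine = AVAudioEngine()

do {
    // Enable voice processing on the I/O nodes before starting the engine.
    try engine.inputNode.setVoiceProcessingEnabled(true)
    try engine.outputNode.setVoiceProcessingEnabled(true)

    // Tap with the input node's real format so a mono AirPods mic doesn't
    // get forced into a 2-channel layout.
    let inputFormat = engine.inputNode.inputFormat(forBus: 0)
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { buffer, _ in
        // process microphone audio here
    }

    try engine.start()
} catch {
    print("voice-processing setup failed:", error)
}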
How to Get Device Orientation in Background (PiP) Mode?
My app is a camera app that supports Picture-in-Picture (PiP) mode. Normally, when the device rotates, I get the device orientation from iOS and use it to rotate the camera feed so that the preview stays correctly aligned. However, when the app enters PiP mode, it is considered to be in the background, and I can no longer receive orientation updates from the system. As a result, I can’t apply rotation corrections to the camera video in PiP mode. Is there any way to retrieve device orientation while the app is in the background (specifically during PiP mode)? Any guidance would be greatly appreciated. Thank you!
0
0
76
May ’25
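One workaround (an assumption, not a documented PiP facility): CoreMotion keeps delivering updates while the app is running in PiP, so orientation can be derived from gravity instead of UIDevice notifications. A sketch; the exact orientation mapping may need flipping to match your preview transform:

import CoreMotion
import UIKit

let motion = CMMotionManager()

// Infer device orientation from the gravity vector at ~10 Hz.
func startOrientationUpdates(_ onChange: @escaping (UIDeviceOrientation) -> Void) {
    guard motion.isDeviceMotionAvailable else { return }
    motion.deviceMotionUpdateInterval = 1.0 / 10.0
    motion.startDeviceMotionUpdates(to: .main) { data, _ in
        guard let g = data?.gravity else { return }
        // Pick the dominant axis; z-dominant (device flat) readings
        // simply repeat the last mapping.
        let orientation: UIDeviceOrientation = abs(g.x) > abs(g.y)
            ? (g.x > 0 ? .landscapeRight : .landscapeLeft)
            : (g.y > 0 ? .portraitUpsideDown : .portrait)
        onChange(orientation)
    }
}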
MusicKit broken in simulator with current tools? Can't get token.
Just updated my computer, phone, and dev tools to the latest versions of everything. Now when I run my app in a previously-working simulator (iPhone 16 w. iOS 18.5) I get: Failed retrieving MusicKit tokens: fetching the developer token is not supported in the simulator when running on this version of macOS; please upgrade your Mac to macOS Ventura. Also: <ICCloudServiceStatusMonitor: 0x600003320e60>: Invoking 1 completion handler for MusicKit tokens. error=<ICError.DeveloperTokenFetchingFailed (-8200) "Failed to fetch media token from <AMSMediaTokenService: 0x6000029049a0>." { underlyingErrors: [ <AMSErrorDomain.300 "Token request encoding failed The token request encoder finished with an error." { userInfo: { AMSDescription : "Token request encoding failed", AMSFailureReason : "The token request encoder finished with an error." }; underlyingErrors: [ <AMSErrorDomain.5 "Anisette Failed Platform not supported" { userInfo: { AMSDescription : "Anisette Failed", AMSFailureReason : "Platform not supported" }; Anybody know what gives here? The Ventura message is absurd because I'm on Tahoe 26.1. The same code works on a physical phone running iOS 26.
0
0
317
Nov ’25
Playing FairPlay encrypted content works fine on iOS 17 but won't play on iOS 26
For devices that are still on iOS 17, playing FairPlay-encrypted content still works fine. For devices that I've upgraded to iOS 26, playing the same content in the same app no longer works. I can advance and see the stream frames by tapping +10 and scrubbing, so I know the content is being decrypted, but tapping the play button of AVPlayer for an AVPlayerItem now does nothing on iOS 26. Is this a breaking change, or is there a stricter requirement that I now have to implement?
0
0
198
Oct ’25
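To narrow down whether play() is being ignored or the player is stuck waiting, KVO on timeControlStatus reports the waiting reason. A short diagnostic sketch, assuming `player` holds the FairPlay item:

import AVFoundation

// Keep a strong reference to `observation` for as long as you observe.
let observation = player.observe(\.timeControlStatus, options: [.new]) { player, _ in
    switch player.timeControlStatus {
    case .waitingToPlayAtSpecifiedRate:
        print("waiting:", player.reasonForWaitingToPlay?.rawValue ?? "unknown")
    case .paused:
        print("paused")
    case .playing:
        print("playing")
    @unknown default:
        break
    }
}
player.play()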
How to toggle usb device
When I use IOKit/usb/IOUSBLib to toggle the built-in camera, I get an error: ret IOReturn -536870210. How can I resolve it? Can I use IOUSBLib to disable or hide the built-in camera? My environment: Model Name: MacBook Pro; ProductVersion: 15.5; Model Identifier: MacBookPro15,2; Processor Name: Quad-Core Intel Core i5; Processor Speed: 2.4 GHz; Number of Processors: 1

// Disable/enable a USB device
bool toggleUSBDevice(uint16_t vendorID, uint16_t productID, bool enable) {
    std::cout << (enable ? "Enabling" : "Disabling") << " USB device with VID: 0x" << std::hex
              << vendorID << ", PID: 0x" << productID << std::endl;

    // Create a matching dictionary to find USB devices with the given VID/PID
    CFMutableDictionaryRef matchingDict = IOServiceMatching(kIOUSBDeviceClassName);
    if (!matchingDict) {
        std::cerr << "Failed to create USB device matching dictionary." << std::endl;
        return false;
    }

    // Set the VID/PID matching criteria
    CFNumberRef vendorIDRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt16Type, &vendorID);
    CFNumberRef productIDRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt16Type, &productID);
    CFDictionarySetValue(matchingDict, CFSTR(kUSBVendorID), vendorIDRef);
    CFDictionarySetValue(matchingDict, CFSTR(kUSBProductID), productIDRef);
    CFRelease(vendorIDRef);
    CFRelease(productIDRef);

    // Get an iterator over the matching devices
    io_iterator_t deviceIterator;
    if (IOServiceGetMatchingServices(kIOMainPortDefault, matchingDict, &deviceIterator) != KERN_SUCCESS) {
        std::cerr << "Failed to get USB device iterator." << std::endl;
        CFRelease(matchingDict);
        return false;
    }

    io_service_t usbDevice;
    bool result = false;
    int deviceCount = 0;

    // Iterate over all matching devices
    while ((usbDevice = IOIteratorNext(deviceIterator)) != IO_OBJECT_NULL) {
        deviceCount++;

        // Get the device path
        char path[1024];
        if (IORegistryEntryGetPath(usbDevice, kIOServicePlane, path) == KERN_SUCCESS) {
            std::cout << "Found device at path: " << path << std::endl;
        }

        // Open the device
        IOCFPlugInInterface** plugInInterface = NULL;
        IOUSBDeviceInterface** deviceInterface = NULL;
        SInt32 score;
        IOReturn ret = IOCreatePlugInInterfaceForService(usbDevice, kIOUSBDeviceUserClientTypeID,
                                                         kIOCFPlugInInterfaceID, &plugInInterface, &score);
        if (ret == kIOReturnSuccess && plugInInterface) {
            ret = (*plugInInterface)->QueryInterface(plugInInterface,
                                                     CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
                                                     (LPVOID*)&deviceInterface);
            (*plugInInterface)->Release(plugInInterface);
        }
        if (ret != kIOReturnSuccess) {
            std::cerr << "Failed to open USB device interface. Error: " << ret << std::endl;
            IOObjectRelease(usbDevice);
            continue;
        }

        // Disable/enable the device
        if (enable) {
            // Enable the device by forcing a re-enumeration
            ret = (*deviceInterface)->USBDeviceReEnumerate(deviceInterface, 0);
            if (ret == kIOReturnSuccess) {
                std::cout << "Device enabled successfully." << std::endl;
                result = true;
            } else {
                std::cerr << "Failed to enable device. Error: " << ret << std::endl;
            }
        } else {
            // Disable the device by closing the connection
            ret = (*deviceInterface)->USBDeviceClose(deviceInterface);
            if (ret == kIOReturnSuccess) {
                std::cout << "Device disabled successfully." << std::endl;
                result = true;
            } else {
                std::cerr << "Failed to disable device. Error: " << ret << std::endl;
            }
        }

        // Release the device interface
        (*deviceInterface)->Release(deviceInterface);
        IOObjectRelease(usbDevice);
    }

    IOObjectRelease(deviceIterator);

    if (deviceCount == 0) {
        std::cerr << "No device found with specified VID/PID." << std::endl;
        return false;
    }
    return result;
}
0
0
231
Jun ’25
Can individual Apple Developer accounts stream full tracks with MusicKit?
I have implemented fetching Apple Music preview songs using a Swift framework integrated into a Unity app. My requirement is to fetch full tracks from a user’s Apple Music library and play them inside Unity. To do this, I understand that I need to handle authentication, generate a Developer Token, and then obtain a Music User Token to access the user’s Apple Music content. Currently, I have an Individual Apple Developer account (not Organization). Based on my research, it seems that: With an Individual account, I can implement this functionality and even upload builds to TestFlight for internal testing. However, when releasing the app publicly on the App Store, full-track playback may be restricted for Individual accounts and allowed only for Organization accounts. 👉 Can you confirm if this understanding is correct? 👉 Specifically, is it possible for an Individual account to fetch and play full-length tracks from a subscribed Apple Music user’s library (at least for internal/TestFlight testing)?
0
0
187
Sep ’25
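The account-type policy is something only Apple can confirm, but the code path itself is the standard MusicKit flow. A minimal sketch, assuming the MusicKit app service is enabled for the app ID and the device has an active Apple Music subscription:

import MusicKit

// Request authorization, fetch one library song, and play it full-length.
func playFirstLibrarySong() async throws {
    guard await MusicAuthorization.request() == .authorized else { return }

    var request = MusicLibraryRequest<Song>()
    request.limit = 1
    let response = try await request.response()
    guard let song = response.items.first else { return }

    let player = ApplicationMusicPlayer.shared
    player.queue = ApplicationMusicPlayer.Queue(for: [song])
    try await player.play()
}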
Access to favorited artists from Music API?
It's been well over a year since Apple added favoriting of artists back to Apple Music (the little star icon on an artist page), yet I still haven't seen a way to get this data for an authenticated user from the Music API. I was expecting to hear something about this during WWDC, but there have been no announcements that I've caught. Has anyone else heard anything? People assume that when they provide access to their Apple Music account, we can actually get to the data in it, and we end up looking a little dumb not being able to get this core data.
0
1
307
Jul ’25
CarPlay: Voice Conversational Entitlement Details
With the Voice Conversational Entitlement, can a CarPlay app establish a turn-based audio interface that operates in two modes: Speaking mode: Audio Session configured for playback Buffered audio Listening mode: Switch Audio Session to .record or .playAndRecord Activate SFSpeechRecognizer And continue toggling back and forth. The app should listen for responses to questions or other audio cues, and assuming those answers are correct (based on analysis of results from SFSpeechRecognizer), continue this pattern of mode 1 and 2 alternating. This appears to be a valid use of this entitlement. Does this also require the Audio App Entitlement, or is the Voice Conversational Entitlement sufficient? Are there other obstacles to this type of app that I'm not seeing? Or perhaps this is technically possible, but unlikely to pass app store review?
0
0
85
3d
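The entitlement questions aside, the session toggling itself looks like plain AVAudioSession category switching. A sketch of the two modes; the mode and option choices are illustrative:

import AVFoundation

let session = AVAudioSession.sharedInstance()

// Speaking mode: playback of buffered audio.
func enterSpeakingMode() throws {
    try session.setCategory(.playback, mode: .spokenAudio)
    try session.setActive(true)
    // schedule buffered audio on the player here
}

// Listening mode: record and run speech recognition.
func enterListeningMode() throws {
    try session.setCategory(.playAndRecord, mode: .measurement, options: [.duckOthers])
    try session.setActive(true)
    // start the SFSpeechRecognizer audio pipeline here
}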
Async AVAudioPlayerNode.scheduleBuffer stutters
My code that streams buffers into AVAudioPlayerNode stutters when one buffer finishes, before the next one is played.

while engine.isRunning {
    let framesToCopy = min(buffer.frameLength - framePosition, Self.BufferSize)
    let srcRaw = UnsafeRawPointer(srcPtr)
    let playbackBuffer = AVAudioPCMBuffer(pcmFormat: buffer.format, frameCapacity: Self.BufferSize)!
    let playbackPtr = playbackBuffer.floatChannelData![0]
    let destRaw = UnsafeMutableRawPointer(mutating: playbackPtr)
    memcpy(destRaw, srcRaw, Int(framesToCopy) * MemoryLayout<Float>.stride)
    srcPtr = srcPtr.advanced(by: Int(framesToCopy))
    playbackBuffer.frameLength = framesToCopy
    await player.scheduleBuffer(playbackBuffer, at: nil, options: [], completionCallbackType: .dataRendered)
}

I've tried to schedule multiple buffers at once using a combination of the synchronous and async versions of scheduleBuffer because I thought the delay might be the cause, but it still stutters, and the data copied into the playbackBuffer matches the source buffer. I've tried all combinations of options and completionCallbackType with no luck. I've tried increasing the buffer size, but that just spaces out the stutters because the buffer is larger. What am I missing about this API?
0
0
75
Feb ’26
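A plausible explanation for the stutter: the async scheduleBuffer only resumes after the buffer has been rendered, so awaiting it in a loop means the next buffer is copied and scheduled only after the previous one has already drained, leaving a gap of at least one render cycle. The usual pattern keeps several buffers in flight with the callback-based API; a sketch (startStreaming and nextBuffer are illustrative names):

import AVFoundation

// Prime a few buffers, then refill from the completion handler so the
// player node always has something queued.
func startStreaming(player: AVAudioPlayerNode,
                    nextBuffer: @escaping () -> AVAudioPCMBuffer?) {
    func scheduleNext() {
        guard let buf = nextBuffer() else { return }
        player.scheduleBuffer(buf, at: nil, options: [],
                              completionCallbackType: .dataConsumed) { _ in
            scheduleNext() // refill as soon as this buffer is consumed
        }
    }
    for _ in 0..<3 { scheduleNext() } // keep ~3 buffers in flight
}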
Apple Music for DJ App
Hi there, I recently launched a DJ app on the Mac App Store and was wondering how I could access songs for mixing purposes via Apple Music, just like Serato, Rekordbox, djay, and other DJ apps do? Thanks, Gunek
0
0
354
Nov ’25
Infrequent sound inconsistency with Apple Music
Since macOS 26, Apple Music has inconsistent drops in the quality of some tracks, indiscriminately. I don't know if others have experienced it. It doesn't happen on the speakers or when connected via Bluetooth, but the AUX I/O has it quite often. It is more noticeable on headphones with 48 kHz and higher frequency bandwidth. Here is the feedback: FB18062589
0
0
184
Jul ’25
Audio Unit MIDI Plugin documentation
Hi folks - I'm having trouble finding specific documentation about Audio Unit MIDI plugins - as in MIDI-only. Any suggestions welcome, as searches aren't returning much. (Too niche? User error?)
0
0
145
Dec ’25
Code snippet
Part of the .swift file that controls the haptic buzz for each heartbeat; however, it gives a 3-5 second delay between the actual heartbeat and the watch haptic/buzz for that beat.
0
0
120
Nov ’25
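Without the snippet itself this is only a guess, but the haptic call is effectively instantaneous; if the heart-rate samples arrive via HealthKit queries, they are typically delivered in batches seconds after measurement, which alone would explain a 3-5 second lag. For reference, the per-beat haptic reduces to:

import WatchKit

// Fires immediately; any perceived delay comes from when this is called.
func heartbeatTick() {
    WKInterfaceDevice.current().play(.click)
}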