Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic


CarPlay: Voice Conversational Entitlement Details
With the Voice Conversational Entitlement, can a CarPlay app establish a turn-based audio interface that operates in two modes?

- Speaking mode: audio session configured for playback; buffered audio.
- Listening mode: switch the audio session to .record or .playAndRecord and activate SFSpeechRecognizer.

The app would toggle back and forth: it listens for responses to questions or other audio cues and, assuming those answers are correct (based on analysis of the SFSpeechRecognizer results), continues alternating between the two modes. This appears to be a valid use of this entitlement. Does it also require the Audio App Entitlement, or is the Voice Conversational Entitlement sufficient? Are there other obstacles to this type of app that I'm not seeing? Or is this technically possible but unlikely to pass App Store review?

Replies: 0 · Boosts: 0 · Views: 13 · Activity: 5h
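A minimal sketch of the category toggle this post describes, using AVAudioSession; the class and method names are illustrative, and the entitlement question itself is not something code can settle.

```swift
import AVFoundation

// Hypothetical turn-based controller for the speaking/listening pattern above.
final class TurnBasedAudioController {
    private let session = AVAudioSession.sharedInstance()

    // Speaking mode: configure for playback before enqueueing buffered audio.
    func enterSpeakingMode() throws {
        try session.setCategory(.playback, mode: .spokenAudio, options: [])
        try session.setActive(true)
    }

    // Listening mode: switch to a recording-capable category, then start
    // recognition (e.g. an input tap feeding an
    // SFSpeechAudioBufferRecognitionRequest).
    func enterListeningMode() throws {
        try session.setCategory(.playAndRecord, mode: .measurement, options: [])
        try session.setActive(true)
    }
}
```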
AVPictureInPictureController with AVSampleBufferDisplayLayer: Video not scaled in PiP window on macOS
Platform: macOS 26.4 (Tahoe) · Framework: AVKit / AVFoundation · Xcode: 26.4

Summary: When using AVPictureInPictureController with ContentSource(sampleBufferDisplayLayer:playbackDelegate:) on macOS, the video content in the PiP window is not scaled to fit — it renders at 1:1 pixel resolution, showing only the bottom-left portion of the video (zoomed/cropped). The same code works correctly on iOS.

Setup:

```swift
let displayLayer = AVSampleBufferDisplayLayer()
displayLayer.videoGravity = .resizeAspect
// Host displayLayer as a sublayer of an NSView, enqueue CMSampleBuffers

let source = AVPictureInPictureController.ContentSource(
    sampleBufferDisplayLayer: displayLayer,
    playbackDelegate: self
)
let pip = AVPictureInPictureController(contentSource: source)
pip.delegate = self
```

The source display layer is 1280×720, matching the video stream resolution. PiP starts successfully — isPictureInPicturePossible is true, the PiP button works, and the PIPPanel window appears. However, the video in the PiP window (~480×270) shows only the bottom-left 480×270 pixels of the 1280×720 content, rather than scaling the full frame to fit.

Investigation: Inspecting the PiP window hierarchy reveals:

```
PIPPanel (480×270)
└─ AVPictureInPictureSampleBufferDisplayLayerView
   └─ AVPictureInPictureSampleBufferDisplayLayerHostView (layer = CALayerHost)
      └─ AVPictureInPictureCALayerHostView
```

The CALayerHost mirrors the source AVSampleBufferDisplayLayer at 1:1 pixel resolution. Unlike AVPlayerLayer-based PiP (which works correctly on macOS), the sample buffer display layer path does not apply any scaling transform to the mirrored content. On iOS, PiP with AVSampleBufferDisplayLayer works correctly because the system reparents the layer into the PiP window, so standard layer scaling applies. On macOS, the system uses CALayerHost mirroring instead, and the scaling step is missing.

What I tried (none fix the issue):
- Setting autoresizingMask on all PiP internal subviews — views resize correctly, but CALayerHost content remains at 1:1 pixel scale
- Applying CATransform3DMakeScale on the CALayerHost layer — creates a black rectangle artifact; the mirrored content does not transform
- Setting CALayerHost.bounds to the source layer size — no effect on rendering
- Reparenting the internal AVPictureInPictureCALayerHostView out of the host view — video disappears entirely
- Hiding the CALayerHost — PiP window goes white (confirming it is the sole video renderer)
- Resizing the source AVSampleBufferDisplayLayer to match the PiP window size — partially works (a 1:1 mirror of a smaller source fits), but causes visible lag during resize, affects the main window's "This video is playing in Picture in Picture" placeholder, and didTransitionToRenderSize stops being called after the initial resize

Expected behavior: The video content should be scaled to fit the PiP window, respecting the display layer's videoGravity setting (.resizeAspect), consistent with iOS PiP with AVSampleBufferDisplayLayer and macOS PiP with AVPlayerLayer (both work correctly).

Environment: macOS 26.4 (Tahoe) · Xcode 26.4 · Apple Silicon (M-series) · Retina display (contentsScale = 2.0) · Video: H.264 1280×720, hardware decoded via VTDecompressionSession, enqueued as CMSampleBuffer

Replies: 3 · Boosts: 0 · Views: 67 · Activity: 1d
Android MusicKit canSetRadioLikeState and setRadioLikeState
The Android MusicKit documentation describes two functions that are not actually exposed in the SDK:

https://aninterestingwebsite.com/musickit/android/com/apple/android/music/playback/controller/MediaPlayerController.html#canSetRadioLikeState--
https://aninterestingwebsite.com/musickit/android/com/apple/android/music/playback/controller/MediaPlayerController.html#setRadioLikeState-int-

Is the documentation stale, or is the SDK out of date?

Replies: 0 · Boosts: 0 · Views: 47 · Activity: 1d
After upgrade to iOS 26.4, averagePowerLevel and peakHoldLevel are stuck at -120
We have an application that captures audio and video. The app captures PCM audio from the internal or an external microphone and displays the audio level on screen. The app worked fine for many years, but after the iOS 26.4 upgrade, averagePowerLevel and peakHoldLevel are stuck at -120. Any suggestions?

Replies: 4 · Boosts: 3 · Views: 323 · Activity: 1d
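For context, a minimal sketch of the metering pattern this post describes, assuming a running AVCaptureSession whose audio output is connected; the function name is illustrative.

```swift
import AVFoundation

// Read the per-channel meters on an audio connection. Values are in dBFS,
// so a constant -120 means the channel is reporting effective silence.
func logAudioLevels(for output: AVCaptureAudioDataOutput) {
    guard let connection = output.connection(with: .audio) else { return }
    for channel in connection.audioChannels {
        print("avg: \(channel.averagePowerLevel) dB, peak: \(channel.peakHoldLevel) dB")
    }
}
```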
Technical guidance request: native screen capture protection on macOS with Flutter while allowing AirPlay
Hello Apple Developer Support,

I am reaching out for technical guidance regarding screen capture protection behavior on macOS. We are building a desktop application using Flutter on macOS, and we have implemented native Swift code inside the macOS Runner to protect sensitive content from screen recording and screen sharing. Our current implementation relies on native window-level protection and display state handling in Swift, while the main UI remains rendered by Flutter.

The main challenge we are facing:
- we need to keep strong native anti-recording protection on macOS
- the application is heavily used with AirPlay and screen mirroring
- currently, AirPlay/mirroring is often interpreted by the system similarly to screen capture or screen recording
- this causes our protected content to be replaced by a gray or blank area even during legitimate AirPlay usage

In practice, we would like to allow AirPlay and legitimate external display/mirroring usage, while still preventing screen recording, screen sharing, and unauthorized screen capture.

We would like to know whether Apple recommends an officially supported approach for this use case, preferably using public APIs. More specifically:
- Is there an officially supported way on macOS to distinguish AirPlay mirroring from screen recording/screen sharing?
- Is NSWindow.sharingType the recommended public API for this scenario?
- Is there a recommended approach when the UI surface is rendered through Flutter / Metal?
- Are there any best practices with ScreenCaptureKit for protecting content without affecting AirPlay?

We understand that some lower-level APIs may not be officially supported, so we would greatly appreciate guidance toward a public and future-proof implementation path. Thank you very much for your time and support.

Best regards, Tony

Replies: 0 · Boosts: 0 · Views: 39 · Activity: 2d
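A minimal sketch of the public window-level protection this post asks about, assuming an AppKit NSWindow is reachable from the Flutter Runner. Note that this excludes the window from all capture paths; on its own it does not distinguish AirPlay mirroring from screen recording.

```swift
import AppKit

// Excludes the window's content from screen capture, recording, and sharing.
func protectWindow(_ window: NSWindow) {
    window.sharingType = .none
}
```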
DJI Osmo Mobile 8 — DockKit motor control APIs not working (setAngularVelocity, setOrientation)
I'm developing an iOS app that uses Apple's DockKit framework to control gimbals. I've tested with the Insta360 Flow 2 Pro and the DJI Osmo Mobile 8. The Flow 2 Pro supports all DockKit motor control APIs — setAngularVelocity, setOrientation, setLimits — which lets my app do manual pan/tilt control via a virtual joystick. The Osmo Mobile 8 (model DS308, firmware 1.0.0) connects fine via DockKit and reports as docked, but every motor control API fails with "The device doesn't support the requested operation":

- setAngularVelocity — fails
- setOrientation(relative: true) — fails
- setLimits — fails

The only thing that works is Apple's system tracking (setSystemTrackingEnabled(true)) for automatic face/body following. This means there is no way for third-party apps to do manual gimbal control (pan/tilt via joystick) on the Osmo 8 through DockKit — only automatic tracking works. Questions (a probe sketch follows this post):

1. Is anyone else seeing the same limitation with the Osmo 8 and DockKit?
2. Has DJI confirmed whether manual motor control via DockKit is intentionally unsupported, or is this a firmware issue that might be addressed in an update?
3. Does the DJI Mimo app use DockKit for its tracking, or does it use a proprietary Bluetooth protocol?

Running iOS 26.4 on iPhone 15 Pro. Happy to share more technical details if helpful.

Replies: 0 · Boosts: 0 · Views: 57 · Activity: 2d
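A hedged probe for reproducing the failure, assuming a docked DockAccessory; the setAngularVelocity call follows the post's usage and DockKit's Spatial types, and its exact signature may differ from the shipping SDK.

```swift
import DockKit
import Spatial

// Attempt a small pan and report whether the accessory accepts motor control.
func probeMotorControl(_ accessory: DockAccessory) async {
    do {
        try await accessory.setAngularVelocity(Vector3D(x: 0, y: 0.2, z: 0))
        print("manual motor control accepted")
    } catch {
        // The Osmo Mobile 8 reportedly lands here with
        // "The device doesn't support the requested operation".
        print("setAngularVelocity failed: \(error)")
    }
}
```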
Does the OS have dedicated volume levels for each AVAudioSessionCategory?
We have a VoIP application that can be configured to amplify PCM samples before feeding them to the player, to achieve volume gain at the receiver. To support this, we do the following: if the user has configured this gain setting within the application, the application amplifies the samples to introduce the gain, and it also sets the audio session category to AVAudioSessionCategoryPlayback, provided the user has chosen the speaker as output. This worked for us, but we see a difference in behaviour with respect to the system volume level between OS 26.3.1 and OS 26.4. When the user chooses the earpiece as output, we set the category to AVAudioSessionCategoryPlayAndRecord; suppose the user then sets the volume to minimum. When the user switches the output to the speaker, we set the category to AVAudioSessionCategoryPlayback. The expectation is that the volume level should be the one previously set for AVAudioSessionCategoryPlayback; instead, the volume stays at the minimum that was set under AVAudioSessionCategoryPlayAndRecord. Could you please explain this inconsistency with respect to volume level?

Replies: 4 · Boosts: 0 · Views: 705 · Activity: 3d
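A minimal sketch of the route-dependent category switch described above; the function and flag are illustrative, and whether the system keeps a separate volume level per category is exactly the open question in the post.

```swift
import AVFoundation

// Switch the session category based on the user's chosen output route.
func configureSession(outputToSpeaker: Bool) throws {
    let session = AVAudioSession.sharedInstance()
    if outputToSpeaker {
        try session.setCategory(.playback, mode: .default, options: [])
    } else {
        // Earpiece route: a recording-capable category defaults to the receiver.
        try session.setCategory(.playAndRecord, mode: .voiceChat, options: [])
    }
    try session.setActive(true)
}
```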
Is 18MP Front Camera Capture Available to Third-Party Apps via AVFoundation?
Hi, I'm investigating whether 18MP photo capture from the front camera on iPhone 17 Pro is available to third-party apps using AVFoundation. I first inspected all available AVCaptureDevice formats, but I could not find any format corresponding to ~18MP resolution (e.g., around 4896×3672):

```swift
for format in device.formats {
    let desc = format.formatDescription
    let dims = CMVideoFormatDescriptionGetDimensions(desc)
    print("Format: \(dims.width) x \(dims.height)")
}
```

All reported formats appear to be limited to resolutions such as 4032×3024 (12MP) or below. Question: is 18MP front camera capture actually available to third-party apps via AVFoundation on iPhone 17?

Replies: 1 · Boosts: 0 · Views: 291 · Activity: 3d
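One hedged follow-up worth trying: high-resolution stills are sometimes exposed through a format's supportedMaxPhotoDimensions rather than through its video dimensions, so those values may exceed what the loop above prints.

```swift
import AVFoundation

// List the still-photo dimensions each format supports; these can exceed
// the format's video resolution when high-resolution capture is available.
func logMaxPhotoDimensions(for device: AVCaptureDevice) {
    for format in device.formats {
        for dims in format.supportedMaxPhotoDimensions {
            print("max photo: \(dims.width) x \(dims.height)")
        }
    }
}
```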
10-Bit UVC on iPadOS
Hello, I've been very familiar with UVC support in iPadOS ever since it launched in iOS 17. A number of people use the software I've developed around UVC, and there are often queries about 8-bit vs. 10-bit. My understanding is that the newest UVC spec is 1.5, standardised in 2012, and almost every UVC capture card runs at 8-bit. The only 10-bit capture card on my radar is the AJA U-Tap SDI; however, it looks like this is 10-bit only up until the UVC stage, where the 10-bit input is downsampled to 8-bit. I have read in some places that it works as a 10-bit capture card on macOS but not on iPadOS. I was just wondering whether 10-bit via UVC is even possible on iPadOS. If a true 10-bit source were passed into an iPad, would iPadOS allow it, or would AVFoundation downsample it so it can show up as a valid external video input? All USB capture cards I have encountered use one of the following formats:

- kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
- kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
- kCVPixelFormatType_32BGRA

So if a UVC device delivered a 10-bit format, would that be accessible on iPadOS, or would it fall back to these 8-bit formats by default? Thanks!

Replies: 1 · Boosts: 0 · Views: 505 · Activity: 4d
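One way to answer this empirically on a device: probe the external camera's formats for the 10-bit biplanar pixel formats. A minimal sketch, assuming an external (UVC) AVCaptureDevice has already been discovered.

```swift
import AVFoundation
import CoreMedia

// True if the device advertises any 10-bit 4:2:0 biplanar format.
func has10BitFormat(_ device: AVCaptureDevice) -> Bool {
    device.formats.contains { format in
        let subType = CMFormatDescriptionGetMediaSubType(format.formatDescription)
        return subType == kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange // 'x420'
            || subType == kCVPixelFormatType_420YpCbCr10BiPlanarFullRange  // 'xf20'
    }
}
```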
CarPlay outputs no audio
I have an application that includes custom artwork for the album cover and text details, set up with the MPRemoteCommandCenter.shared() reference. I need the user to have a full-featured "now playing" display to see all of this. My experience is that I cannot find a set of parameters for AVAudioSession.setCategory() that routes audio successfully and displays the full-featured now-playing deck. If I use .playAndRecord, the audio I send out plays on the radio, but the now-playing deck is empty, and nothing I do with the command center seems to change that. If I instead use .playback, I cannot use the .defaultToSpeaker option, which is the only way I've found to make the "now-playing" navigation button appear so that the full-featured deck will display. But of course setCategory() fails with an error saying .defaultToSpeaker is only available with .playAndRecord, so some default or intermediate state is entered: I see the full-featured deck, but no audio goes out to the radio. What combination is supposed to be used here? Or is this more likely a problem with thread use (@MainActor) and/or some ordering of operations that I've overlooked?

Replies: 0 · Boosts: 0 · Views: 63 · Activity: 4d
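A hedged sketch of the setup commonly suggested for CarPlay audio apps, plain .playback plus explicit now-playing metadata, offered as an assumption to test rather than a confirmed fix for this post.

```swift
import AVFoundation
import MediaPlayer

func configurePlaybackForCarPlay() throws {
    // .playback (without .defaultToSpeaker) lets the system route to the car.
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [])
    try AVAudioSession.sharedInstance().setActive(true)

    // The now-playing deck is driven by MPNowPlayingInfoCenter,
    // not by the session category.
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: "Track title"   // placeholder metadata
    ]
    MPRemoteCommandCenter.shared().playCommand.isEnabled = true
}
```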
Is iTunesTagging no longer supported?
I'm currently trying to develop iPod control functionality on an IVI system for a vehicle. From previous experience I remember we needed to implement iTunes Tagging, but since I can't find it in the Accessory Firmware Specification R46, I'm wondering whether iTunesTagging is no longer supported. Thanks in advance for your support!

Replies: 0 · Boosts: 0 · Views: 19 · Activity: 4d
AVKit crash when rendering AVPlayerView controls — macOS 26.4 regression
Description: Our app, Octory, allows users to create onboarding and communication workflows composed of slides containing various UI components, including embedded video players powered by AVPlayerView. Since macOS 26.4 Beta, the app crashes at launch whenever a workflow contains a video component. Workflows without video components load and render without issue, which points to a regression in AVKit's player control rendering pipeline. Has anyone seen similar behaviour when using AVKit, or is it something we are not doing properly?

Expected Behavior: The app opens and renders the workflow, including the embedded video component.

Actual Behavior: The app briefly launches and immediately crashes with SIGABRT on the main thread.

Crash Analysis — key takeaways for anyone investigating. Root cause: abort() inside NSImageSymbolConfiguration. The crash occurs entirely within Apple frameworks, with no third-party code in the faulting call chain (aside from the app's entry point). The sequence is:

1. AVPlayerItem finishes loading and fires a KVO notification that it is ready to play (_updateCanPlayAndCanStepPropertiesWhenReadyToPlayWithNotificationPayload:)
2. This KVO chain propagates through NSKeyValueDidChange up to AVPlayerView, which calls _updateVideoGravityType
3. AVPlayerControlsViewController responds by calling _updateZoomButtonImage, which asks AVPlayerControlsConfigurator for a configured SF Symbol via +[NSImage(AVAdditions) avkit_imageWithSymbolName:textStyle:scale:accessibilityDescription:]
4. The symbol rendering hits -[NSImageSymbolConfiguration _getEffectivePointSize:glyphWeight:glyphSize:backfilledWithFont:scale:], which calls abort()

The entire crash stack is in AppKit (NSImage / NSImageSymbolConfiguration) and AVKit — no application code is involved in the faulting path. The abort() suggests a precondition or assertion failure inside the symbol configuration logic, possibly due to an invalid or nil font/text style being passed during the zoom button image setup. This is triggered automatically the moment an AVPlayerItem becomes ready to play and AVKit attempts to render its transport controls. There is no way for the application to intercept or work around this.

Relevant stack frames (Thread 0 — main thread):

```
3  AppKit  -[NSImageSymbolConfiguration _getEffectivePointSize:glyphWeight:glyphSize:backfilledWithFont:scale:] + 440  ← abort() here
4  AppKit  -[NSImageSymbolRepProvider _bestRepresentationForImage:hints:] + 404
11 AVKit   +[NSImage(AVAdditions) avkit_imageWithSymbolName:textStyle:scale:accessibilityDescription:] + 332
12 AVKit   -[AVPlayerControlsConfigurator configuredSymbolForImageName:] + 92
13 AVKit   -[AVPlayerControlsViewController _updateZoomButtonImage] + 160
14 AVKit   -[AVPlayerControlsViewController setVideoGravityType:] + 52
15 AVKit   -[AVPlayerView _updateVideoGravityType] + 1056
28 AVFCore -[AVPlayerItem didChangeValueForKey:] + 56
29 AVFCore -[AVPlayerItem _updateCanPlayAndCanStepPropertiesWhenReadyToPlayWithNotificationPayload:updateStatusToReadyToPlay:] + 660
```

Additional Notes:
- Removing the video component from the workflow (i.e. not instantiating AVPlayerView) resolves the crash entirely.
- The crash is 100% reproducible on every launch.
- This behavior was not present on macOS 26.3 or any prior release.
- The app was not recompiled — the same binary that works on 26.3 crashes on 26.4 Beta and 26.4.

Environment:
- OS: macOS 26.4
- Hardware: MacBook Pro M1 (MacBookPro17,1)

Replies: 2 · Boosts: 0 · Views: 97 · Activity: 5d
ApplicationMusicPlayer.shared player.play() permission denied in app sandbox (Tauri)
Hi, I'm developing a Tauri v2 app on macOS and want to implement playback controls. It seems that Apple locks down playback, requiring a signed application. My app also has the capability to "get currently playing track", and I confirmed this works; Apple produces a popup triggered by my await MusicAuthorization.request() call. It returns nil, of course, because I can't get anything to play via ApplicationMusicPlayer; only through the system's Apple Music app. I understand SystemMusicPlayer is not available on macOS, which is fine. I'm just a little confused, as it seems pretty standard to need to test playback controls quickly without having to codesign and do provisioning-profile embedding acrobatics each time Rust recompiles target/debug. This slows down development a lot. I do have these entries in my Entitlements.plist:

```xml
<key>com.apple.security.personal-information.media-library</key>
<true/>
<key>com.apple.developer.music-kit</key>
<true/>
<key>com.apple.security.app-sandbox</key>
<true/>
```

In my tauri.conf.json, I have:

```json
"macOS": {
  "entitlements": "./Entitlements.plist",
  "signingIdentity": "Apple Development: ()"
}
```

My application works like this: a temporary button click fires a Tauri invoke() command, which goes to a #[tauri::command], which bridges to Swift code. Again, I validated that my less-permissive "get currently playing track" works; i.e., it does not get permission denied. Exact error message: [swift] playMedia error: .permissionDenied (specifically, ".permissionDenied"). My code to trigger playback of a specific media item:

```swift
Task {
    print("[swift] entered sema Task")
    let status: MusicAuthorization.Status = await MusicAuthorization.request()
    print("auth status: \(status)")
    guard status == .authorized else { sema.signal(); return }
    print("passed the status guard.")
    do {
        var request = MusicCatalogResourceRequest<Song>(matching: \.id, equalTo: MusicItemID(rawValue: songId))
        request.limit = 1
        let response = try await request.response()
        guard let song = response.items.first else { sema.signal(); return }
        let player = ApplicationMusicPlayer.shared
        player.queue = [song]
        try await player.play()
        success = true
    } catch {
        print("[swift] playMedia error: \(error)")
    }
    sema.signal()
}
```

Replies: 3 · Boosts: 0 · Views: 424 · Activity: 6d
How to hide route button `showsRouteButton = false` in `MPVolumeView` without deprecation warning?
MPVolumeView's showsRouteButton is deprecated (https://aninterestingwebsite.com/documentation/mediaplayer/mpvolumeview/showsroutebutton?language=objc), and it's not clear how we can now hide this button without a deprecation warning. The documentation is lacking. Please advise. Thank you!

Replies: 4 · Boosts: 0 · Views: 310 · Activity: 1w
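A hedged note: the route button's functionality moved to AVKit's AVRoutePickerView, so one common approach is to use MPVolumeView purely as a volume slider and add routing UI only where it is wanted. This is an assumption about the intended migration, not confirmed guidance from the thread.

```swift
import AVKit
import MediaPlayer

// Volume slider only; on recent iOS versions MPVolumeView no longer
// renders the route button itself.
let volumeView = MPVolumeView(frame: .zero)

// Routing UI lives in a separate view; omit it to "hide" the route button.
let routePicker = AVRoutePickerView(frame: .zero)
```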
I am sending a CDN token and I am getting an error from Apple TV (tvOS)
The URL that I am using to play content in AVPlayer:

https://vodc.dp.sooka.my/wmt:eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE3NzIzODUwMzIsImlzcyI6IlZSIiwiZXhwIjoxNzcyNDEzODMyLCJ3bXZlciI6Mywid21pZGZtdCI6ImFzY2lpIiwid21pZHR5cCI6MSwid21rZXl2ZXIiOjMsIndtdG1pZHZlciI6NCwid21pZGxlbiI6NTEyLCJ3bW9waWQiOjMyLCJ3bWlkIjoiMGY1OWU0YjctNTAyMC00ZDU3LWE2ZTktNzJhNzZmY2U3ZTdlIiwiZmlsdGVyIjoiKHR5cGU9PVwidmlkZW9cIiYmRGlzcGxheUhlaWdodDw9MjE2MCl8fCh0eXBlPT1cImF1ZGlvXCImJmZvdXJDQyE9XCJhYy0zXCIpfHwodHlwZSE9XCJ2aWRlb1wiJiZ0eXBlIT1cImF1ZGlvXCIpIiwicGF0dGVybiI6IjJhMmEyZDQyZGY5ZmQ5MGE1MTgzMDllYTE1MTE1YTc2LTczYTU2ZTY2NjY1NTQ5MjgwZTAwLTEwIn0.FSgRrQeFHLhmrBuDFsMKZGFh4eUrCk9PgTxIyFTP8yk/2a2a2d42df9fd90a518309ea15115a76-73a56e66665549280e00-10/2a2a2d42df9fd90a518309ea15115a76-73a56e66665549280e00-10/index.m3u8

I am getting the error below:

```
"timestamp":1772385202.085278,"message":"Playback failed. unsupported URL","data":{"message":"unsupported URL","code":-1002,"underlyingError":{"domain":"NSURLErrorDomain","description":"Error Domain=NSURLErrorDomain Code=-1002 \"unsupported URL\" UserInfo={NSLocalizedDescription=unsupported URL, NSUnderlyingError=0x301f08870 {Error Domain=kCFErrorDomainCFNetwork Code=-1002 \"unsupported URL\"
```

Please help me understand the reason for this error.

Replies: 2 · Boosts: 0 · Views: 212 · Activity: 1w
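NSURLErrorUnsupportedURL (-1002) generally means Foundation rejected the URL string before any network or playback work happened, so a reasonable first check, sketched below with a placeholder URL rather than the real token, is whether the token-bearing path survives URL(string:) at all.

```swift
import Foundation

let raw = "https://example.com/wmt:TOKEN/index.m3u8"  // placeholder, not the post's URL

if let url = URL(string: raw), url.scheme == "https" {
    print("parsed host: \(url.host ?? "-"), path: \(url.path)")
} else {
    // If parsing fails, percent-encode the path segments before building
    // the AVPlayerItem.
    print("URL(string:) returned nil; the string needs encoding")
}
```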
ScreenCaptureKit System Audio Capture Crashes with EXC_BAD_ACCESS
Summary: When using ScreenCaptureKit to capture system audio for extended periods, the application crashes with EXC_BAD_ACCESS in Swift's error-handling runtime. The crash occurs in swift_getErrorValue when trying to process an error from the SCStream delegate method didStopWithError. This appears to be a framework-level issue in ScreenCaptureKit or its underlying ReplayKit implementation.

Environment: macOS Sonoma 14.6.1 · Swift 5.8 · ScreenCaptureKit framework

Detailed Description: Our application captures system audio using ScreenCaptureKit's audio capture capabilities. After successfully capturing for several minutes (typically after 3-4 segments of 60-second recordings), the application crashes with an EXC_BAD_ACCESS error. The crash happens when the Swift runtime attempts to process an error in the SCStreamDelegate.stream(_:didStopWithError:) method. The crash consistently occurs in swift_getErrorValue when attempting to access the class of what appears to be a null object. This suggests that the error being passed from the system framework to our delegate method is malformed or contains invalid memory.

Steps to Reproduce:
1. Create an SCStream with audio capture enabled
2. Add audio output to the stream
3. Start capture and write audio data to disk
4. Allow the capture to run for several minutes (3-5 minutes typically triggers the issue)
5. The app will crash with EXC_BAD_ACCESS in swift_getErrorValue

Code Sample:

```swift
func stream(_ stream: SCStream, didStopWithError error: Error) {
    print("Stream stopped with error: \(error)")  // Crash occurs before this line executes
}

func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
    guard type == .audio, sampleBuffer.isValid else { return }
    // Process audio data...
}
```

Expected Behavior: The error should be properly propagated to the delegate method, allowing for graceful error handling and recovery.

Actual Behavior: The application crashes with EXC_BAD_ACCESS when the Swift runtime attempts to process the error in swift_getErrorValue.

Crash Log Details:

```
Thread #35, queue = 'com.apple.NSXPCConnection.m-user.com.apple.replayd', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
frame #0: 0x0000000194c3088c libswiftCore.dylib`swift::_swift_getClass(void const*) + 8
frame #1: 0x0000000194c30104 libswiftCore.dylib`swift_getErrorValue + 40
frame #2: 0x00000001057fba30 shadow`NewScreenCaptureService.stream(stream=0x0000600002de6700, error=Swift.Error @ 0x000000016b7b5e30) at NEW+ScreenCaptureService.swift:365:15
frame #3: 0x00000001057fc050 shadow`@objc NewScreenCaptureService.stream(_:didStopWithError:) at <compiler-generated>:0
frame #4: 0x0000000219ec5ca0 ScreenCaptureKit`-[SCStreamManager stream:didStopWithError:] + 456
frame #5: 0x00000001ca68a5cc ReplayKit`-[RPScreenRecorder stream:didStopWithError:] + 84
frame #6: 0x00000001ca696ff8 ReplayKit`-[RPDaemonProxy stream:didStopWithError:] + 224
```

(An lldb attempt to print stream._streamQueue in the same session failed with Swift/ObjC interop errors, "'id' is unavailable in Swift".)

Before the crash, we observed this error message in the console:

```
[ERROR] *****SCStream*****RemoteAudioQueueOperationHandlerWithError:1015 Error received from the remote queue -16665
```

Additional Context:
- The issue occurs consistently after approximately 3-4 successful audio segment recordings of 60 seconds each
- Commenting out custom segment rotation logic does not prevent the crash
- The crash involves XPC communication with Apple's ReplayKit daemon
- The error appears to be corrupted or malformed when crossing the XPC boundary

Workarounds Attempted:
- Added proper thread safety for all published properties using DispatchQueue.main.async
- Implemented more robust error handling in the delegate methods

None of these approaches prevented the crash, since it occurs at the Swift runtime level before our code executes.

Impact: This issue prevents reliable long-duration audio capture using ScreenCaptureKit and significantly limits its usefulness for any application requiring continuous system audio capture for more than a few minutes. It might be related to a macOS bug where the system dialog indicates that the screen is being shared even though nothing is actually being shared; moreover, when attempting to stop sharing, nothing happens.

Replies: 3 · Boosts: 0 · Views: 813 · Activity: 1w
Mixing ScreenCaptureKit audio with microphone audio
Hi, I'm new to AVAudioEngine (and macOS programming in general). I'm trying to mix microphone audio with ScreenCaptureKit audio using AVAudioEngine without playing it back. I've created an AVAudioPlayerNode and schedule buffers in my SCStream handler:

```swift
playerNode.scheduleBuffer(samples)
```

and have connected the playerNode to the mainMixerNode:

```swift
audioEngine.connect(audioEngine.inputNode, to: audioEngine.mainMixerNode, format: micFormat)
audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: format)
```

The problem is that mainMixerNode plays the audio through the speaker, creating a feedback loop. How can I prevent the mixer output from being played back? Also: is this the best way of mixing microphone input with some other input? I ran into AVAudioEngine's manual rendering mode, which seems like the way to go for mixing audio without playing it back, but I couldn't figure out how to connect microphone input to the AVAudioEngine in manual rendering mode.

Replies: 1 · Boosts: 0 · Views: 1.2k · Activity: 1w
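A hedged sketch of the manual-rendering approach the post mentions: the engine renders into buffers you pull instead of the speakers, and in this mode the microphone is supplied to inputNode via setManualRenderingInputPCMFormat(_:inputBlock:) rather than pulled from hardware automatically. Names are illustrative.

```swift
import AVFoundation

// Build an engine that mixes a player node (ScreenCaptureKit buffers) with
// mic input, rendered offline so nothing reaches the speakers.
func makeOfflineMixer() throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!

    // Enable manual rendering before wiring the graph.
    try engine.enableManualRenderingMode(.offline, format: format,
                                         maximumFrameCount: 4096)

    let sckPlayer = AVAudioPlayerNode()          // fed from the SCStream handler
    engine.attach(sckPlayer)
    engine.connect(sckPlayer, to: engine.mainMixerNode, format: format)
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)

    // In manual rendering, the app supplies mic samples on demand.
    _ = engine.inputNode.setManualRenderingInputPCMFormat(format) { frameCount in
        // Return a pointer to `frameCount` frames of captured mic audio,
        // or nil when no data is available yet.
        nil
    }

    try engine.start()
    sckPlayer.play()
    return engine
}

// Pull the mix instead of playing it:
//   let status = try engine.renderOffline(4096, to: outputBuffer)
```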
The audio of FairPlay protected content can be captured - Safari on iOS
Hi, Has anyone been able to protect the audio part of FairPlay protected content from being captured as part of screen recording on Safari/iOS (PWA and/or online web app)? We have tried many things but could not prevent the audio from being recorded. Same app and content on Safari/Mac does not allow audio to be recorded. Any tips?
Replies: 1 · Boosts: 0 · Views: 187 · Activity: 1w
SpeechTranscriber/SpeechAnalyzer being relatively slow compared to FoundationModel and TTS
So, I've been wondering how fast an offline STT -> ML prompt -> TTS roundtrip would be. Interestingly, in many tests the SpeechTranscriber (STT) takes the bulk of the time, compared to generating a FoundationModel response and creating the audio using TTS. E.g.:

InteractionStatistics:
- listeningStarted: 21:24:23 4480 2423
- timeTillFirstAboveNoiseFloor: 01.794
- timeTillLastNoiseAboveFloor: 02.383
- timeTillFirstSpeechDetected: 02.399
- timeTillTranscriptFinalized: 04.510
- timeTillFirstMLModelResponse: 04.938
- timeTillMLModelResponse: 05.379
- timeTillTTSStarted: 04.962
- timeTillTTSFinished: 11.016
- speechLength: 06.054
- timeToResponse: 02.578
- transcript: This is a test.
- mlModelResponse: Sure! I'm ready to help with your test. What do you need help with?

Here, between my audio input ending and the text-to-speech starting to play (using AVSpeechUtterance), the total response time was 2.5s. Of that time, the SpeechAnalyzer took 2.1s to finalize the transcript, while the FoundationModel took only 0.4s to respond (and TTS started playing nearly instantly). I'm already using reportingOptions: [.volatileResults, .fastResults], so it's probably as fast as possible right now? I'm just surprised the STT takes so much longer than the other parts (they're all CoreML-based, aren't they?)

Replies: 3 · Boosts: 0 · Views: 723 · Activity: 1w