Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post · Replies · Boosts · Views · Activity

visionOS Enterprise API Not Working
My development team admin requested the Enterprise API for camera access on the Vision Pro. The request was granted, and we received a usage license and integration instructions, which we followed. Even when I download and run the sample project for "Accessing the Main Camera" and follow the exact instructions here: https://aninterestingwebsite.com/documentation/visionos/accessing-the-main-camera, I am unable to receive camera frames. I added the capabilities, created a new provisioning profile with this access, added the entitlements to the Info.plist and entitlements file, replaced the dummy license file with the one we were sent, and have a matching bundle identifier and development certificate, but camera access still isn't granted. "Main Camera Access" shows up in our Signing & Capabilities tab, we added the NSMainCameraDescription key to the Info.plist, and we allow access when opening the app. None of this works: not in my app, and not in the sample app I downloaded and ran on the Vision Pro after replacing the dummy license file. (A diagnostic sketch follows this entry.)
3
0
890
Dec ’25
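One way to narrow down a failure like this is to check support and authorization at runtime before requesting frames. A minimal sketch, assuming the Enterprise license and entitlement are in place and following the pattern from the "Accessing the Main Camera" sample (the function name is a placeholder):

import ARKit

// Hedged diagnostic sketch: print where main camera access breaks down.
// Assumes the Enterprise license file and camera entitlement are in the build.
func checkMainCameraAccess() async throws {
    guard CameraFrameProvider.isSupported else {
        print("CameraFrameProvider is not supported on this device/OS.")
        return
    }
    let session = ARKitSession()
    // A .denied result here usually points at an entitlement/provisioning problem.
    let auth = await session.queryAuthorization(for: [.cameraAccess])
    print("Camera authorization:", auth)

    let provider = CameraFrameProvider()
    try await session.run([provider])

    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    guard let format = formats.first,
          let updates = provider.cameraFrameUpdates(for: format) else {
        print("No supported format or no frame updates.")
        return
    }
    for await frame in updates {
        if let sample = frame.sample(for: .left) {
            print("Received a frame:", sample.pixelBuffer)
            break
        }
    }
}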
Auto Rig Pro Mixamo Animation to RC Pro
Hi! I have been struggling with this for a little while, and most of what I've found has not helped much. I hope to find more success here. Essentially, I have a model I've made in Blender. I rigged it using Auto Rig Pro, and I've also used ARP to add a Mixamo animation to it. That all works fine in Blender. However, when I try to import this model into Reality Composer Pro, I don't get the animation: the "default subtree animation" is completely empty. I attribute this to my lack of experience in this field, but here's what I've attempted so far: pushing the Mixamo keyframes into an NLA strip (I'm pretty sure this is the correct line of action, but I'm definitely not doing something right), and baking the animation. I've made sure that I have animation checked when I export the model! Any ideas or reference projects would be lovely; I haven't really found much that has pushed me in the right direction. This project is, unfortunately, kind of time sensitive, so I would appreciate help ASAP. Thank you, and let me know if I can add any more context! (A quick sanity check is sketched after this entry.)
3
0
543
Jan ’26
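One quick way to tell whether the export or the Reality Composer Pro import is dropping the animation: load the exported USD with RealityKit and print its animations. A minimal sketch (the file path is a placeholder); if the array is empty, the animation was lost on the Blender export side before RCP ever saw the file:

import RealityKit

// Hedged check: inspect the exported file directly.
func checkExportedAnimations() async throws {
    let url = URL(fileURLWithPath: "/path/to/model.usdc") // placeholder path
    let entity = try await Entity(contentsOf: url)
    // An empty array means the export never contained the animation.
    print(entity.availableAnimations)
}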
Is ARGeoTrackingConfiguration always more accurate than ARWorldTrackingConfiguration for world scale AR?
We are working on a world-scale AR app that uses the device location and heading to place objects in the streets, so that they are correctly and stably anchored to certain locations. Since geo-tracking imagery is only available in certain cities and areas, we are trying to figure out how to fall back when geo tracking becomes unavailable as the device moves away, while still retaining good AR camera accuracy. We might need to come up with some algorithm using the device GPS to line up the ARCamera with our objects. Question: does geo tracking always provide accuracy greater than or equal to that of world tracking, for a GPS outdoor AR experience? If so, we can simply use ARGeoTrackingConfiguration the entire time and rely on the ARView keeping itself aligned. Otherwise, we need to switch between it and ARWorldTrackingConfiguration when geo tracking is not available and/or its accuracy is low, then roll our own algorithm to keep the camera aligned (a fallback sketch follows this entry). Thanks.
3
0
1.1k
Oct ’25
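For the fallback case described above, a common pattern is to check geo-tracking availability up front and watch ARGeoTrackingStatus at runtime, switching to world tracking when geo tracking drops out or degrades. A minimal sketch, assuming an ARSession whose delegate is this class:

import ARKit

// Hedged sketch: prefer geo tracking where available, fall back to world tracking.
final class TrackingFallback: NSObject, ARSessionDelegate {
    func configure(_ session: ARSession) {
        ARGeoTrackingConfiguration.checkAvailability { available, _ in
            let config: ARConfiguration = available
                ? ARGeoTrackingConfiguration()
                : ARWorldTrackingConfiguration()
            session.run(config)
        }
    }

    func session(_ session: ARSession, didChange geoTrackingStatus: ARGeoTrackingStatus) {
        // Fall back when geo tracking is unavailable or its accuracy is low;
        // a custom GPS-based realignment would take over from here.
        if geoTrackingStatus.state == .notAvailable || geoTrackingStatus.accuracy == .low {
            session.run(ARWorldTrackingConfiguration())
        }
    }
}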
RealityView content disappears when selecting Lock In Place on visionOS
Hi, I'm experiencing an issue where all RealityView content disappears when the user selects "Lock In Place" from the window management menu (long press on the close button). "Follow Me" works correctly. This happens in TestFlight builds only; it's not reproducible when I run locally. I have reproduced this with a minimal project containing nothing but a simple red cube: no custom anchors, no app state, no dependencies.

Steps to Reproduce:
1. Open an ImmersiveSpace. A red cube is placed 1m in front of the user via RealityView.
2. Long press the X button on any floating window.
3. Select "Lock In Place". The cube disappears immediately.

Expected: cube remains visible after the window is locked.
Actual: cube disappears.

Minimal reproducible code:

var body: some View {
    RealityView { content in
        let cube = ModelEntity(
            mesh: .generateBox(size: 0.3),
            materials: [SimpleMaterial(color: .red, isMetallic: false)]
        )
        cube.setPosition(SIMD3<Float>(0, 1.5, -1), relativeTo: nil)
        content.add(cube)
    }
}

Device: Apple Vision Pro
visionOS version: 26.2 (23N301)
Xcode version: 26.3 (17C529)

Is this a known issue? Is there a recommended workaround to preserve RealityView content during Lock In Place transitions? Thank you!
3
0
518
1w
Ornaments in Presentations
We can add ornaments to popovers shown by PresentationComponent, but I'm not sure if we should. While working on the editor for entities in a Volume-based app, I had the idea to add ornaments to the presented views. The entire app exists inside a volume. A user can tap an item to present a popover UI to edit it. This is displayed using the new PresentationComponent in visionOS 26. Ornaments have a new attachment anchor option this year: .parent().

.ornament(attachmentAnchor: .parent(.top), ornament: {...})

This works well in the Simulator. We can add ornaments around this popover view just like we would with a window. Unfortunately, when I run this on device I get a different experience: any part of the ornament that overlaps the popover content isn't rendered correctly. Sometimes it entirely disappears; other times it becomes partially transparent. We could use content alignment to try to make sure the ornament doesn't overlap the popover content:

.ornament(attachmentAnchor: .parent(.top), contentAlignment: .bottom, ornament: {...})

This works sometimes, but not all the time. It's not clear if this is a bug, because I'm not sure if we are even supposed to be able to use ornaments in this way. Here is my hierarchy (sketched in code after this entry):

- An app opens as a Volume
- The volume presents a RealityView, with its own ornament using the .scene() anchor
- Multiple entities with a PresentationComponent show an edit view
- The view uses the .parent() anchor to add ornaments

What makes me unsure is that other methods for drawing UI in a RealityView don't seem to work with ornaments. For example, if I add an attachment to show a view with the ornament, even when I use the .parent() anchor, the ornament is anchored to the volume, not the attachment view. So what do we think? Is this a rendering bug? Are ornaments intended to work with attachments and presentations?
2
0
370
Aug ’25
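For readers trying to reproduce the hierarchy above, here is a rough sketch. The PresentationComponent initializer shape is an assumption (its parameter names may differ); the ornament modifier usage matches the post, and EditView is a placeholder:

import SwiftUI
import RealityKit

// Placeholder edit UI with the ornament anchored to its presentation.
struct EditView: View {
    var body: some View {
        Form { Text("Edit entity") }
            .ornament(attachmentAnchor: .parent(.top), contentAlignment: .bottom) {
                Text("Ornament").glassBackgroundEffect()
            }
    }
}

// Hedged: attach the popover to an entity; the initializer below is assumed.
func attachEditor(to entity: Entity) {
    entity.components.set(
        PresentationComponent(
            configuration: .popover(arrowEdge: .bottom),
            content: EditView()
        )
    )
}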
Possible Bug - Hover Effects/Spatial Event Compatibility with PSVR2 Controllers?
Hi, I would like clarification on whether the new hover effects feature introduced in visionOS 26 supports pinch gestures through the PSVR2 controllers. In your sample application, I found that this was not working: pulling the trigger on the controller while looking at the 3D object did not activate the hover effect spatial event (the object shows the highlight, though); only pinch-clicking with my fingers seems to register and trigger the spatial event. I am using visionOS 26.3. This is inconsistent with how the PSVR2 controller behaves on SwiftUI views and UI elements, where the trigger press does count as a button click. The sample I used was this one: https://aninterestingwebsite.com/documentation/compositorservices/rendering_hover_effects_in_metal_immersive_apps (an event-logging sketch follows this entry).
2
0
821
Mar ’26
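One way to gather evidence for a report like this is to log what the trigger pull actually produces, if anything. A hedged sketch for a Compositor Services app, assuming access to the layerRenderer from CompositorLayer:

import CompositorServices

// Log incoming spatial events so a controller trigger pull can be
// compared against a hand pinch.
func installEventLogging(on layerRenderer: LayerRenderer) {
    layerRenderer.onSpatialEvent = { events in
        for event in events {
            // A hand pinch typically arrives as an indirect pinch; whether the
            // PSVR2 trigger maps to one is exactly the open question here.
            print("kind:", event.kind, "phase:", event.phase)
        }
    }
}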
Misaligned visionOS Simulator Home Position
Using Xcode 26 Beta 6 on macOS 26 Beta (25a5349a). When pressing the home button of the visionOS simulator, I am not positioned in the middle of the room like I normally would be. This occurred after moving around the space a lot to find an element added to an ImmersiveSpace. How to resolve: restart the simulator device. See the attached pictures, visionOSSimulatorCorrectHomePosition and visionOSSimulatorMisallignedHomePosition.
2
0
895
Sep ’25
ImageAnchoringSource from URL
Hello, I was wondering how I can initialize an ImageAnchoringSource using https://aninterestingwebsite.com/documentation/realitykit/anchoringcomponent/imageanchoringsource/init(_:). When I construct one using a URL, it doesn't seem to be tracked, and I see the following when I debug print the component:

▿ 0 : AnchoringComponent
  ▿ target : Target
    ▿ referenceImage : 1 element
      ▿ from : ImageAnchoringSource
        ▿ url : Optional<URL>
          ▿ some : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png
            - _url : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png
          - _parseInfo : nil
          - _baseParseInfo : nil
        - name : nil
        - group : nil
  ▿ trackingMode : TrackingMode
    - trackingMode : 2

Is there a specific format for the parseInfo? When I use the same image to make an image anchoring source by group and name in AR Resources, it is tracked. (A usage sketch follows this entry.) Thank you!
2
1
1.2k
Mar ’26
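For reference, this is how the URL-based initializer appears to be meant to be used, based on the linked documentation and the debug output above (the path comes from the caller). Whether the file needs embedded physical-size metadata, which the AR Resources group supplies for you, is the open question:

import RealityKit

// Hedged sketch: anchor an entity to an image file on disk.
func anchor(_ entity: Entity, toImageAt url: URL) {
    let source = AnchoringComponent.ImageAnchoringSource(url)
    entity.components.set(AnchoringComponent(.referenceImage(from: source)))
}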
How do we use the new Unified Coordinate Conversion features in visionOS 26?
The landing page for visionOS 26 mentions: "The Unified Coordinate Conversion API makes moving views and entities between scenes straightforward — even between views and ARKit accessory anchors." This WWDC session very briefly shows a single example of using this, but with no context. For example, they discuss a way to tell the distance between a Model3D and an entity in a RealityView, but they don't provide any details for how they are referencing the entity (bolts in the slide). The session used the BOT-anist example project that we saw in visionOS 2, but the version in the Sample Code library has not been updated with these examples.

I was able to put together a simple example where we can get the position of a window relative to the world origin. It even updates when the user recenters.

struct Lab080: View {
    @State private var posX: Float = 0
    @State private var posY: Float = 0
    @State private var posZ: Float = 0

    var body: some View {
        GeometryReader3D { geometry in
            VStack {
                Text("Unified Coordinate Conversion")
                    .font(.largeTitle)
                    .padding(24)
                VStack {
                    Text("X: \(posX)")
                    Text("Y: \(posY)")
                    Text("Z: \(posZ)")
                }
                .font(.title)
                .padding(24)
            }
            .onGeometryChange3D(for: Point3D.self) { proxy in
                try! proxy
                    .coordinateSpace3D()
                    .convert(value: Point3D.zero, to: .worldReference)
            } action: { old, new in
                posX = Float(new.x)
                posY = Float(new.y)
                posZ = Float(new.z)
            }
        }
    }
}

This is all that I've been able to figure out so far. What other features are included in this new Unified Coordinate Conversion? Can we use this to get the position of one window relative to another? Can we use this to get the position of a view in a window relative to an entity in a RealityView, for example in a Volume or an Immersive Space? What else can Unified Coordinate Conversion do? Are there documentation pages that I'm missing? I'm not sure what to search for. Are there any sample projects that use these features? Any additional information would be very helpful.
2
5
1.5k
Sep ’25
Metal Compositor Service & Persona (VisionOS)
Hello, I'm currently trying to make a collaborative app, but it only works in a RealityView. When I tried to use a CompositorLayer like below, the Personas disappeared.

ImmersiveSpace(id: "ImmersiveSpace-Metal") {
    CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
        SpatialRenderer_InitAndRun(layerRenderer)
    }
}

Is there any potential solution to see Personas in a Metal view? Thanks in advance!
2
0
766
Sep ’25
Help with visionOS pushWindow issues requested
I first started using the SwiftUI pushWindow API in visionOS 26.2, and I've reported several bugs I discovered, listed below. Under certain circumstances, pushed window relationships may break, and this behavior affects all other apps, not just the app that caused the problem, until the next device reboot. In other cases, the system may crash and restart.

(FB21287011) When a window presented with pushWindow is dismissed, its parent window reappears in the wrong location
(FB21294645) Pinning a pushed window to a wall breaks pushWindow for all other apps on the system
(FB21594646) pushWindow interacts poorly with the window bar close app option
(FB21652261) If a window locked to a wall calls pushWindow, the original window becomes unlocked
(FB21652271) If a window locked in place calls pushWindow and the pushed window is closed, the system freezes
(FB21828413) pushWindow, UIApplication.open, and a dismissed immersive space result in multiple failures that require a device reboot
(FB21840747) visionOS randomly foregrounds a backgrounded immersive space app with a pushed window's parent window visible instead of the pushed window
(FB21864652) When a running app is selected in the visionOS home view, windows presented with pushWindow spontaneously close
(FB21873482) Pushed windows use the fixed scaling behavior instead of the dynamic scaling behavior

I'm posting the issues here in case this information is helpful to other developers. I'd also like to hear about other pushWindow issues developers have encountered, so I can watch out for them.

Questions: I've discovered that some of the issues above can be partially worked around by applying the defaultLaunchBehavior and restorationBehavior scene modifiers to suppress window restoration and locking, which pushWindow appears to interact poorly with (a sketch of this workaround follows this entry). Are there other recommended workarounds? I've observed that the Photos and Settings apps, which predate the pushWindow API, are not affected by the issues I reported. Are there other, more reliable ways I could achieve the same behavior as pushWindow without relying on that API? I'd appreciate any guidance Apple engineers could provide. Thank you.
2
3
859
1w
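For context, the partial workaround mentioned in the questions looks roughly like this: suppress launch and restoration on the scene backing the pushed window. A minimal sketch; the scene ids and views are placeholders:

import SwiftUI

struct MainView: View { var body: some View { Text("Main") } }
struct PushedDetailView: View { var body: some View { Text("Detail") } }

@main
struct PushWindowApp: App {
    var body: some Scene {
        WindowGroup(id: "main") { MainView() }
        WindowGroup(id: "pushed-detail") { PushedDetailView() }
            .defaultLaunchBehavior(.suppressed)  // never launched on its own at startup
            .restorationBehavior(.disabled)      // opt out of restoration, which pushWindow trips over
    }
}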
Triangle Recommendation Limit for M5 Vision Pro
For the M2 Apple Vision Pro, there's a general guideline: "we recommend no more than 500 thousand triangles for an immersive scene, with 250 thousand for applications in the shared space" (https://aninterestingwebsite.com/videos/play/wwdc2024/10186/?time=147). Is there a revised recommendation for the M5 Apple Vision Pro?
2
1
802
Nov ’25
SharePlay on visionOS with remote participants
Hi everyone, I'm building a visualization app for the Vision Pro that uses SharePlay and GroupActivities to explore datasets collaboratively. I've successfully implemented the new SharedWorldAnchor feature, and everything works well with nearby, local participants. However, I'm stuck on one point: how can I share a world anchor with remote participants who join via FaceTime as spatial Personas? Apple's demo app (where multiple users move a plane model around) seems to suggest that this is possible. For context, I'm building an immersive app with Metal rendering (a related SharePlay sketch follows this entry). Any guidance or examples would be greatly appreciated! Thanks, Jens
2
1
471
Sep ’25
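Not a direct answer to the world-anchor question, but for remote spatial Personas the usual starting point is opting the session into the group immersive space via SystemCoordinator, after which the system coordinates a shared spatial context for FaceTime participants. A minimal sketch using the GroupActivities API:

import GroupActivities

// Hedged sketch: enable the group immersive space for a SharePlay session.
func configureSpatialPersonas<A: GroupActivity>(for session: GroupSession<A>) async {
    guard let coordinator = await session.systemCoordinator else { return }
    var config = SystemCoordinator.Configuration()
    config.supportsGroupImmersiveSpace = true
    coordinator.configuration = config
}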
Guided Access - Detect when setup (Eyes + Hands) is done
Hello, I am building a kiosk-style app for visionOS that will be used in Guided Access mode and handed to various visitors. Each of them will go through the hands + eyes setup, the standard Guided Access flow. I want my experience to auto-start playing content when setup is done. I looked everywhere but found no way to detect whether setup is complete. Adding any kind of interface to start the app manually is also risky, since buttons and other controls remain visible and interactable WHILE setup takes place. A delay-based approach won't work either, since setup can be skipped, failed, or done quickly or slowly; it takes between 10 seconds and a few minutes. So the question is: is there any way to get a notification, or check some bool or something, that will tell me that the hands + eyes setup in Guided Access mode is complete (or skipped)? Thanks in advance!
2
1
659
Feb ’26
RemoteImmersiveSpace sample isn't working
Hello, RemoteDeviceIdentifier returns nil, which crashes the HoverEffect sample project. I have visionOS 26 beta 2 on both devices. What is the correct method of running this code sample?
3
0
175
Jun ’25
Volumetric Maps in visionOS
At the moment, the MapKit APIs only support non-volumetric maps (i.e., in a window or in a volume, but on a 2D surface). Is support for 3D volumetric maps in visionOS in the works? And if so, when can we expect it to be available?
2
0
83
Jun ’25
How to open control center in Vision Pro’s Xcode simulator
I want to open the control center in Vision Pro’s Xcode simulator. Can I open it? If I can, please tell me how to do it. Thank you.
2
0
436
Aug ’25
What is the environment in the Vision Pro simulator sidebar?
If I long press on an element, the sidebar disappears and a Done button appears on the screen, but nothing else changes. So what do the Environments in the Vision Pro simulator sidebar actually do?
2
0
107
Aug ’25
Show text overlay for ImagePresentationComponent
With the new ImagePresentationComponent in visionOS 26, how can text/overlays be shown on top of the image as seen in the Spatial Gallery app?
2
0
193
Jun ’25
Metal Compositor Service & Persona (VisionOS)
Hello, I'm currently trying to make a collaborative app. But it just works only on Reality View, when I tried to use Compositor Layer like below, the personas disappeared. ImmersiveSpace(id: "ImmersiveSpace-Metal") { CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in SpatialRenderer_InitAndRun(layerRenderer) } } Is there any potential solution too see Personas in Metal view? Thanks in advance!
Replies
2
Boosts
0
Views
766
Activity
Sep ’25
Help with visionOS pushWindow issues requested
I first started using the SwiftUI pushWindow API in visionOS 26.2, and I've reported several bugs I discovered, listed below. Under certain circumstances, pushed window relationships may break, and this behavior affects all other apps, not just the app that caused the problem, until the next device reboot. In other cases, the system may crash and restart. (FB21287011) When a window presented with pushWindow is dismissed, its parent window reappears in the wrong location (FB21294645) Pinning a pushed window to a wall breaks pushWindow for all other apps on the system (FB21594646) pushWindow interacts poorly with the window bar close app option (FB21652261) If a window locked to a wall calls pushWindow, the original window becomes unlocked (FB21652271) If a window locked in place calls pushWindow and the pushed window is closed, the system freezes (FB21828413) pushWindow, UIApplication.open, and a dismissed immersive space result in multiple failures that require a device reboot (FB21840747) visionOS randomly foregrounds a backgrounded immersive space app with a pushed window's parent window visible instead of the pushed window (FB21864652) When a running app is selected in the visionOS home view, windows presented with pushWindow spontaneously close (FB21873482) Pushed windows use the fixed scaling behavior instead of the dynamic scaling behavior I'm posting the issues here in case this information is helpful to other developers. I'd also like to hear about other pushWindow issues developers have encountered, so I can watch out for them. Questions: I've discovered that some of the issues above can be partially worked around by applying the defaultLaunchBehavior and restorationBehavior scene modifiers to suppress window restoration and locking, which pushWindow appears to interact poorly with. Are there other recommended workarounds? I've observed that the Photos and Settings apps, which predate the pushWindow API, are not affected by the issues I reported. Are there other more reliable ways I could achieve the same behavior as pushWindow without relying on that API? I'd appreciate any guidance Apple engineers could provide. Thank you.
Replies
2
Boosts
3
Views
859
Activity
1w
Triangle Recommendation Limit for M5 Vision Pro
For the M2 Apple Vision Pro, there's "a general guideline, we recommend no more than 500 thousand triangles for an immersive scene, with 250 thousand for applications in the shared space." --https://aninterestingwebsite.com/videos/play/wwdc2024/10186/?time=147 Is there a revised recommendation for the M5 Apple Vision Pro?
Replies
2
Boosts
1
Views
802
Activity
Nov ’25
The folding and unfolding effect of the NBA sand table
I saw this magical sand table; the unfolding and folding effects are similar to spreading out cards, which is very interesting, but I don't know how to achieve it. I want to see if there are any ways to achieve this effect and to get some ideas. Can this effect be achieved with the existing APIs? (A sketch of one approach follows this entry.)
2
0
134
May ’25
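On whether this is achievable with existing API: an unfold like spreading cards can be approximated by animating each child entity from a stacked transform to a fanned-out one with Entity.move(to:). A minimal sketch; spacing and timing are arbitrary:

import RealityKit

// Fan a stack of "card" entities out along the X axis.
func unfold(_ cards: [Entity], spacing: Float = 0.15) {
    let width = spacing * Float(cards.count - 1)
    for (i, card) in cards.enumerated() {
        var target = card.transform
        target.translation.x = Float(i) * spacing - width / 2
        card.move(to: target, relativeTo: card.parent,
                  duration: 0.6, timingFunction: .easeInOut)
    }
}

// Folding back is the reverse: move every card to the same stacked transform.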