Opened 8 months ago
Last modified 8 months ago
#16864 assigned enhancement
Improve usability of Sony Spatial Reality 3D display with OpenXR
Reported by: | | Owned by: | Tom Goddard
---|---|---|---
Priority: | moderate | Milestone: |
Component: | VR | Version: |
Keywords: | | Cc: |
Blocked By: | | Blocking: |
Notify when closed: | | Platform: | all
Project: | ChimeraX | |
Description
Greg Pintilie has two glasses-free 3D flat-panel Sony Spatial Reality displays (27" and 16") that work with ChimeraX OpenXR (command "xr on").
While the Sony OpenXR runtime works with ChimeraX, Greg Pintilie has found the following problems, which I will work on fixing using his display:
1) Eye convergence seems off; it takes a few seconds for the 3D to be seen. I am guessing this is because in VR the eye views are parallel, and they are probably not parallel with these 3D displays.
2) ChimeraX mouse rotation does not rotate about the correct center of the molecule. Since mouse rotation isn't used in VR, this probably has never been tested before. Similarly, mouse zoom sometimes translates the scene horizontally or vertically on the 3D display.
3) You can't see the mouse position on the 3D display, which makes rotating difficult. The ChimeraX command "cofr showPivot" shows the center of rotation point, but you need to see where the mouse is to control the rotation direction, so I may need to add some display of the mouse. I am not sure how, since the mouse is in 2D and has no depth in the 3D scene.
4) The molecule jitters sometimes, probably due to jitter in the Sony eye tracking. This may need some smoothing (a sketch of one possible approach follows this list). Smoothing the view direction is not done in VR because it would instantly make the viewer nauseous, but it may be needed on the Sony display. If eye-tracking jitter is the problem, I am surprised the display's OpenXR driver does not do the smoothing itself.
5) Switching to ChimeraX OpenXR mode does not preserve the view direction that was seen on the 2D screen. In VR, ChimeraX does not control the view direction; the VR headset orientation does. This is somewhat different with a 3D display, where the user is usually looking straight at the display, so it makes sense to match the 2D and 3D view directions.
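For item 4, here is a minimal sketch of one kind of smoothing that might help, assuming the tracked head position is available every frame; the class, names and smoothing constant are illustrative only, not ChimeraX code:

import numpy as np

class HeadPoseSmoother:
    # Exponential moving average of the tracked head position.  A smaller
    # alpha smooths jitter more but adds latency (the trade-off noted later
    # in this ticket for the Sony eye tracking).
    def __init__(self, alpha=0.3):
        self.alpha = alpha          # smoothing factor, 0 < alpha <= 1
        self._smoothed = None       # last smoothed head position (x, y, z) in meters

    def update(self, head_position):
        p = np.asarray(head_position, dtype=float)
        if self._smoothed is None:
            self._smoothed = p
        else:
            # Blend the new sample with the previous estimate.
            self._smoothed = self.alpha * p + (1.0 - self.alpha) * self._smoothed
        return self._smoothed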
Change History (9)
comment:1 by , 8 months ago
comment:2 by , 8 months ago
Greg's initial impression of ChimeraX OpenXR with the Sony display:
From: Greg Pintilie
Subject: Re: [chimerax-users] Regarding Chimerax and Acer SpatialLabs 3D display/Acer OpenXR runtime
Date: February 11, 2025 at 3:18:03 PM PST
To: Tom Goddard
Tom,
Thanks for that detailed message... it works!!!
So yes, the Sony SR display comes with a utility that installs drivers, which must include the XR drivers.
A few initial reactions:
- once the image is shown on the 3D display, when glancing over, it takes a few seconds for the eyes to adjust and 'see' the 3D image; I get a bit of a crossed-eye feel, or like having to focus at the right distance, until the scene becomes 3D. I know with the other stereoscopic modes there were some options to change eye separation and eye-to-screen distance which could make the 3D effect more comfortable; I don't know if these would work for this XR mode. I tried changing the eyeSeparation but that didn't seem to have any effect. But I don't think this will be needed; the 3D effect seems just right, i.e. not too deep, not too flat. It was more to see if switching to and from would become more comfortable/quicker.
- it is very much like the 3D displays I've used before, including the nvidia 3D active glasses which are no longer supported. Recently, the only other 3D system I could use with ChimeraX has been a 3D projector with active glasses, putting ChimeraX into 3D SBS mode and letting the projector show each side alternately while synchronizing with the active glasses - works pretty well!
- I feel like these 3D displays are extremely useful for seeing complex molecular structures. Just now I started looking at an RNA molecule along with water and ions throughout. I also showed the density map as a transparent surface. It is incredible how much easier it is to 'see' the structure, interactions, and density map, compared to looking at a 2D display. I also usually get a bit of a goosebump feeling when the structure really pops up into full 3D, just like it's there for real in front of you, and with this XR mode it was no exception.
- it is also not much different than in VR I think, i.e. everything is just as clear and as 3D as with VR. With VR it does feel a bit more immersive, but as you said it is just a pain to get it on and off
- compared to the 3D projector setup, it's also very similar. With the projector setup, one advantage is that you can see the mouse on the 3D display, so I don't have to look back to the 2D screen to see where the mouse is, for things like rotating, selecting, etc. Because there is a bit of that few seconds for the eyes to adjust to the lenticular/XR display, it feels a bit more cumbersome - I wonder if it would be easy to show some indication of where the mouse is on the XR display
- also, with the XR display right now, it looks like I can't change the center of rotation; i.e. with the right mouse mode 'pivot', the pivot doesn't actually change; this is very useful when zoomed in to a local region and rotating around a point in that region to see it better. Going out of XR mode, the pivot seemed to start working again.
- since XR works, I think I will be switching to ChimeraX for these displays - for demos I will play with scripting animations to do rotations, zoom-ins, etc. Very excited for it, because moving stuff to Unreal Engine as I mentioned was a pain, and it's only limited to surfaces; I was just starting to see if I can export molecular models as surfaces, e.g. in glb files (obj files did not retain color), but I may not need to do this now :)
- I would love for you to see this. I was thinking I could either come over with one of these displays (we have 2), and could even leave one there for a few days so you can play with it more and refine the XR mode. I was actually thinking to ask if I could come by sometime anyway, it would be super cool to visit your lab at UCSF
- if you think it would be easier for you to come here instead, that would work too.
- what would you think about potentially Thursday or Friday this week?
I am here this week, and then I am going to the BPS conference in LA from Saturday until Wednesday. I am planning to take one of these SR displays with me to show some structures, so the timing for this is impeccable. I was probably going to struggle with Unreal Engine for the next few days but I don't think I will actually have to!
I still spend a lot of time remotely, so if not this week Thursday or Friday, next week may not work too well, since I will probably head out on Thursday this time. But I could potentially drop off the display to you on Thursday morning on my way to the airport :) I come here every month or so for a week, so it won't be long until I'm back.
Greg
comment:3 by , 8 months ago
These same issues probably affect the Acer SpatialLabs 3D display as described in ticket #16865.
comment:4 by , 8 months ago
Notify when closed: → gregdp@gmail.com
comment:5 by , 8 months ago
Notify when closed: gregdp@gmail.com
Reporter changed.
I made several improvements to the ChimeraX OpenXR code (command "xr on") for usability of the Sony Spatial Reality display (model SR1, 15.6" screen). There are a lot of behaviors desirable for a flat-panel 3D display that don't make sense for VR headsets, so it took about 5 days to make the following improvements.
I1) I made "xr on" preserve the current desktop window view in 3D. Previously it switched the view to align scene and room xyz coordinates (the initial orientation of models). Preserving the orientation helps switch from 2D to 3D without having to reestablish the viewpoint. It places the center of rotation point at a depth equal to the Sony screen position.
While OpenXR is enabled, the desktop window mirrors the right-eye view from the Sony, which can exhibit some stretching when the user is not looking at the Sony face-on from the middle of the display. When viewed from off-center, the rendering is done looking perpendicular to the screen but capturing only an off-center rectangle (i.e. oblique rendering).
When turning OpenXR off ("xr off"), ChimeraX switches to the normal monoscopic perspective camera and attempts to preserve the view orientation that was showing in 3D on the Sony display. (This is not done for VR headsets.)
I2) I made an optional toolbar icon "3D" that can be added to the Home toolbar using the Settings... entry of the toolbar popup menu. It turns xr on or off. This is a convenience for switching in and out of 3D mode without having to type commands.
I3) I made mouse zooming use as the zoom direction the line between the viewer's head and the center of the Sony screen (a sketch of this geometry follows I8 below). That is different from the Sony eye camera direction, which as noted above is always perpendicular to the screen. Zooming in the eye camera direction was causing the scene to appear to fly off horizontally.
I4) I made mouse zoom allow either scaling the scene while keeping the same depth relative to the screen, or moving the scene into or out of the screen. The normal zoom mouse mode does scaling, and holding the Alt key while zooming does moving. This is how mouse zooming works with other ChimeraX stereoscopic camera modes.
I5) I made xr on slow down the mouse zoom speed (command mousemode setting zoom speed 0.2), since at the normal speed the mouse wheel makes bigger jumps than desired, especially when moving the scene forward or backward relative to the screen by holding down the Alt key. The speed reverts to normal (1) when xr off is run.
I6) I made mouse selection work in the ChimeraX graphics window when in OpenXR mode. This required implementing a ray calculation routine to convert the window pixel position to a ray in the 3D scene (a sketch follows I8 below). This was not done before because the mouse is difficult to use with a VR headset: taking off the headset leaves the models out of view.
I7) I added a maxQuality option to the xr command (a scripting example follows I8 below). The purpose was to allow the Sony display to render full ambient shadows and high quality atom spheres and ribbons. Previously the xr command would switch to simple lighting, restrict the number of shadow directions to 8 if the user re-enables shadows, and reduce the number of triangles for atoms and ribbons to 1 million. If maxQuality is true it does not change or restrict lighting or graphics quality. maxQuality can be true, false or "auto". The default is "auto", which means true for the Sony display and false for VR headsets. The rationale is that the Sony display is not immersive, so a reduced frame rate is much less likely to cause nausea.
I8) I made "xr on" report the size in pixels of the rendered eye images. I was somewhat surprised that the size was 3840 x 2160 for the Sony, the same as its backing flat-panel. Since the display only uses some of the pixels for each eye I thought it might have the eye images rendered at lower resolution but it does not.
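To illustrate the zoom behavior in I3 and I4, here is a minimal sketch of the geometry, assuming the viewer's head position and the screen center are known in room coordinates in meters; the function and names are illustrative, not the actual ChimeraX code:

import numpy as np

def zoom_step(head_position, screen_center, wheel_delta, speed=0.2, move_scene=False):
    # Zoom along the line from the viewer's head to the center of the screen,
    # not along the eye-camera direction (which is always perpendicular to the
    # screen for this display, so zooming along it made the scene fly off sideways).
    screen_center = np.asarray(screen_center, dtype=float)
    direction = screen_center - np.asarray(head_position, dtype=float)
    direction /= np.linalg.norm(direction)
    step = speed * wheel_delta
    if move_scene:
        # Alt-zoom: translate the scene into or out of the screen.
        return ('translate', step * direction)
    # Normal zoom: scale the scene about a point at screen depth so the scene
    # keeps the same depth relative to the screen.
    return ('scale', np.exp(step), screen_center)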
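For the mouse selection in I6, the pixel-to-ray conversion can be done by unprojecting the pixel through the inverse of the combined projection and view matrices. A minimal sketch, assuming 4x4 numpy matrices and an OpenGL-style clip volume; this is not the actual ChimeraX routine:

import numpy as np

def window_pixel_to_ray(x, y, width, height, view_matrix, projection_matrix):
    # Window pixel (x right, y down) -> normalized device coordinates.
    ndc_x = 2.0 * x / width - 1.0
    ndc_y = 1.0 - 2.0 * y / height
    inv = np.linalg.inv(projection_matrix @ view_matrix)
    # Unproject points on the near and far clip planes, then divide by w.
    near = inv @ np.array([ndc_x, ndc_y, -1.0, 1.0])
    far = inv @ np.array([ndc_x, ndc_y, 1.0, 1.0])
    near, far = near[:3] / near[3], far[:3] / far[3]
    direction = far - near
    direction /= np.linalg.norm(direction)
    return near, direction   # ray origin and unit direction in scene coordinates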
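The settings mentioned in I5 and I7 can also be set from a Python script through the standard ChimeraX run() interface; for example (the option spellings follow the command names given above):

from chimerax.core.commands import run

# In a ChimeraX Python script, "session" is provided by the application.
# Start OpenXR without reducing lighting or graphics quality; "auto" is the
# default (true for the Sony display, false for VR headsets).
run(session, 'xr on maxQuality true')

# Slow the mouse-wheel zoom; "xr on" already sets this to 0.2 and
# "xr off" restores the default speed of 1.
run(session, 'mousemode setting zoom speed 0.2')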
Problems
P1) I could not find any way in OpenXR to determine the screen size (15.6" for the SR1). That is needed to position the scene correctly since OpenXR uses screen-space coordinates in meters. The OpenXR system name for the SR1 is "SonySRD System", and if the 27" Sony SR2 uses the same name then the current ChimeraX OpenXR will think it is 15.6". The OpenXR xrGetReferenceSpaceBoundsRect() call returned no bounds available for all coordinate spaces. Probably the SR1 and SR2 have different vendorId values, which could be used to distinguish the displays and hard-code different screen sizes (a sketch follows P10 below).
P2) Mouse rotation, translation, zooming, selection and all other mouse modes need to be done in the ChimeraX graphics window. Moving the mouse to the Sony display (it acts as an extended display) will not do anything when clicking. I was surprised that the Sony appears as a display to Windows. This is how the very early VR headset developer kits worked and is a bad design since the display is not really usable for 2D. The display is 4K and needs to be set to 100% scaling, which makes any application window on that display unusably tiny. It is probably possible but difficult to make the mouse work on the Sony display. I don't plan on trying. The Sony OpenXR implementation appears to create a full-screen window on the Sony display, owned by the ChimeraX process, but ChimeraX has no idea it exists, so even finding it via the Qt window toolkit might prove impossible.
P3) Greg Pintilie and I discussed the idea of maybe having the mouse in the ChimeraX window tracked by a 3D mouse model that could be seen on the Sony. That might be useful for pointing or selecting, although it is problematic since the mouse pointer is in 2D so it may appear behind scene objects. Also the pointer would only align with one of the user's eyes. I haven't tried implementing this. Not being able to do anything besides rotating, translating, zooming and looking before returning to the 2D ChimeraX display limits the usefulness of the Sony display.
P4) Greg mentioned molecule jitter, likely because the eye tracking is jittering. I only rarely noticed this. Instead what I noticed all the time was eye tracking lag: as I move my head I see double images until I stop, or I must move my head very slowly (1 cm/sec). I suspect this is because the eye tracking is being heavily smoothed to avoid jitter and the smoothing introduces latency. The Spatial Reality Settings application supposedly has options to trade off speed and quality according to web documentation, but those options do not appear with the SR1 display connected. I think they are SR2 only.
P5) The eye tracking completely fails if I turn off the lights. My reason for turning off the lights was that the screen is tilted 45 degrees, so it perfectly reflects the ceiling lighting into the user's eyes. The screen has anti-reflective coatings to reduce this and is bright, so the reflection does not destroy the view, but it reduces the display's usefulness.
P6) Greg noted a bit of difficulty seeing the 3D, taking a few seconds to converge. I did not have much trouble with this with the scene being automatically centered at the screen center. I think the convergence problems come from too much depth, when the model is either far behind or far in front of the display. I'm a bit surprised that I have not seen a display setting for inter-pupillary distance. The display would definitely need to adjust rendering for users with different left-eye to right-eye separation. I don't think the eye tracking camera can determine the eye spacing. I should check what OpenXR reports as the eye spacing.
P7) Another drawback is that the display's xr_runtime_server process uses 80% of my total CPU (i7-12700K, 8 performance cores, 4 efficiency cores, about 2x faster than the minimum recommended i5-9600). This causes the fans to rev to their highest speed, very loud, after about a minute of use. GPU use (Nvidia RTX 3070) is at 16% with no ChimeraX models open. So apparently the Sony OpenXR driver is doing either eye tracking or image composition on many CPU cores instead of the GPU, which seems like a horrible implementation. When ChimeraX is showing a medium size protein (PDB 8xps) the load changes negligibly, with GPU use rising to 20%. ChimeraX CPU use is about 7%.
P8) Another drawback is that the Sony OpenXR driver appears to only be available for Windows. I used Windows 11. But most potential users at UCSF would want to use it with Mac laptops.
P9) Another drawback is that it only works for one viewer since the eye tracking tracks a single user's eyes. This mostly eliminates the possibility of using the display in a discussion between 2 people. Supposedly the SR2 model has a button that can be configured to toggle between tracking 2 viewers, but still only one can see the 3D at a time making it of limited use.
P10) Another limitation is that the perceived resolution is only about HD (1920 x 1080) in my test, where I made many small atom labels and compared to a full-screen image in the ChimeraX window on an HD display. The Sony was a little worse than the HD display. The 3D view also has quite noticeable screen-door cross-hatching on all bright objects. It gives a nice texture so was not distracting.
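Regarding P1, if the SR1 and SR2 do report different system names or vendorId values, a small hard-coded table would be enough to pick the physical screen width. A minimal sketch under that assumption; only the "SonySRD System" name comes from the SR1, the SR2 entry and vendorId values are hypothetical placeholders:

# Map the OpenXR system name (and optionally vendorId) to the physical screen
# width in meters, needed to position the scene in screen-space coordinates.
_SCREEN_WIDTH_METERS = {
    ('SonySRD System', None): 0.345,   # 15.6" SR1, 16:9, width ~0.345 m
    # ('SonySRD System', SR2_VENDOR_ID): 0.598,   # 27" SR2 -- hypothetical entry
}

def screen_width_meters(system_name, vendor_id=None, default=0.345):
    # Prefer the most specific (name, vendorId) match, fall back to the name alone.
    return (_SCREEN_WIDTH_METERS.get((system_name, vendor_id))
            or _SCREEN_WIDTH_METERS.get((system_name, None), default))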
comment:6 by , 8 months ago
The improvements listed in the preceding comment are in the March 5, 2025 ChimeraX daily build.
Next I plan to make a web page describing the strengths and weaknesses of the Sony display. And next week I expect I'll get a very similar Acer SpatialLabs 3D display, and I'll investigate why that display is not working with ChimeraX OpenXR. I expect the Acer display will behave similarly to the Sony, with the main difference possibly being price, $4750 for the 27" Sony vs $3000 for the 27" Acer.
comment:7 by , 8 months ago
The Sony OpenXR seems defective in how it handles the display sleeping or being turned off. If the display goes to sleep after using ChimeraX OpenXR there appears to be no way to recover -- the xrBeginFrame() call always fails even after the display wakes. Turning OpenXR off and on (xr off ; xr on) gets it working again. If you try to start OpenXR when the Sony display is powered off the behavior is worse: it allows the OpenXR connection but then won't allow drawing, failing in xrBeginFrame(), and turning OpenXR off gives an error about not being able to close the OpenXR session. The only fix for that is to restart ChimeraX. I added an error message to explain this bad behavior as detailed in bug #17022.
comment:8 by , 8 months ago
I made a web page showing examples and discussing limitations of the Sony display
https://www.rbvi.ucsf.edu/chimerax/data/sony3d-mar2025/sony3d.html
comment:9 by , 8 months ago
It could be relatively easy to mirror the mouse pointer position from the ChimeraX graphics pane to the Sony 3D display. The right eye rendering could simply draw an indicator such as a green square at the texture position corresponding to the current mouse position at every frame. By drawing it only on the right eye I think the illusion would be that the green square is painted on whatever object it sits over. One drawback is that the green square would also show in the ChimeraX panel since it mirrors the right eye Sony image. Being able to see the mouse position in the 3D view would allow selecting objects. But then doing actions on those selections, such as pressing Toolbar buttons, still would require looking back to the 2D display to position the mouse. But typing a command (e.g. color sel orange) could be done without looking at the 2D display if you trust your typing and don't need to see what was typed.
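A minimal sketch of the coordinate mapping such an indicator would need, assuming the graphics pane mirrors the right-eye image at the same aspect ratio; the names are illustrative, not actual ChimeraX code:

def mouse_to_eye_texture_pixel(mouse_xy, window_size, eye_texture_size):
    # The ChimeraX graphics pane mirrors the right-eye image, so a window pixel
    # maps onto the right-eye render texture by proportional scaling.
    mx, my = mouse_xy                 # mouse position in the graphics pane, pixels
    wx, wy = window_size              # graphics pane size, pixels
    tx, ty = eye_texture_size         # right-eye texture size (3840 x 2160 here)
    u, v = mx / wx, my / wy
    # Depending on the texture origin convention, v may need to be flipped (1 - v).
    return int(u * tx), int(v * ty)

# A small green square could then be drawn at this texture position in the
# right-eye image only, each frame, just before the frame is submitted.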
Greg says he can loan me one of his Sony displays for a month. They are usually at the SLAC cryoEM facility since they belong to Wah Chiu's lab. Greg is at SLAC about 1 week per month. He may be able to drop off the display on Thursday, Feb 20, or I could drive to SLAC and get the display.