Opened 3 years ago

Last modified 3 years ago

#8375 assigned enhancement

Quest 2 VR viewer of ChimeraX GLTF scenes

Reported by: Tom Goddard Owned by: Tom Goddard
Priority: moderate Milestone:
Component: VR Version:
Keywords: Cc:
Blocked By: Blocking:
Notify when closed: Platform: all
Project: ChimeraX

Description

Allow viewing ChimeraX exported scenes in GLTF format on standalone VR headsets such as Quest 2.

This is similar to #8129 which proposes that the standalone headset also send hand-controller interaction back to a running ChimeraX which sends back updated scenes. The current ticket is simpler in that it does not offer VR control of the ChimeraX user interface.

From: Tom Goddard 
Subject: ChimeraX VR on Quest 2 headsets
Date: January 24, 2023 at 6:44:35 PM PST
To: "Cruz, Phil (NIH/NIAID) [C]" , "McCarthy, Meghan (NIH/NIAID) [C]" 
Cc: Chimera Staff 

Hi Phil, Meghan,

  I was dreaming today about where to go with ChimeraX virtual reality.  The Meta Quest 2 standalone headset is said to have 90% of the VR market share.  Here are some thoughts about how we could make ChimeraX VR much more widely used with Quest 2 headsets.

  The basic idea is to have ChimeraX export scenes, probably as GLTF, that the Quest 2 renders standalone.  In other words, the GLTF scene is transferred to the Quest 2 wirelessly, possibly directly from ChimeraX or through a file-sharing service like Google Drive or Dropbox.  Then the Quest 2 does everything to render it in VR.  Here are the advantages.

A1) You don't need a VR-capable computer.  Probably only 1% of ChimeraX users have such a computer.  You can use a laptop or whatever computer you usually use for ChimeraX.
A2) The Quest 2 is by far the most widely used headset, so more researchers already have one compared to the old tethered Windows PC headsets.
A3) A researcher only needs to borrow someone's Quest 2 to try it, and it is cheap to purchase if they find it useful ($400-$500).
A4) No wires, no base stations, so easier to use at the researcher's desk.
A5) The VR user interface will be much less complex, since it is not going to control ChimeraX.
A6) Scenes are copied to the Quest 2, so ChimeraX is not needed at all to show science models to others.
A7) Multi-person sessions are more likely to happen because more researchers have equipment.
A8) ChimeraX use will be normal desktop mode, no switching between VR mode and desktop mode.
A9) It is portable to new physical locations (e.g. between home and work), rather than fixed to a dedicated computer and room.

And here are disadvantages.

D1) The standalone Quest 2 graphics is not as powerful as PC graphics, maybe 5x slower; scenes must have fewer than 1 million triangles.
D2) Updating a model requires taking off the headset to use ChimeraX.  The VR is more like a special display.  The headset can show/hide parts of models or switch between previously loaded models, so some ability to change what is shown exists without ChimeraX.
D3) ChimeraX use requires some skill to simplify scenes.  We could make a ChimeraX tool that reduces atom/bond/ribbon/map quality to stay within the VR triangle limit.
D4) Requires a new Quest 2 app, probably written in Unity.  Many have tried similar things, e.g. Enduvo, Arkio, AltSpace, ....
D5) Might want some special rendering capabilities on headset, e.g. volumetric rendering or ambient shadows, adds complexity to Quest app.
D6) Dynamics will be inefficient if every time point needs a new scene.
D7) Multi-person meetings would ideally have built-in audio.  That is difficult and might require a commercial solution.  The more cumbersome alternative, a simultaneous Zoom meeting from a laptop, might not be able to use the VR headset microphone and headphones.
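The triangle budget in D1 and the quality-reduction tool proposed in D3 suggest a simple export-time check. A minimal Python sketch of the idea; the per-atom triangle counts, the budget constant, and the function name are illustrative assumptions, not actual ChimeraX code:

```python
# Hypothetical sketch of a triangle-budget check for VR export.
# The subdivision levels and counts below are illustrative only.

TRIANGLE_BUDGET = 1_000_000  # Quest 2 standalone limit from D1

# Approximate triangles per atom sphere at decreasing quality levels.
SPHERE_TRIANGLES = [320, 80, 20]

def choose_quality(num_atoms, budget=TRIANGLE_BUDGET):
    """Pick the highest sphere subdivision whose total stays in budget.

    Returns the triangles-per-atom value, or None if even the lowest
    quality exceeds the budget (the scene must be simplified further).
    """
    for tri_per_atom in SPHERE_TRIANGLES:
        if num_atoms * tri_per_atom <= budget:
            return tri_per_atom
    return None
```

A 3,000-atom structure fits at full quality, a 40,000-atom structure only at the coarsest spheres, and 100,000 atoms exceeds even the coarsest level, so the tool would have to fall back to a different representation such as ribbons.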

  Overall this would be much more usable for researchers and could lead to much wider use.  This design largely separates ChimeraX from the VR rendering.  It is easier to learn and set up.  Handling the GLTF in Unity is done very well by glTFast, a free package (https://github.com/atteneder/glTFast) that appears to have excellent support.  It would be necessary to keep the Quest 2 app as simple as possible so we can implement it with our limited resources.

  I haven't put much work into ChimeraX VR in the past year.  Partly that is because I focused on machine learning structure prediction.  But another reason is that tethered PC VR technology seems to be in decline, replaced by the Quest 2 and similar standalone competitors: the Pico 4 and the more expensive Vive Focus 3 ($1300) and Vive XR Elite ($1100).  The only notable new tethered VR headset in the works seems to be for the PlayStation 5, dedicated to video gaming.  So we need to figure out how to utilize the standalone technology.

	Tom

Attachments (1)

quest2_dicom.png (2.3 MB ) - added by Tom Goddard 3 years ago.
Unity Quest 2 standalone app showing 16-plane DICOM of chest CT scan

Change History (14)

comment:1 by Tom Goddard, 3 years ago

From: Eric Pettersen 
Subject: Re: ChimeraX VR on Quest 2 headsets
Date: January 25, 2023 at 11:35:53 AM PST

Unless I'm misunderstanding something, I think another disadvantage is:

D8) The user cannot interactively interrogate the scene, e.g. determine chain IDs, residue numbers, and so forth.

--Eric
From: Tom Goddard
Subject: Re: ChimeraX VR on Quest 2 headsets
Date: January 25, 2023 at 11:50:12 AM PST
To: Eric Pettersen 

Hi Eric,

Yes, anything you would do in VR with the current ChimeraX hand-controller modes beyond rotation, zooming, moving, and show/hide of parts of the scene would not be available without going back to ChimeraX to make a new scene.  But the original scene could possibly contain labels for all residues that are hidden in VR unless you need to see residue numbers.  GLTF has a tree hierarchy of objects; one node could hold the labels, and the VR viewer could show a table of all the objects that can be clicked to show and hide them.

	Tom
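The show/hide-by-node idea above could lean on the glTF node tree plus the spec's `extras` field for application-specific data. A small Python sketch; the `hidden` flag inside `extras` is a hypothetical convention between exporter and viewer, not part of the glTF spec:

```python
# Hypothetical sketch: a glTF scene whose node tree has a labels
# branch that a VR viewer could list in a table and toggle on/off.
# The "hidden" key in extras is an assumed app-specific convention.
gltf = {
    "scenes": [{"nodes": [0, 1]}],
    "nodes": [
        {"name": "ribbons", "mesh": 0},
        {"name": "residue labels", "mesh": 1,
         "extras": {"hidden": True}},   # start hidden in VR
    ],
}

def toggleable_nodes(gltf):
    """Return (name, hidden) for each top-level node of the first scene."""
    rows = []
    for i in gltf["scenes"][0]["nodes"]:
        node = gltf["nodes"][i]
        hidden = node.get("extras", {}).get("hidden", False)
        rows.append((node.get("name", "node %d" % i), hidden))
    return rows

def toggle(gltf, name):
    """Flip the hidden flag on the named node (viewer-side show/hide)."""
    for node in gltf["nodes"]:
        if node.get("name") == name:
            extras = node.setdefault("extras", {})
            extras["hidden"] = not extras.get("hidden", False)
```

The viewer's UI table would just render the `toggleable_nodes` rows and call `toggle` on click, skipping the meshes of hidden branches at draw time.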

comment:2 by Tom Goddard, 3 years ago

I made a Unity app that loads GLTF files and lets you move them, to experiment with the utility of standalone Quest 2 VR. Mike Schmid wants to use the Quest 2 to show tomography data at the West Coast Structural Biology workshop at Asilomar, March 19-22, so I will try to do that. Testing the Unity app revealed many limitations in our ChimeraX GLTF export (transparency #8507, volume images #8511, text labels #8511, mesh lines #8506, lighting #8502) that I have since fixed; that will also help the NIH 3D pipeline produce diverse models in GLTF format.

My Unity app does very little: it just reads gltf files using the open source GLTFast Unity library and lets you move and scale them with hand controllers, using OpenXR.

GLTFast has a few ugly problems. First, it wants the built app to include all the compiled shaders, and there are many variants (currently 27 in use). But the needed shader variants depend on what is in the gltf file, which I don't know in advance. Compiling all variants takes a long time (tens of minutes to hours) and makes the app big. The shaders are trying to make the lighting exactly match the complex GLTF physically based rendering (PBR) light model (metallic surfaces, ...). It would be better for our use if it simply used standard Unity material shaders, even if they can't match the gltf lighting model perfectly; ChimeraX does not match the gltf lighting model either.

Another downside is that the GLTFast shaders simply don't work in some cases, rendering incorrectly. Vertex-colored transparent surfaces appear mostly opaque. Any metallicFactor > 0 fails to render light sources when the graphics quality settings don't allow per-pixel lighting for all of them. It looks like the GLTFast developer is in over their head trying to make the shaders cover all the cases reliably. I will check whether GLTFast has an option to use Unity standard shaders, and if not, look at other Unity GLTF libraries.

by Tom Goddard, 3 years ago

Attachment: quest2_dicom.png added

Unity Quest 2 standalone app showing 16-plane DICOM of chest CT scan

comment:3 by Tom Goddard, 3 years ago

I made a VR viewer of GLTF files in Unity called Lookie and have had Mike Schmid testing various versions of it for the past 2 weeks in preparation for showing it at a poster session of the West Coast Structural Biology workshop at Asilomar on March 20, 2023. I plan to help at that poster session.

comment:4 by Tom Goddard, 3 years ago

I enhanced the Lookie app to show pass-through video, calling the new app LookieAR. This required significant changes because the pass-through API is part of the Oculus OVRCameraRig, which uses the Meta Oculus XR plugin, while the Lookie app was using the generic Unity XRRig and the Unity OpenXR plugin. I added a user interface that lets you turn pass-through on and off.

The pass-through is pretty nice on the Quest Pro, where it shows the room in color. It also works pretty well on the Quest 2 with black-and-white pass-through. Both have rather low resolution, so you cannot read text in the pass-through video.

Unfortunately Mike Schmid found that the LookieAR app could not render nearly as many triangles. I tested how many triangles could be rendered smoothly with EMDB 12873 written as GLTF at various threshold levels. The Lookie app handled about 1 million before starting to flicker, while LookieAR only handled 300,000. If the 300K is exceeded in LookieAR (e.g. 500K triangles), the stutter makes you lose control of the Quest device, with updates only once per second and tracking lagging by 30 seconds, so you cannot even quit the app. The Lookie app degrades in performance much more gradually, and I was never unable to quit Lookie.

Tests showed that even with no pass-through in LookieAR (turned off, or with the components not even included) the same problems persist. Turning off MSAA (multi-sampling) allowed rendering about 400K triangles before the horrible performance. If I use the Unity XRRig but with the Oculus XR plugin, the same bad performance is observed. If I reverted to the Unity OpenXR plugin in LookieAR, the good 1 million triangle performance returned. I experimented for about half a day and was not able to find a way to make the Oculus XR plugin render larger models or avoid the catastrophic stutter/lag.

So currently LookieAR use has to be done carefully to not show too many triangles.

comment:5 by Tom Goddard, 3 years ago

It is some trouble to maintain both Lookie and LookieAR versions. There are several differences between them.

1) Unity OpenXR vs Oculus XR plugin.
2) Hand controller position and rotation differs. Have to adjust LeftWand and RightWand transforms. For Lookie position 0,-.05,.05 rotation 120,0,0. For LookieAR position 0,.05,.05 rotation 60,0,0.
3) Need to add z -0.8 offset to wand tracking to align with Oculus OVRCameraRig -0.8 offset so wands align with hands in physical space.
4) The Oculus plugin wants a variety of Unity rendering settings changed. It describes each of these under Project Settings (under Edit menu), Oculus, and has Fix buttons to make each change.
5) The headset position game object differs and is used by the ModelUI script to position the UI panel. The Wand component has to have the eye_center game object set to Main Camera for Lookie, or to CenterEyeAnchor for Oculus.
6) The show room checkbutton in the UI needs to be deactivated for Lookie.
7) The name of the app needs to differ (Lookie vs LookieAR) to allow both to be installed on the Quest. This is set in Project Settings / Player.
8) Enable XRRig / Main Camera for Lookie or OVRCameraRig for LookieAR.

There are probably more differences.

Not sure how to keep the code of both up-to-date when almost all of these differences are not in the code, but instead are settings in the Unity editor.

Last edited 3 years ago by Tom Goddard (previous) (diff)

comment:6 by Tom Goddard, 3 years ago

I renamed Lookie to LookSee and describe how to install and use it on this web page

https://www.rbvi.ucsf.edu/chimerax/data/looksee-mar2023/looksee.html

Last edited 3 years ago by Tom Goddard (previous) (diff)

comment:7 by Tom Goddard, 3 years ago

Slow Oculus XR rendering

To try to figure out why the LookSeeAR app using Oculus XR can only handle 300K triangles while the LookSee app using Unity XR can handle 900K triangles without flickering, I made two Unity test programs, !QuestOVRSpeedTest and !QuestXRSpeedTest. These just show a single OBJ model using either an OVRCameraRig or an XRRig.

I found the same 3x smaller triangle limit for Oculus XR. But if I changed the single setting (Edit / Project Settings / XR Plugin Management / Oculus) Stereo Rendering Mode from MultiView to MultiPass, then I got the same triangle limit (~1.5 million) as with the Unity XR test program. All other Oculus XR settings are the defaults, and Vulkan is being used. It looks as if Vulkan MultiView has performance problems in Oculus XR. There are Unity tickets about this from years ago that were supposedly fixed. The Unity XR test app is also using Vulkan (the default) but does not have a multiview setting (at least not that I found). MultiView means that both eye views are rendered in a single pass with two camera positions, while MultiPass renders the scene in two separate passes, one per eye.

Surprisingly, when I changed from MultiView to MultiPass in LookSeeAR it did not fix the problem; still only 300K triangles is smooth. But if I changed from Vulkan rendering to OpenGL ES 3.2 (in Project Settings / Player / Graphics APIs, with Auto Graphics API off and OpenGL ES at the top of the list above Vulkan), then it handled about 600K triangles, with MultiView or MultiPass giving the same result. Still slower than Unity XR. That could be related to the GLTFast shaders. So I tried a static OBJ model (EMDB 12873 at different thresholds) with LookSeeAR: 900K triangles was completely smooth, and 1200K was smooth with pass-through off and moderately flickery with pass-through on. Pass-through reduces the triangle limit with GLTFast models too: 700K GLTF triangles was smooth with no pass-through and moderately flickery with pass-through, while 600K GLTF triangles with pass-through was smooth with OpenGL ES and MultiPass.

It is not clear why the test program worked well with Vulkan MultiPass and an OBJ model while LookSeeAR with MultiPass and an OBJ model did not. My only idea is that the test program did not set up pass-through while LookSeeAR does. I could add pass-through to the test program.

The Oculus XR test program had OVRManager / Use recommended MSAA off, while LookSeeAR had it on. Turning it on in the test program with Vulkan multipass makes a 1.5M triangle OBJ stutter badly. Turning MSAA off in LookSeeAR made a 900K triangle OBJ smooth with pass-through and Vulkan multipass, while it was unusable with MSAA on. LookSeeAR with MSAA off handled a 1.2M triangle OBJ with only a glitch every few seconds with pass-through, and smoothly with pass-through turned off. It handled a 1.5M OBJ with uniform flicker (dropping every other frame) but no loss of tracking with pass-through, and with glitches every few seconds with pass-through off. It also handled 700K GLTF with infrequent glitches with pass-through and smoothly without pass-through, and 900K GLTF without pass-through with a glitch every few seconds.

Trying LookSeeAR with multisampling off and OpenGL ES 3.2 multipass gave the same performance as Vulkan multipass. With OpenGL ES 3.2 multiview it seemed slightly more glitchy at 700K triangle GLTF.

It looks like Vulkan multiview with MSAA off is the best choice for LookSeeAR. I did not notice more aliasing with MSAA off looking at EMDB 12873, but I did not attempt a detailed comparison. It appears that with this rendering mode it can handle 700K triangle GLTF or 1.2M triangle OBJ with infrequent glitches. So the standard Unity shaders used with OBJ can handle about 70% more triangles than the GLTF custom shaders. It appears that pass-through reduces the maximum triangles by about 25%.

The LookSee app with the Unity XRRig glitches about once per second with 900K GLTF. It has multisampling disabled (in Project Settings / Quality / Antialiasing). It is using Vulkan multiview. So it looks like LookSeeAR now performs as well as LookSee, and I can simplify to providing a single app.

Last edited 3 years ago by Tom Goddard (previous) (diff)

comment:8 by Tom Goddard, 3 years ago

Slow GLTFast Shaders

Comparing the maximum triangles for smooth rendering with static OBJ models and GLTFast models showed that about 70% more triangles can be handled with OBJ. I believe the difference is because GLTFast uses its own custom shaders to handle the GLTF lighting model (physically based rendering (PBR) roughness and metallicity), while with OBJ the default Unity shaders are used.

It would be quite helpful to get the higher triangle limits by using the Unity default shaders. I looked at three Unity GLTF libraries, GLTFast, GLTFUtility, and UnityGLTF, and it seems all of them use their own custom shaders to match the GLTF lighting model. That makes sense, since visually correct appearance is probably very important for gaming applications. Since the molecular models coming from ChimeraX don't use the GLTF lighting model, there is little benefit to using it, and standard Unity material shaders would be fine.

Maybe it is possible to take the GLTFast models and simply change the Material used to render each mesh from the custom GLTFast material to the Unity standard one. The Material includes the shader. I would need to see if GLTFast also uses a custom Mesh or the Unity standard Mesh. That can easily be investigated on the Windows platform instead of Android, using the Unity Editor to inspect the programmatically created models.

If GLTFast or another GLTF reader cannot easily be adapted to use standard Unity shaders, I could possibly port the Python GLTF reading code in ChimeraX to C#, about 400 lines.
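The GLB container itself is simple to parse, and is where such a port would start. A Python sketch of the standard glTF 2.0 binary layout (this mirrors the published spec, not the actual ChimeraX reader):

```python
import json
import struct

def parse_glb(data):
    """Parse a GLB container: return (gltf_json_dict, binary_chunk_bytes).

    Follows the glTF 2.0 binary layout: a 12-byte header (magic,
    version, total length), then chunks, each with a uint32 length
    and a uint32 type tag, all little-endian."""
    magic, version, length = struct.unpack_from("<III", data, 0)
    assert magic == 0x46546C67 and version == 2, "not a glTF 2.0 binary"
    offset, json_chunk, bin_chunk = 12, None, b""
    while offset < length:
        chunk_len, chunk_type = struct.unpack_from("<II", data, offset)
        chunk = data[offset + 8 : offset + 8 + chunk_len]
        if chunk_type == 0x4E4F534A:      # 'JSON'
            json_chunk = json.loads(chunk)
        elif chunk_type == 0x004E4942:    # 'BIN\0'
            bin_chunk = chunk
        offset += 8 + chunk_len
    return json_chunk, bin_chunk

# Build a minimal GLB in memory to exercise the parser.
doc = json.dumps({"asset": {"version": "2.0"}}).encode()
doc += b" " * (-len(doc) % 4)            # JSON chunk is space-padded to 4 bytes
glb = struct.pack("<III", 0x46546C67, 2, 12 + 8 + len(doc))
glb += struct.pack("<II", len(doc), 0x4E4F534A) + doc
```

The harder part of a port is decoding the accessors and buffer views inside the JSON into meshes, which is where most of the ~400 lines would go.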

comment:9 by Tom Goddard, 3 years ago

Single LookSee app

I reduced the two LookSee apps (with and without pass-through) to a single app since the Oculus XR version with pass-through now has the same speed as the Unity XR version using Vulkan multi-pass with MSAA off. The new app is called LookSee. I updated the web page describing the app and the Send to Quest ChimeraX tool for this single app.

https://www.rbvi.ucsf.edu/chimerax/data/looksee-mar2023/looksee.html

I also added a splash screen during LookSee startup that shows a molecule and the app name. Also I removed the .glb file suffixes from the Quest LookSee user interface panel.

Last edited 3 years ago by Tom Goddard (previous) (diff)

comment:10 by Tom Goddard, 3 years ago

Black surfaces at small scale factors

Mike Schmid pointed out that his tomograms look black in LookSee unless scaled to large size in VR. I reproduced this. Apparently the 1/30000 scale factor messes up vertex color rendering, perhaps because color interpolation across triangles runs into limited numeric precision. Fixed, ticket #8799.

comment:11 by Tom Goddard, 3 years ago

Added socket code to Send to Quest tool and LookSee app so GLTF files can be sent without using adb. Ticket #8509
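A length-prefixed framing over a TCP socket is enough for this kind of transfer. A Python sketch of the idea; the framing shown (uint32 size, NUL-separated file name, then the file bytes) is a hypothetical illustration, not the actual Send to Quest protocol:

```python
import socket
import struct
import threading

# Hypothetical sketch of an adb-free transfer: send a named GLTF file
# over a socket with simple length-prefixed framing.

def send_file(sock, name, data):
    """Frame the file as: uint32 payload size, name, NUL, file bytes."""
    payload = name.encode() + b"\0" + data
    sock.sendall(struct.pack("<I", len(payload)) + payload)

def recv_exact(sock, n):
    """Read exactly n bytes, looping over partial recv() results."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_file(sock):
    (size,) = struct.unpack("<I", recv_exact(sock, 4))
    name, _, data = recv_exact(sock, size).partition(b"\0")
    return name.decode(), data

# Exercise the framing over a local socket pair.
a, b = socket.socketpair()
t = threading.Thread(target=send_file, args=(a, "scene.glb", b"\x00" * 1000))
t.start()
name, data = recv_file(b)
t.join()
a.close()
b.close()
```

On the headset side the receiver would write `data` into the app's GLTF directory under `name` and refresh the file list in the UI.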

comment:12 by Tom Goddard, 3 years ago

SideQuest install of apk fails: signatures don't match

Mike Schmid and I found that installing LookSee version 3 with SideQuest failed with "INSTALL_FAILED_UPDATE_INCOMPATIBLE: Package com.UCSF.LookSee signatures do not match previously installed version; ignoring!]". To work around this we uninstalled LookSee on the headset before installing the new version. I'm not sure which signature does not match. The previous LookSee apk files were built on Windows and the version 3 apk was built on Mac; I'm not sure if that is related. It will require more investigation online.

Last edited 3 years ago by Tom Goddard (previous) (diff)

comment:13 by Tom Goddard, 3 years ago

Uninstall of LookSee deletes GLTF file directory

Uninstalling the LookSee app, using the Quest headset application pane for Unknown sources and clicking "..." next to LookSee and then Uninstall in the menu that appears, uninstalls the app and removes the GLTF directory on the Quest, including any .glb files it contained. Installing looksee_v3.apk with SideQuest does not create the com.UCSF.LookSee directory on the Quest; it gets created when the application is first run.

Last edited 3 years ago by Tom Goddard (previous) (diff)