[chimerax-users] VR Features
goddard at sonic.net
Fri Jul 21 14:06:29 PDT 2017
Good to hear the ChimeraX VR is working for you. A couple of weeks ago I added the ability to put text labels on atoms, residues and bonds using the new "label" command. I have not tried it with VR, and it quite possibly will not work correctly, because the labels are oriented to face the camera and in VR there are two cameras, one per eye. But I will try to test it, and fix it if needed, next week. It would not be hard to add an icon so that pressing a button on the HTC Vive hand controller puts atom and residue names on the atoms and residues you point at. If you wanted custom text in the label, the question is how the VR user would type in that text.
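To illustrate the two-camera orientation problem, here is a minimal sketch (not the actual ChimeraX code; the function name and inputs are assumptions for illustration): with a mono camera a label faces the one eye position, while in VR it can instead face the midpoint between the two eye positions so the same label geometry looks right to both eyes.

```python
import numpy as np

def label_orientation(label_pos, eye_positions):
    """Return a unit vector pointing from the label toward the viewer.

    eye_positions holds one point for a mono camera, or both eye
    positions for a VR stereo camera; aiming at their midpoint gives
    a single orientation usable for both eyes.
    """
    target = np.mean(np.asarray(eye_positions, dtype=float), axis=0)
    v = target - np.asarray(label_pos, dtype=float)
    return v / np.linalg.norm(v)

# Mono camera: label faces the single eye.
print(label_orientation([0, 0, 0], [[0, 0, 5]]))                    # -> [0. 0. 1.]
# VR stereo: label faces the midpoint between the two eyes.
print(label_orientation([0, 0, 0], [[-0.03, 0, 5], [0.03, 0, 5]]))  # -> [0. 0. 1.]
```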
We are going to try two VR headsets viewing the same scene, where you can see the other person's hand controllers and head, so you can point at things and have a conversation, even a remote one between different universities. I ordered an Oculus Rift just this week (it has not shipped yet) so we have a second headset to develop this, and as soon as I get something working it would be great to try a two-person connection between UCSF and NIH. I looked into driving two headsets with one computer a while back, and I don't think SteamVR, the programming API we use, allows that. It would also slow the graphics rendering by 2x, and the two viewers would be in the same physical space and might collide with each other. So the plan is that the two headsets will be driven by two different computers, each running its own copy of ChimeraX, and I will add code to pass synchronization information between them to show the positions of the other person's hands and head.
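The synchronization information each ChimeraX instance would exchange is small: just the poses of the other person's head and hand controllers. A minimal sketch of a wire format, assuming a newline-delimited JSON message (the message names and format are my assumptions, not the ChimeraX protocol):

```python
import json

def encode_pose(head, left_hand, right_hand):
    """Pack head and hand-controller positions (x, y, z triples in
    scene coordinates) into one newline-terminated JSON message,
    suitable for streaming over a TCP socket between the two
    ChimeraX instances."""
    msg = {"head": head, "left": left_hand, "right": right_hand}
    return (json.dumps(msg) + "\n").encode()

def decode_pose(data):
    """Unpack one received message back into a dict of positions."""
    return json.loads(data.decode())

# Round trip: what one instance sends, the other reads back unchanged.
msg = encode_pose([0, 1.6, 0], [-0.3, 1.0, -0.4], [0.3, 1.0, -0.4])
print(decode_pose(msg)["head"])  # -> [0, 1.6, 0]
```

A real implementation would also carry controller orientations and stream these messages continuously per frame, but the round trip above shows the basic idea.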
I'm a bit puzzled by the other applications you mention for "extra cameras" rendered by ChimeraX. Is the green-screen idea that you have a real video camera pointed at the person using VR, and you want to blend that view of the person's body with the molecules they are seeing in VR? I've looked into that and tried it once. For that you would want ChimeraX to render a video stream of the molecules as seen from the physical camera position. The problem is that this is not sufficient: to blend the molecules with the person's body you need to know the depth of each -- are the molecules in front of or behind parts of the person's body? Cameras that provide a depth field are available, and ChimeraX can stream the depth field of the molecules, but putting this all together would be a lot of work. I don't consider this one too high a priority, because it is mostly useful for showing people who aren't using the VR what the person using it is seeing -- nice for demos, but not needed in actual use of the VR for research.
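The depth-based blending described above amounts to a per-pixel depth test between the two sources. A minimal sketch with NumPy, assuming aligned RGB and depth images from the physical camera and from the ChimeraX rendering (alignment and calibration, which are the hard part, are glossed over here):

```python
import numpy as np

def composite(camera_rgb, camera_depth, render_rgb, render_depth):
    """Blend a video frame of the person with a rendered molecule frame.

    For each pixel, show the molecule where it is nearer to the camera
    than the physical scene, otherwise show the video pixel.  All four
    arrays are assumed registered to the same camera viewpoint.
    """
    nearer = render_depth < camera_depth            # molecule in front?
    return np.where(nearer[..., None], render_rgb, camera_rgb)

# Tiny example: 1x2 image, molecule in front of the body on the left
# pixel (depth 0.5 < 1.0) and behind it on the right (2.0 > 1.0).
cam = np.zeros((1, 2, 3))                           # video frame (black)
ren = np.ones((1, 2, 3))                            # molecule frame (white)
out = composite(cam, np.array([[1.0, 1.0]]), ren, np.array([[0.5, 2.0]]))
print(out[0, 0], out[0, 1])                         # -> [1. 1. 1.] [0. 0. 0.]
```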
I'm not quite sure how extra ChimeraX cameras help you with "mixed reality". For that you need cameras on the headset to blend ChimeraX rendering with the view of real objects. The HTC Vive has a single front-facing camera on the headset -- I've never tried using it and have never seen it used. As mentioned above, you can't do correct blending of virtual and real scenes without knowing the depth of the physical objects seen by the camera, and the Vive has no depth sensor. Mixed-reality headsets like the Meta2 have front-facing depth sensors that locate physical objects in front of you, so virtual rendering can be overlaid on those objects at the correct depth.
> On Jul 21, 2017, at 9:20 AM, Fortney, Christopher (NIH/OD/ORS) [C] wrote:
> Hello Chimera X Team,
> My name is Chris Fortney – I run the VR Program at the NIH Library. We’ve been using ChimeraX for VR demonstrations here for some time now, ever since James Tyrwhitt-Drake (3D Print Exchange, NIAID) and I first installed it here earlier this year. The word has spread at NIH – people like ChimeraX for VR! I was hoping I could just email you to introduce myself, and to give feedback on the main features that have been requested. The first being the ability to make selections and then view/add labels while in VR. The second being the ability to add additional cameras to the scene. This would be especially useful for mixed reality. We have a green screen here, and would love the ability to drop a 3rd camera into the scene for that. Being able to add additional cameras could be very useful to run multiple headsets in the same scene, so that two scientists could work on the same structure at the same time.
> Looking forward to hearing back, keep up the excellent work! It definitely has not gone unnoticed over here.
> Chris Fortney
> ChimeraX-users mailing list
> ChimeraX-users at cgl.ucsf.edu
> Manage subscription:
> http://plato.cgl.ucsf.edu/mailman/listinfo/chimerax-users