Hi Matthias,

   The link Elaine mentioned describes how I made the augmented-reality video. Briefly, I use an Intel RealSense D435 depth-sensing camera that captures video of me and the room, and ChimeraX blends that video with the molecular models in real time using the ChimeraX RealSense tool (installed from the ChimeraX menu Tools / More Tools...). So the desktop display shows the ChimeraX graphics with the room video and the molecules combined in real time, and I simply screen-capture that with FlashBack Pro 5. There is a little more optional hardware: I locate the RealSense camera in the room with a Vive Tracker mounted on the camera. While in VR, I see a rectangular screen at the camera's position showing live what the camera sees, blended with the models. So when I look at the RealSense camera I see exactly what is being recorded and can frame the molecules and myself in the video. I do not use the headset cameras for pass-through video.

   The movement of the spike binding domain in the coronavirus video is a morph of PDB 6acg, 6acj, and 6ack, three conformations seen by cryoEM.

   Here is another augmented-reality video I made, on opioids:

	https://youtu.be/FCotNi6213w

   I think this augmented-reality capture can be very useful for presenting results about 3D structures, whether as supplementary material for science publications or for the public. A few people have said they are getting the depth-sensing camera to try it, but I don't know of anyone who has done it yet.

	Tom


> On Mar 12, 2020, at 10:35 AM, Elaine Meng wrote:
> 
> Hi Matthias,
> Tom wrote a nice summary of his process for making mixed-reality videos here:
> 
> <https://www.cgl.ucsf.edu/chimerax/data/mixed-reality-nov2019/mrhowto.html>
> 
> That may address the first and third questions.
> 
> As for the middle question, there is a mouse mode (or VR hand-controller button mode) for bond rotation. However, my guess is that he instead made a morph trajectory between the two conformations beforehand and then used the "play coordinates" mouse mode (flipping through the different sets of coordinates in a trajectory model)... Tom would have to confirm whether my guess is correct.
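> 
> As a rough sketch (not necessarily exactly what Tom did, and the model numbers are only illustrative), the morph and the mouse-mode assignment could look like:
> 
>     # Open the three cryoEM conformations from the PDB (models #1-#3).
>     open 6acg
>     open 6acj
>     open 6ack
>     # Interpolate between them; the morph trajectory opens as a new model (assumed to be #4 here).
>     morph #1,2,3
>     # Play through the whole trajectory once...
>     coordset #4
>     # ...or assign the "play coordinates" mode to a mouse button to scrub through frames by dragging.
>     mousemode right "play coordinates"
> 
> In VR, the same mode can be assigned to a hand-controller button, for example by clicking the mode's toolbar icon with the controller.
> 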
> One would generally use the bond-rotation mode when zoomed in on atoms/bonds shown as sticks, so that it is easy to start the drag on a specific bond.
> 
> Mouse modes and their toolbar icons:
> <http://rbvi.ucsf.edu/chimerax/docs/user/tools/mousemodes.html>
> 
> I hope this helps,
> Elaine
> -----
> Elaine C. Meng, Ph.D.
> UCSF Chimera(X) team
> Department of Pharmaceutical Chemistry
> University of California, San Francisco
> 
>> On Mar 12, 2020, at 4:16 AM, Matthias Wolf wrote:
>> 
>> Hi Tom,
>> 
>> I really liked your CoV movie https://www.youtube.com/watch?v=dKNbRRRFhqY&feature=youtu.be
>> It’s a new way of storytelling. Although we have used ChimeraX in the lab with a Vive and a Vive Pro for about 2 years, it’s usually one person at a time, with a dark background. But your way opens up interactive VR to a larger audience (even if they don’t get to enjoy the stereoscopic 3D). And it’s cool.
>> 
>> I have some questions:
>> 	• How did you overlay the ChimeraX viewport, synced with the live camera feed showing yourself? Did you use a frame grabber on a different PC to capture the full-screen ChimeraX VR viewport while simultaneously recording the camera video stream, e.g. using Adobe Premiere?
>> 	• How did you control flipping out the outer spike domain with your hand controller? I guess you assigned control of a torsional angle in the atomic model to a mouse mode?
>> 	• Did you enable the headset cameras to orient yourself in the room?
>> 
>> Thanks for continuing to improve ChimeraX and VR!
>> 
>> Matthias
> 
> _______________________________________________
> ChimeraX-users mailing list
> ChimeraX-users@cgl.ucsf.edu
> Manage subscription:
> http://www.rbvi.ucsf.edu/mailman/listinfo/chimerax-users