NIAID-UCSF 2024 Contract Statement of Work -- March 1 - September 30, 2024
This SOW is split into two parts. Part 1 is intended to be accomplished in the first 6 months of the contract. If specific items are completed, items from part 2 will be brought forward. Should NIAID extend the contract to 1 year, those items in part 2 not completed during the first 6 months will be worked on.
Part 1
- General ChimeraX improvements to support NIAID-specific requirements to assist NIAID personnel to transition away from the unsupported legacy Chimera program, e.g.:
- Investigate and improve ChimeraX usability for very wide displays and touch screens
- Specific focus on the BioViz lab wall display
- Worms depiction
- More GUIs (notably, copy/combine, 2D labels)
- Support for showing thermal ellipsoids
- Support the NIH 3D pipeline development, including any changes to ChimeraX to support ongoing development
- Continuing support for NIH3D as needed
- Updating workflows
- Quick submits for AlphaFold database entries
- Improve GLTF output to include structure hierarchy
- Investigate adding support for ChimeraX sessions in NIH3D
- Both uploading and downloading
- Need to check on any possible security issues
- Extend virtual reality support
- Implement use of pass-through video with the Quest 2/3/Pro for multi-person sessions in ChimeraX VR
- Improve molecular viewer for standalone headsets such as Quest 2/3.
- Add “disable/enable buttons” commands to better support handing off the controls to another user without the scene getting inadvertently changed.
- Explore pedagogical benefits of ChimeraX in VR vs. flat screen
- As needed, provide support to the University of Indiana (Andi), UCSF, and NIAID to conduct a task analysis comparing VR vs. flat screen for understanding biological macromolecules
- Medical Images
- Improve presets for medical images
- Move medical imaging functionality to toolshed
- Explore re-skinning the UI when switching to medical imaging
- Support for automated segmentation
- Investigate adding TotalSegmentator
- Outreach
- Instructional material and tools documentation.
- Detailed instructions for all features shall be provided in a user manual.
- Written user guides and tutorials shall be available as HTML pages.
- Improve documentation for multi-person VR
- Attendance at meetings or workshops as required by NIAID
- Administration
- Submit monthly written reports of accomplishments
Part 2
- General ChimeraX improvements to support NIAID-specific requirements to assist NIAID personnel to transition away from the unsupported legacy Chimera program, e.g.:
- Energy minimization
- Support for MD analysis
- Improve the altloc explorer
- Rewrite ViewDockX
- Read VRML/X3D
- Support the NIH 3D pipeline development, including any changes to ChimeraX to support ongoing development
- Continuing support for NIH3D as needed
- Add support for TotalSegmentator
- Investigate adding support for ChimeraX sessions in NIH3D
- Both uploading and downloading
- Need to check on any possible security issues
- Extend virtual reality support
- Improve user experience in ChimeraX VR, e.g.
- Implement a VR ergonomic toolbar and Model panel user interface.
- Add support for a voice interface in VR mode
- Explore pedagogical benefits of ChimeraX in VR vs. flat screen
- Collaborate with the University of Indiana (Andi), UCSF, and NIAID to conduct a task analysis comparing VR vs. flat screen for understanding biological macromolecules
- Medical Images
- Implement new rendering and lighting modes for medical images
- Continue improvements to the DICOM reader by including more data types such as segmentations, and making it more robust by testing against the NCI TCIA repository.
- Improve VR experience for medical images
- Easier manipulation of windowing and leveling, especially for complex curves
- Particularly support for fine-grained changes
- Improve segmentation tool by adding commands to support multi-person VR
- Investigate adding support for 2D views in VR
- General usability improvements for using ChimeraX with medical images driven by TCIA data
- Support for automated segmentation
- Add support for ML-based tumor segmentation tool
- Outreach
- Instructional material and tools documentation.
- Detailed instructions for all features shall be provided in a user manual.
- Written user guides and tutorials shall be available as HTML pages.
- Create videos demonstrating new capabilities.
- Present webinar and workshop tutorials to train users on existing and new capabilities.
- Create video tutorials for how to use multi-person VR.
- Do outreach using VR in particular (live presentations)
- Improve documentation for multi-person VR
- Attendance at meetings or workshops as required by NIAID
- Administration
- Submit monthly written reports of accomplishments
Meeting Minutes
4/25/24
TomG, Eric, Zach, Scooter; Phil, Meghan, Andi Bueckle, Mike Bopf, Bhinnata
Phil: our "take your child to work day" activity was starting a ChimeraX VR meeting, and there was already a meeting name that suggested others had the same idea! Went well, kids enjoyed it. Meghan: we're tired from doing demos today. Andi: I have a VR demo (Apple Vision Pro) if we have time.
Phil: latest on working with passthrough? TomG: haven't changed it in a few weeks. Scooter tried it recently and found that a developer account is needed to change a necessary setting for passthrough. TomG: related to security.
Scooter: I made a long list of requests to TomG and Zach, can share with NIAID folks. I would bring up a menu to the right of where I was viewing the scene. Most importantly, I wanted multiple panels to stick together so they didn't have to be moved individually. Also I kept bumping my hand on the physical screen trying to select with the cone. It would be nice to have an interface for selecting from a distance, like a laser pointer. Resizing panels was confusing.
Phil: during one session on one machine, I would always get the checkbox below the one I was clicking on. Not very reproducible. It was the "show/hide" checkbox in the Model Panel. Meghan: maybe low battery on controller? TomG: there is a conversion between VR space location and desktop screen location, and maybe it was off by a certain vertical amount.
TomG: we recently added a button lockout so that you can hand over controls without messing everything up. Need a new daily build. Phil: now we can easily push the new version to multiple machines.
3D Workflow. Mike: how many visualization output files are we going to get for alphafold structures? Phil: that ball's in my court, to decide on which representations and their filenames. Will let Eric know to work on new presets. Phil: and then the script needs to take the outputs from the presets and write out the additional files.
Over to Andi for demo: sharing screen from the Vision Pro, he shows augmented reality of organs in shiny clear bubbles floating around his office. There are interaction panels: a toggle switch explodes the organ with labels on all of its parts and toggles back to reassemble it; left/right on the other interaction panel chooses a different organ to focus on.
Discussion of VR user interface improvements, like showing the command line in the headset (hard to see via passthrough). Meghan: cisco upgrade will be very expensive. Scooter: wifi 7 has been released. Meghan: but will the headsets support it, and will we be allowed to use it here? Scooter: I just mean that you may want to investigate other options before paying cisco big bucks for an upgrade. TomG: in my experience, passthrough requires cable connection for good performance. I don't think it is a hardware issue (lack of wifi bandwidth), but a Meta software problem with using airlink. The developer option is to allow passthrough with questlink (the cable), doesn't mention airlink. TomG: you do sharing with larger numbers of headsets than we do; wifi 7 may help in that case. Meghan: a cable is a nuisance with rolling chairs and might get damaged.
All the rest of the notes from today are text that Meghan sent through chat:
- USB: 192.6 Mbps peak on Google Earth at defaults (200 Mbps)
- 188.2 Mbps peak on ChimeraX at defaults (200 Mbps)
- 413.3 Mbps peak on ChimeraX with max set to 500 Mbps
- 846.2 Mbps peak on Medical Holodeck with max set to 960 Mbps
Testing wired connection and pushing bitrates to the limit
Used Wireshark with USBPcap (requires a special install file that gets moved into the Wireshark program files); see https://desowin.org/usbpcap/tour.html
Next step: same testing wired/wireless focused on normal day-to-day use
Have a baseline to measure against
Torrey will investigate use of Cisco Meraki cloud-based WLC
4/11/24
TomG, Eric, Zach, Greg, Elaine, Scooter; Phil, Kristen, Meghan, Mike Bopf, Bhinnata
Zach: did you see the email I sent about a radiology conference later this year, are you going? Meghan: someone will go but we may not be hosting a booth. May be useful for you to attend just to learn, keep it in mind.
Scooter: we still don't have a contract finalized even though we were told it was fine to start the work. I'll email and ask for status.
Meghan: what's going on with workflows? Kristen: we need to figure out what we want re AlphaFold, still haven't gotten a list of specific requests. Quick submit is when the user enters an accession code (will be obtained using ChimeraX fetch) rather than uploading their own structure.
Scooter: we should talk about AlphaFold and GLTF. We would like to add a "publish to Schol-AR" feature in ChimeraX using REST interface. Developer Tyler Ard is a medical imaging guy and has given us some feedback about the ChimeraX tools. We would like to visit him at USC, give demos, discuss features, and try to engage his colleagues as well.
TomG: Meghan's powerpoint gltf example used an older ChimeraX. Newer ChimeraX (Oct 2023) gltf export has better treatment of color, so looks better in powerpoint. Discussion of whether the vrmls will then need correction since those converted from the older gltfs are fine. Meghan: let's test with newer ChimeraX. Phil: it looks good enough with newer ChimeraX that we don't need any special preset for gltf export for powerpoint.
Kristen: another possibility is color STL. Greg: recommend staying away from STL. Kristen, Meghan: or 3MF or USD (Universal Scene Description). Kristen: there are various flavors, USDZ etc.
TomG: should NIH3D put in lighting, or leave that to the end user's application? Kristen: probably better to leave it out so that you don't get stuck with both lighting from the exported file and the application, and no reasonable way to remove lights.
Discussion of lighting direction vs. camera direction. If there is a headlight you avoid dark sides of models, but that approach can overly wash out white or other light-colored models.
Phil: for AlphaFold, we probably want at least 2 colorings, by PAE domain and by pLDDT. Tom: pLDDT is well-known but PAE domain coloring is more specific to ChimeraX and may require more explanation or at least thought as to whether you really want to make it available. Parts of more than one chain can be assigned to the same PAE domain. Phil: for quick submit, it will be only one protein. My instinct is that we still want to include it for alphafold database entries, along with a good explanation. Phil: other issue is whether to hide low-pLDDT spaghetti. What is a reasonable cutoff? Tom: pLDDT 50. Tom: keep in mind that PAE values are in a separate file than the structure. ChimeraX alphafold fetch gets both files.
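The cutoff discussion above can be sketched in plain Python. This is not the ChimeraX API, just a minimal illustration of the idea: AlphaFold models store pLDDT in the B-factor field, so hiding "spaghetti" amounts to partitioning residues at the suggested threshold of 50. The residue data here is hypothetical.

```python
# Minimal sketch (not the ChimeraX implementation): partition residues
# into confident vs. low-confidence ("spaghetti") sets by the pLDDT
# value stored in the B-factor field, using the cutoff of 50 that Tom
# suggested above.

PLDDT_CUTOFF = 50.0

def split_by_confidence(residues, cutoff=PLDDT_CUTOFF):
    """Partition (residue_id, plddt) pairs into (keep, hide) lists."""
    keep = [r for r, plddt in residues if plddt >= cutoff]
    hide = [r for r, plddt in residues if plddt < cutoff]
    return keep, hide

# Example: four residues, two below the cutoff
residues = [(1, 92.3), (2, 48.7), (3, 71.0), (4, 33.2)]
keep, hide = split_by_confidence(residues)
```

As noted in the discussion, a preset built on such a filter would also have to decide what to do about missing-structure pseudobonds and surfaces where residues are excised.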
Current thinking is we need more presets: both with and without spaghetti, 2 alphafold coloring schemes, and all the existing color schemes of both ribbons and surfaces. Elaine: do you really need a combinatorially expanded set of presets, when some of these differences would just be a single command? Eric: how about a modifier preset? Then still simple but less duplicative. To get a specific view, you'd use preset A followed by modifier preset B.
Phil: do you need any more specifics? Eric: do you need missing-structure pseudobonds where the spaghetti is hidden or excised? What about surfaces? Will think about the implementation details. Tom: I have various doubts: (a) getting rid of the low-pLDDT parts can remove pieces of helices, even buried cores, etc., and give weird results; (b) pLDDT coloring is useful for ribbons but probably not surfaces; I have not seen it used for surfaces. Others: omitting some of these options would simplify the preset situation.
Eric: I'll work up some possibilities and put it in a bundle for further feedback.
3/28/24
Scooter, Eric, Zach, Elaine; Andi Bueckle, Phil, Kristen, Darrell, Meghan, Bhinnata
Some discussion of Andi's scary (almost-real-looking) avatar since he is Zooming using the Vision Pro. The varying amounts of transparency and the unrealistic mouth movements are the giveaway.
Phil: Kristen had an interesting idea to output files for Mesoscape. Scooter: I talked to some people at VIZBI... planning a collaboration for publishing data to Schol-AR; the developer (Tyler) happens to also be a radiologist and may be helpful in evaluating our medical image stuff. Discussed maybe having a ChimeraX REST interface to publish to Schol-AR. Kristen and Phil: we may be interested in adding a connection to Schol-AR from NIH3D.
Scooter: Eric has a present for you, especially Darrell. (Zoom technical difficulties, disconnected, reconnected.) Eric demonstrates worms, everybody is enthusiastic. Darrell: would be fun to try 3D printing. Might need struts. Kristen: is this in other programs? Elaine: yes, Pymol, Chimera, etc. Phil: re NIH3D I also wanted to discuss the quick-submit workflows for alphafold... we need to decide which are the standard outputs: pLDDT and PAE domain coloring, ribbons and surfaces, maybe hide low-pLDDT parts? Transparency? Elaine: pLDDT coloring shows low confidence as red, or it could be gray in combination with PAE domain coloring. In ChimeraX you can specify all residues with bfactor (pLDDT is in the bfactor field) greater than some value.
Scooter: this all ties into what metadata we want to have in the GLTF output. Groups of residues predefined, e.g. "high pLDDT" or by chain ID or domain, etc. Phil: Darrell, maybe you can help us decide on the standard set of outputs. Darrell: should be useful and include ones that are harder for some people to generate on their own, which used to be the case for ESP coloring. We'll have to put our heads together and decide on an edited set. Kristen: maybe we could offer checkbox choices of which outputs the user wants.
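The hierarchy metadata idea above can be sketched concretely. In glTF 2.0, the file carries a flat `nodes` array, hierarchy is expressed by each node listing the indices of its children, and a human-readable label goes in each node's `name` field; predefined residue groups ("high pLDDT", chain IDs, etc.) would become named nodes. The group and member names below are hypothetical.

```python
# Sketch of how predefined residue groups could be encoded as a glTF
# node hierarchy. glTF keeps all nodes in one flat list; parent nodes
# reference children by index. Group labels live in the "name" field.

def build_gltf_hierarchy(groups):
    """groups: dict mapping a group name -> list of child node names."""
    nodes = []
    root_children = []
    for group_name, members in groups.items():
        member_indices = []
        for member in members:
            nodes.append({"name": member})          # leaf node per member
            member_indices.append(len(nodes) - 1)
        # group node pointing at its members by index
        nodes.append({"name": group_name, "children": member_indices})
        root_children.append(len(nodes) - 1)
    nodes.append({"name": "model", "children": root_children})  # root
    return {
        "asset": {"version": "2.0"},
        "scene": 0,
        "scenes": [{"nodes": [len(nodes) - 1]}],
        "nodes": nodes,
    }

gltf = build_gltf_hierarchy({"high pLDDT": ["chain A", "chain B"]})
```

A real exporter would also attach `mesh` indices to the leaf nodes; this sketch only shows the naming and parent/child wiring.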
Phil: ChimeraX gltf outputs look washed out when embedded into Microsoft documents (Powerpoint and Word). Due to their oversaturated lighting model, which is unlikely to be addressed by Microsoft, so we have to try to work around it. I can generate ChimeraX gltf files that look better after embedding in such documents by using different settings in ChimeraX (I have my own preset for this, which uses "color modify" to lighten the colors) but that may require yet another set of output options. Kristen: I hacked Powerpoint to get around it but it's not trivial. See this in the forum for the Microsoft lighting: https://answers.microsoft.com/en-us/msoffice/forum/all/3d-model-lighting-inside-powerpoint/d0c0c316-8019-4c25-b0f6-86500e512f91 ... The suggested solution is what I've done in the past to fix the lighting.
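The workaround Phil describes amounts to pre-lightening the model's colors so they survive Microsoft's oversaturated lighting. A minimal sketch of that idea, blending each RGB channel toward white (the 40% blend factor here is an assumption for illustration, not the value ChimeraX's "color modify" uses):

```python
# Sketch of pre-lightening colors before glTF export so they look
# right after Microsoft's lighting model darkens/saturates them.
# Assumption: a simple linear blend toward white approximates the
# effect of ChimeraX's "color modify" lightening.

def lighten(rgb, fraction=0.4):
    """Blend an (r, g, b) color (0-255 ints) toward white by `fraction`."""
    return tuple(round(c + (255 - c) * fraction) for c in rgb)

lighten((100, 100, 100))   # mid-gray -> (162, 162, 162)
lighten((255, 0, 0))       # pure red -> (255, 102, 102)
```

An NIH3D-side "optimize glTF for Microsoft documents" utility, as floated below, could apply a transform like this (or the Blender-based fix from the forum thread) to already-exported files.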
Meghan posted another link in the chat https://answers.microsoft.com/en-us/msoffice/forum/all/powerpoint-uses-gltf-but-doesnt-support/002d1f4e-061d-4ab6-a692-ed217945724a
Meghan: are there ways to fix the gltf after it's output? Blender, but having to download Blender is another barrier. Kristen: maybe NIH3D could have an "optimize gltf for Microsoft documents" service that runs our own Blender. Phil, Darrell: it may be a useful utility. Darrell: do gltf outputs from ChimeraX include lights? Kristen: I just checked, and no, these gltf files do not include lights or a camera.
Darrell: we might also look at providing U3D which can be embedded in PDFs. Greg: it hasn't been used much. Darrell: probably because it is rather difficult to generate. Meghan posted this link in the chat https://helpx.adobe.com/acrobat/using/adding-3d-models-pdfs-acrobat.html