Opened 4 years ago

Closed 4 years ago

Last modified 4 years ago

#4661 closed enhancement (fixed)

Update to Python 3.9

Reported by: Tom Goddard Owned by: Tom Goddard
Priority: major Milestone:
Component: Core Version:
Keywords: Cc: pett, Greg Couch, Tristan Croll
Blocked By: Blocking:
Notify when closed: Platform: all
Project: ChimeraX

Description

For the ChimeraX 1.3 release we should probably update from Python 3.8 to Python 3.9. Python 3.8 came out in October 2019, more than a year and a half ago. Python 3.9 has been out since October 2020, 8 months ago, and is now at 3.9.5.

Attachments (1)

openmm-7.5.1-linux-py39_cuda112_1.tar.bz2 (16.4 MB ) - added by Tristan Croll 4 years ago.
Added by email2trac

Change History (32)

comment:1 by Tom Goddard, 4 years ago

Tried compiling ChimeraX with Python 3.9.5 on macOS 10.15.7.

Problems installing lineprofiler, pytables, openmm, and the old h5py needed by emdb_sff. Details follow.

A build with those 4 packages omitted succeeded, and molecule and map display worked.

PyPi lineprofiler failed to compile. Commented it out.

PyTables failed to compile. There are still no PyPi PyTables wheels for Python 3.9 for macOS and Windows, as described in a PyTables ticket from October 2020 (https://github.com/PyTables/PyTables/issues/823). Compiling from source via pip possibly failed because no hdf5 was installed. It looks like PyTables is no longer adequately maintained. Commented out the pip install of tables.

OpenMM failed to install. The build script tries to copy python3.8/site-packages. Will need a Python 3.9 OpenMM packaged up from conda-forge (https://anaconda.org/conda-forge/openmm/files). Commented out the openmm install.

The EMDB segmentation file format ChimeraX bundle emdb_sff depends on PyPi sfftk-rw, which depends on h5py version < 3. The current h5py version is 3.2.1 and has Python 3.9 builds, but version 2.10 is used and it only has Python 3.8 builds. Compiling from source failed. Commented out emdb_sff.

comment:2 by pett, 4 years ago

What goes wrong with compiling lineprofiler?

comment:3 by Tom Goddard, 4 years ago

We are using an archaic version of lineprofiler, 2.1.2, while the current version is 3.2.6, and that is probably the main issue. I think we should drop it if it is so poorly maintained that we cannot easily produce current macOS and Windows versions. Here is the prereqs/lineprofiler/Makefile comment that explains why this ancient version is used.

# pypi is missing binary version for macOS and Windows.
# PEP517 does not allow install from source wheels and
# setuptools-45.1.0 enforces PEP517 by default.
# Version 3.0.2 is built using cmake, and that is
# a bridge too far. Sticking with 2.1.2.

It is still true that only Linux binaries are provided on PyPi.

Here's the error building on macOS 10.15.7, missing gettimeofday().

$ cd prereqs/lineprofiler/
$ make install
tar zxf line_profiler-2.1.2.tar.gz -C /Users/goddard/ucsf/chimerax/build/tmp
# force _line_profiler.c to be regenerated by Cython
rm -f /Users/goddard/ucsf/chimerax/build/tmp/line_profiler-2.1.2/_line_profiler.c
cd /Users/goddard/ucsf/chimerax/build/tmp/line_profiler-2.1.2 && /Users/goddard/ucsf/chimerax/build/bin/python3.9 -I setup.py bdist_wheel
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.9-x86_64-3.9
copying line_profiler.py -> build/lib.macosx-10.9-x86_64-3.9
copying kernprof.py -> build/lib.macosx-10.9-x86_64-3.9
copying line_profiler_py35.py -> build/lib.macosx-10.9-x86_64-3.9
running build_ext
cythoning _line_profiler.pyx to _line_profiler.c
/Users/goddard/ucsf/chimerax/build/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /Users/goddard/ucsf/chimerax/build/tmp/line_profiler-2.1.2/_line_profiler.pyx

tree = Parsing.p_module(s, pxd, full_module_name)

building '_line_profiler' extension
creating build/temp.macosx-10.9-x86_64-3.9
gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -arch x86_64 -g -I/Users/goddard/ucsf/chimerax/build/Library/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c _line_profiler.c -o build/temp.macosx-10.9-x86_64-3.9/_line_profiler.o
gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -arch x86_64 -g -I/Users/goddard/ucsf/chimerax/build/Library/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c timers.c -o build/temp.macosx-10.9-x86_64-3.9/timers.o
timers.c:36:2: error: "This module requires gettimeofday() on non-Windows platforms!"
#error "This module requires gettimeofday() on non-Windows platforms!"


1 error generated.
error: command '/usr/bin/gcc' failed with exit code 1
make: *** [install] Error 1

comment:4 by pett, 4 years ago

I did _occasionally_ use line_profiler, but I guess I will have to soldier on. Unsurprising that 2.1.2 doesn't compile / isn't maintained. Looking at their issue tracker for the current version, they would _like_ to provide Windows/Mac wheels but just don't have the programmer wherewithal.

comment:5 by Greg Couch, 4 years ago

Line profiler is actively maintained; the latest version is 3.2.6 from a few weeks ago, and there are line_profiler binaries for Linux. There is a request by the line_profiler author for binaries for other platforms. The build environment has lots of extra requirements, so it might be some work. That is why we stuck with the older version. So we could do the work, make it a Linux-only option, or drop it.

comment:6 by Tom Goddard, 4 years ago

I submitted a request to update the EMDB segmentation file reader PyPi package sfftk-rw so that it can be installed with Python 3.9.

https://github.com/emdb-empiar/sfftk-rw/issues/6

comment:7 by Tom Goddard, 4 years ago

Paul Korir updated the sfftk-rw package and I tested in my ChimeraX Python 3.9 build that it works.

comment:8 by Tom Goddard, 4 years ago

I built pytables on macOS with Python 3.9 using pip install from the source bundle. First I had to install the Homebrew hdf5 and c-blosc packages. The resulting pytables libraries depended on /usr/local/opt Homebrew libraries, so I copied those into tables/.dylibs and used install_name_tool to fix the dependent library paths. This worked in the ChimeraX build with Python 3.9, saving and opening *.cmap files, including with compression. I created prereqs/pytables to do this build.
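For context, the dylib rewrite described above looks roughly like the following. This is a minimal sketch only: the Homebrew paths, the site-packages location, and the assumption that the compiled extensions are tables/*.so are illustrative, not the actual prereqs/pytables Makefile.

import glob, os, shutil, subprocess

site_packages = "build/lib/python3.9/site-packages"  # hypothetical install location
tables_dir = os.path.join(site_packages, "tables")
dylib_dir = os.path.join(tables_dir, ".dylibs")
os.makedirs(dylib_dir, exist_ok=True)

# Copy the Homebrew hdf5 and c-blosc dylibs next to the extensions, then point
# each extension at the bundled copy instead of the /usr/local/opt path it was
# linked against.
brew_libs = (glob.glob("/usr/local/opt/hdf5/lib/*.dylib")
             + glob.glob("/usr/local/opt/c-blosc/lib/*.dylib"))
for lib in brew_libs:
    bundled = shutil.copy(lib, dylib_dir)
    for ext in glob.glob(os.path.join(tables_dir, "*.so")):
        subprocess.run(["install_name_tool", "-change", lib,
                        "@loader_path/.dylibs/" + os.path.basename(bundled),
                        ext], check=True)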

comment:9 by Tom Goddard, 4 years ago

I tried Chris Gohlke's pytables Windows wheel (https://www.lfd.uci.edu/~gohlke/pythonlibs/#pytables) with ChimeraX and Python 3.9, and that worked. I made prereqs/pytables/Makefile use it.

comment:10 by Tom Goddard, 4 years ago

Resolution: fixed
Status: assigned → closed

Done.

Updated ChimeraX from Python 3.8.10 to 3.9.6. OpenMM was updated to 7.5.1 using CUDA 11.2 on Linux and Windows. PyTables uses a version built from source on Mac, Chris Gohlke's wheel on Windows, and the PyPi wheel on Linux.

in reply to:  11 ; comment:11 by Tristan Croll, 4 years ago

The attached OpenMM Linux build works for me with ISOLDE (the conda one won't even import in CentOS 7 since it requires GLIBCXX_3.4.20). I've attached an updated singularity recipe and build script to #3756.

openmm-7.5.1-linux-py39_cuda112_1.tar.bz2

by Tristan Croll, 4 years ago

Added by email2trac

comment:12 by Tristan Croll, 4 years ago

From Peter Eastman on the OpenMM GitHub issues thread:

The CUDA runtime compiler is redistributable. You could bundle a copy of it instead of relying on the installed one.

I tried this out on my CentOS system, with CUDA 11.0 installed and the OpenMM-7.5.1-py39-cuda112 build in ChimeraX. That combination alone failed pretty horribly - the nvcc approach requires cuda_runtime.h, which it seems Nvidia in their infinite wisdom failed to include in their RedHat CUDA repo. It would probably work OK for people using CUDA from an installer downloaded directly from Nvidia, but it's pretty fragile. Anyway, after I downloaded libnvrtc.so.11.2 and libnvrtc-builtins.so.11.2 and added them to the ChimeraX/lib directory, CUDA simulations started and ran just fine. Better than just fine, actually - both simulation startup and run performance were about 50% faster than with OpenCL.

comment:13 by Tom Goddard, 4 years ago

I replaced the OpenMM 7.5.1 in the Linux ChimeraX daily build with the one you have attached. Thanks for testing the previous version. I do not have a Linux system with an Nvidia graphics card to test.

Where did you download the CUDA 11.2 runtime libraries? Should we include these in Linux ChimeraX? How big are those libraries? What happens when the user updates to a future CUDA 11.3 -- will it cause problems?

in reply to:  15 comment:14 by Tristan Croll, 4 years ago

In CentOS 7, I got them from the cuda repo, which pulls from http://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-nvrtc-11-2-11.2.152-1.x86_64.rpm. Roughly 50 MB once decompressed. The Windows directory at the same site (http://developer.download.nvidia.com/compute/cuda/repos/windows/x86_64/) seems to only have JSON files - I guess you'd have to just download the CUDA toolkit and extract them from there. I don't think updates to CUDA will matter as long as they're within the same major version, but I can check (probably on Monday) by updating my system to CUDA 11.4.

in reply to:  16 ; comment:15 by Tristan Croll, 4 years ago

To be clear, these aren't the complete CUDA runtime libraries - just the runtime compiler. Interestingly enough, libnvrtc and libnvrtc-builtins don't depend on any other CUDA libraries at all:

ldd libnvrtc.so.11.2
linux-vdso.so.1 =>  (0x00007ffc77d74000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fa57c249000)
librt.so.1 => /lib64/librt.so.1 (0x00007fa57c041000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007fa57be3d000)
libm.so.6 => /lib64/libm.so.6 (0x00007fa57bb3b000)
libc.so.6 => /lib64/libc.so.6 (0x00007fa57b76d000)
/lib64/ld-linux-x86-64.so.2 (0x00007fa57f135000)

ldd libnvrtc-builtins.so.11.2
linux-vdso.so.1 =>  (0x00007ffd6fbf8000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f39f4224000)
libm.so.6 => /lib64/libm.so.6 (0x00007f39f3f22000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f39f3d0c000)
libc.so.6 => /lib64/libc.so.6 (0x00007f39f393e000)
/lib64/ld-linux-x86-64.so.2 (0x00007f39f4d03000)

in reply to:  17 ; comment:16 by Tristan Croll, 4 years ago

... which in turn reveals that libnvrtc-builtins isn't actually needed - everything still works fine if I remove it. The remaining libnvrtc is 44MB.

comment:17 by Tom Goddard, 4 years ago

So one possibility is that we include the CentOS 7 libnvrtc.so from CUDA 11.2 in ChimeraX. I understand that works well for you. But I am reluctant to do it because it seems like we are searching for a CUDA compatibility solution like blind men. What are the known best practices for using libraries that depend on CUDA? Does Nvidia expect users to have installed the exact minor version of CUDA that an application needs? Can different versions of CUDA be installed at the same time? If not, then requiring an exact match is obviously unworkable when multiple applications were compiled with different versions.

I have not spent any time researching CUDA compatibility. In the OpenMM ticket on Python 3.9 I have been trying to encourage the OpenMM project to document how they expect CUDA compatibility to work, since they distribute a CUDA-based library built against tons of different CUDA versions, but they do not seem interested. Until someone researches and figures out best practices for CUDA use, I am inclined not to muck around with trial and error in ChimeraX, for instance by including extra runtime libraries. Besides the OpenMM developers, I think you, Tristan, may have a strong interest in CUDA and could do a more thorough investigation. But if you do not have time, I can put it on my list. Since CUDA is an extremely small part of ChimeraX, it may be a long time before I get to researching this.

in reply to:  19 comment:18 by Tristan Croll, 4 years ago

From https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility:

"The CUDA Driver API has a versioned C-style ABI, which guarantees that applications that were running against an older driver (for example CUDA 3.2) will still run and function correctly against a modern driver (for example one shipped with CUDA 11.0). This is a stronger contract than an API guarantee - an application might need to change its source when recompiling against a newer SDK, but replacing the driver with a newer version will always work."

...

"

On the other hand, the CUDA runtime has not provided either source or binary compatibility guarantees. Newer major and minor versions of the CUDA runtime have frequently changed the exported symbols, including their version or even their availability, and the dynamic form of the library has its shared object name (.SONAME in Linux-based systems) change every minor version.

If your application dynamically links against the CUDA 10.2 runtime, it will only work in the presence of the dynamic 10.2 CUDA runtime on the target system."




This looks to be good news in the case of OpenMM. The only part of the CUDA runtime it links against is libnvrtc - all the other linked libraries are driver libraries. So Nvidia guarantees that anything compiled with libnvrtc.so.11.2 should work with any newer driver library.


comment:19 by Tom Goddard, 4 years ago

So it sounds like CUDA distinguishes runtime libraries from drivers and you need an exact match for runtime library minor version, but the corresponding driver version or a newer driver will work.

From this it sounds like the configuration you tried, the 11.2 runtime with the 11.0 driver, was not guaranteed to work but happened to work even though the driver is older than the runtime. And you said that without the 11.2 runtime it fails in a confusing way rather than with a simple link error, which does not make sense, since the runtime libnvrtc.so.N with the right N should not have been available.

When a user installs, say, CUDA 11.4, does it include the 11.2, 11.1, and 11.0 runtimes? Does it include the 10.2, 10.1, and 10.0 runtimes? Can the various runtimes all be installed simultaneously alongside the 11.4 driver and runtime?

You observed that the CUDA 11.2 runtime was much faster (even with the 11.0 driver that it is not guaranteed to work with). So what I am wondering is whether I can tell the user "To use ChimeraX OpenMM, install the CUDA 11.2 driver or newer," or do I have to say "Install the 11.2 driver or newer, and also the 11.2 runtime"? And will that keep all their own CUDA runtimes so all their CUDA apps using older CUDA runtimes will keep working?

comment:20 by Tom Goddard, 4 years ago

"...will that keep all their *old* CUDA runtimes..."

in reply to:  22 comment:21 by Tristan Croll, 4 years ago

My understanding is that if your program links against a CUDA runtime library, then you're generally expected to bundle that library with your distribution. In the case of OpenMM the only runtime library used is libnvrtc, and for their official distributions its provision is taken care of by conda. The version of libnvrtc used puts a floor on the required driver library version (the driver must be the same version as or newer than the runtime), but no ceiling.

in reply to:  23 ; comment:22 by goddard@…, 4 years ago

It sounds like there is some uncertainty about whether applications are supposed to bundle CUDA libraries. But if that is the case, then it would seem the OpenMM distribution should include the library. Maybe OpenMM effectively does that by specifying a Conda dependency. But for applications like ChimeraX that are not part of the Conda ecosystem it is a problem. So it seems to come back to the long-neglected problem that OpenMM is only distributed for Conda environments.

comment:23 by Tristan Croll, 4 years ago

As I discussed in https://github.com/openmm/openmm/issues/3174#issuecomment-882585598, if libnvrtc and libcudart are bundled in (~45 MB total), then it becomes quite straightforward to check whether the installed CUDA version is sufficient and to warn the user if not. Quick and dirty example Python code is below. I have suggested that it might be good to do something like this in OpenMM itself.

import ctypes

def version_error(required_major_version, required_minor_version):
    raise RuntimeError(f'CUDA driver is too old. Please update to at least '
                       f'{required_major_version}.{required_minor_version}')

# Ask the bundled runtime compiler (libnvrtc) which CUDA version it was built against.
nvrtc = ctypes.CDLL('libnvrtc.so')
major_rtc = ctypes.c_int32()
minor_rtc = ctypes.c_int32()
nvrtc.nvrtcVersion(ctypes.byref(major_rtc), ctypes.byref(minor_rtc))
major_rtc = major_rtc.value
minor_rtc = minor_rtc.value

# Ask the bundled CUDA runtime (libcudart) what CUDA version the installed driver supports.
cudart = ctypes.CDLL('libcudart.so')
cuda_ver = ctypes.c_int32()
err_code = cudart.cudaDriverGetVersion(ctypes.byref(cuda_ver))
if err_code == 35:  # cudaErrorInsufficientDriver
    version_error(major_rtc, minor_rtc)
elif err_code != 0:
    raise RuntimeError(f'Failed querying CUDA version. Is CUDA (>={major_rtc}.{minor_rtc}) installed correctly?')
cuda_ver = cuda_ver.value
# cudaDriverGetVersion() reports the version as 1000*major + 10*minor.
major_ver = cuda_ver // 1000
minor_ver = (cuda_ver - major_ver * 1000) // 10

# The driver must be at least as new as the runtime compiler. Compare as
# (major, minor) tuples so that, e.g., a 12.0 driver satisfies an 11.2 runtime.
if (major_ver, minor_ver) < (major_rtc, minor_rtc):
    version_error(major_rtc, minor_rtc)

in reply to:  25 comment:24 by goddard@…, 4 years ago

Yes, it would of course be good for OpenMM to check CUDA compatibility and to bundle CUDA libraries if that is the norm for distributing a CUDA-using library like OpenMM. It sounds like the CUDA dependency is already taken care of within the Conda environment, so the question comes down to the long-standing one of whether OpenMM is going to support any use besides Conda.


in reply to:  26 ; comment:25 by Tristan Croll, 4 years ago

My plan was to start looking into that this week. Got sidetracked today into driver hell (inherited an RTX2080 laptop whose touchpad insists it's actually a PS/2 mouse - which means Windows won't disable it when an external mouse is connected... annoyingly, after literally hours of frustration I still don't have a solution), but will definitely get into it properly tomorrow.

in reply to:  27 ; comment:26 by Tristan Croll, 4 years ago

Progress on the OpenMM PyPi front: successfully built an OpenMM wheel, installed it into ChimeraX, and ran a simulation with it (see https://github.com/openmm/openmm/issues/2871#issuecomment-883443633). Hoping to enlist their help to integrate these builds into their continuous integration pipelines... watch this space.

in reply to:  28 ; comment:27 by goddard@…, 4 years ago

Nice progress! It sounds like you eliminated the "simtk" module -- I guess now you intend to use just "import openmm". This seems like a bad idea, since it makes the PyPi OpenMM use different import statements than the Conda OpenMM.

When OpenMM wheels become available I will be happy to switch to using those in the ChimeraX build.

I guess you figured out a way to package the OpenMM shared libraries inside the OpenMM package and have the runtime linking find them. The runtime linking issues are different on Linux, Windows, and Mac, so that will need to be worked out on all platforms. Currently in ChimeraX I install OpenMM in an unpleasant way, with the OpenMM libraries in the application "lib" directory. That needs to change so that in the future we can distribute a ChimeraX wheel that can be used without having the ChimeraX application.


in reply to:  29 ; comment:28 by Tristan Croll, 4 years ago

I didn't eliminate simtk - that's something up-and-coming for OpenMM 7.6, and is already in the master branch. They've left a simtk stub behind, but it doesn't seem to be working properly - some of the imports failed. Will take that up with them.

The shared libraries are now placed in site-packages/openmm/lib (and I've also put the include files in site-packages/openmm/include). Their locations can be queried at runtime with:

from openmm import version
lib_dir = version.openmm_library_dir
inc_dir = version.openmm_include_dir

When/if this becomes standard, it would be great if these could be automatically queried by bundle_builder when the dependency is declared.
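For illustration only, here is roughly how a bundle's build script might consume those two attributes when compiling a C++ extension against the wheel's bundled OpenMM. Only version.openmm_library_dir and version.openmm_include_dir come from the comment above; the extension name, source file, and library name are assumptions.

from setuptools import Extension
from openmm import version

ext = Extension(
    "mybundle._openmm_force",                        # hypothetical extension module
    sources=["src/_openmm_force.cpp"],               # hypothetical source file
    include_dirs=[version.openmm_include_dir],       # headers shipped in the wheel
    library_dirs=[version.openmm_library_dir],       # shared libraries shipped in the wheel
    runtime_library_dirs=[version.openmm_library_dir],  # rpath so they are found at run time
    libraries=["OpenMM"],                            # assumes the usual libOpenMM shared library name
)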

in reply to:  30 ; comment:29 by goddard@…, 4 years ago

Dropping the simtk module, putting the shared libraries in the module, and including the header files in the distribution are great improvements.  Thanks!

in reply to:  31 ; comment:30 by Tristan Croll, 4 years ago

Weirdly, the backward-compatibility `from simtk.openmm...` imports only seem to break under some circumstances - I was getting tracebacks during ISOLDE's bundle initialization on ChimeraX startup, but doing the same imports in the shell works fine. Not a huge issue in the grand scheme of things, of course.

in reply to:  32 ; comment:31 by goddard@…, 4 years ago

If OpenMM 7.6 is introducing a top-level openmm Python package (import openmm), then I am all for changing our imports to use that instead of import simtk.openmm.
