Opened 7 years ago

Last modified 7 years ago

#1762 assigned enhancement

Blend multichannel volume data on GPU

Reported by: Tom Goddard
Owned by: Tom Goddard
Priority: moderate
Milestone:
Component: Volume Data
Version:
Keywords:
Cc:
Blocked By:
Blocking:
Notify when closed:
Platform: all
Project: ChimeraX

Description

Would like to be able to blend multichannel volume data by keeping each volume on the GPU along with a 1D colormap texture for each volume. This already works for single-channel colormapping using the volume command colormapOnGpu true option, and in some cases, such as airways in lung CT scans, it gives a much better appearance because map values are interpolated rather than colors.
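The quality difference comes from the order of interpolation and colormapping. A toy sketch (the step colormap below is hypothetical, not a ChimeraX colormap): interpolating raw map values and then applying the colormap preserves sharp transfer-function edges, while interpolating already-colormapped colors smears them.

```python
import numpy as np

def colormap(value):
    """Toy step colormap: 0 below a 0.5 threshold, 1 at or above it."""
    return np.where(value >= 0.5, 1.0, 0.0)

# Two neighboring voxel values and a sample point halfway between them.
v0, v1, t = 0.0, 1.0, 0.5

# Interpolate data values first, then colormap (colormapOnGpu true):
# the threshold stays crisp.
value_then_color = colormap(v0 * (1 - t) + v1 * t)

# Colormap first, then interpolate colors (CPU colormapping of planes):
# the threshold is smeared to an intermediate value.
color_then_value = colormap(v0) * (1 - t) + colormap(v1) * t

print(value_then_color, color_then_value)
```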

A difficulty is that if rendering is done in a single pass, then each texture needs to be bound to a texture unit. Each channel needs its map data and its colormap as textures, so 2 textures per channel. Some texture units are needed for shadows, OpenGL only guarantees a minimum of 16 texture units, and current high-end graphics cards have 32 (e.g. GTX 1080 on Windows). So only a limited number of channels (maybe 6) could be handled this way. If more channels are needed, a fallback to multipass or CPU rendering would be required. Handling segmentations as separate channels can easily produce more than 6 channels (although those might be better handled as one label-map segmentation).
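The texture unit budget above can be written out as simple arithmetic. The number of units reserved for shadows is an assumption here; the ticket only says "some".

```python
# Back-of-the-envelope texture unit budget, following the ticket's numbers.
# Assumes 2 textures per channel (map data + 1D colormap) and a few units
# reserved for shadows; the exact reserved count is an assumption.

def max_channels(total_units, reserved_for_shadows, textures_per_channel=2):
    return (total_units - reserved_for_shadows) // textures_per_channel

# OpenGL guarantees at least 16 fragment texture units.
print(max_channels(16, 4))   # 6 channels on minimal hardware
print(max_channels(32, 4))   # 14 channels on a GTX 1080-class card
```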

For handling any number of channels, a multipass approach might be better, where each channel is blended one pass at a time into a framebuffer texture with destination alpha, and the result is then rendered to the scene framebuffer. This may be much slower since it involves shader switches for every plane. A faster and simpler implementation could order the channels and blend them directly into the scene framebuffer. The appearance would not be the same: for instance, if 3 opaque channels overlapped in one plane, only the last channel would be visible rather than a blend of the colors. Many multichannel rendering systems only allow red/green/blue 3-channel rendering, and displaying more channels is often not useful since there are only 3 independent color components. So supporting single-pass GPU blending of up to 6 channels may be perfectly adequate. Rendering more than 6 channels could fall back to the CPU and log a warning.
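The appearance difference of ordered blending can be sketched with standard "over" compositing (names and shapes here are illustrative, not ChimeraX code): when opaque channels are composited in a fixed order, each fully opaque source replaces what came before it.

```python
import numpy as np

def blend_over(dst, src):
    """Standard alpha 'over' compositing: src over dst (premultiplied RGBA)."""
    a = src[..., 3:4]
    return src + dst * (1 - a)

# Three fully opaque single-color channels in one plane (premultiplied RGBA).
red   = np.array([1.0, 0.0, 0.0, 1.0])
green = np.array([0.0, 1.0, 0.0, 1.0])
blue  = np.array([0.0, 0.0, 1.0, 1.0])

result = np.zeros(4)
for channel in (red, green, blue):   # fixed channel order
    result = blend_over(result, channel)

# Only the last opaque channel survives, as the ticket notes.
print(result)
```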

Change History (2)

comment:1 by Tom Goddard, 7 years ago

Implemented blending on the GPU by creating blended textures from the source textures; the blended textures are then used for rendering. I did not try blending the source textures directly in one pass in the fragment shader, which would avoid the need for blended textures and give better per-fragment interpolation, but would also render more slowly. The implemented GPU blending handles 2D and 3D textures and colormapping on either the GPU or the CPU.
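A rough numpy sketch of the blended-texture idea (function and variable names are hypothetical, not the actual ChimeraX implementation): each source channel is colormapped through its lookup table, and the colormapped channels are summed into a single RGBA texture used for rendering.

```python
import numpy as np

def blend_channels(channel_data, colormaps):
    """Additively blend colormapped channels into one RGBA texture.
    channel_data: list of 3D arrays of normalized map values in [0, 1].
    colormaps: list of (256, 4) RGBA lookup tables, one per channel."""
    blended = np.zeros(channel_data[0].shape + (4,))
    for data, cmap in zip(channel_data, colormaps):
        indices = (data * 255).astype(np.uint8)   # colormap lookup
        blended += cmap[indices]                  # additive blend
    return np.clip(blended, 0.0, 1.0)

# Two tiny 2x2x2 channels with red and green ramp colormaps.
ramp = np.linspace(0, 1, 256)
red_cmap = np.stack([ramp, 0 * ramp, 0 * ramp, ramp], axis=1)
green_cmap = np.stack([0 * ramp, ramp, 0 * ramp, ramp], axis=1)
d = np.full((2, 2, 2), 1.0)
blended = blend_channels([d, d], [red_cmap, green_cmap])
print(blended[0, 0, 0])   # red + green blends to yellow
```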

I added a volume command option blendOnGpu (true | false, default false) that controls whether blending is done on the GPU or the CPU. I think blending on the GPU will always be faster and the better choice, but I have left it off by default until more testing shows it is not too buggy.

When changing colors for a single channel of a 3-channel 1024 x 1024 x 81 map (16-bit values, 160 Mbyte image), the GPU achieved about 20 frames/sec (iMac Radeon Pro 580) versus 1 frame/sec on the CPU.

comment:2 by Tom Goddard, 7 years ago

It would be worth trying per-fragment blending on the GPU, with the colormaps also on the GPU, to allow per-fragment interpolation of data values. That gave a much better quality appearance on CT scans showing lung airways. This rendering approach will no doubt give lower frame rates, since 2*nchannels texture lookups will be done per fragment instead of 1 texture lookup.
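The lookup count can be sketched as CPU-side pseudocode for the fragment shader (everything here is a stand-in: the `sample` callables model GPU texture fetches, and the names are hypothetical). Each channel costs one data-texture fetch plus one colormap fetch per fragment.

```python
def shade_fragment(texcoord, channels):
    """Per-fragment blending sketch.
    channels: list of (sample_data, sample_colormap) callables standing in
    for GPU texture fetches; returns summed RGBA and the lookup count."""
    color = [0.0, 0.0, 0.0, 0.0]
    lookups = 0
    for sample_data, sample_cmap in channels:
        value = sample_data(texcoord)   # lookup 1: map value
        rgba = sample_cmap(value)       # lookup 2: colormap entry
        lookups += 2
        color = [c + r for c, r in zip(color, rgba)]
    return color, lookups

# Three identical stand-in channels: constant data value 0.5, red ramp colormap.
channels = [(lambda t: 0.5, lambda v: [v, 0.0, 0.0, v])] * 3
color, lookups = shade_fragment((0.5, 0.5, 0.5), channels)
print(lookups)   # 2 * nchannels = 6 lookups per fragment
print(color)
```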
