Volume Rendering using VTK and Python

Introduction

Scientific visualization uses computer graphics, image processing, computer vision, and related methods to convert the symbolic and numerical information produced by computation and measurement in science, engineering, medicine, and other fields into intuitive graphical images displayed on screen.

Volume rendering is one technical direction within scientific visualization. Its goal is to display the interior details of a spatial volume in a single image. For example, imagine a house with furniture and appliances inside. Standing outside, you can only see the external shape; you cannot observe the layout or the objects within. Now suppose the house and everything in it were semi-transparent, so that you could see all the details at once. That is what volume rendering aims to achieve.

The basic flow of volume rendering is as follows.
[Figure: the basic volume-rendering pipeline]
To follow the volume-rendering work in this article, first download the brain image dataset (the NAC brain atlas referenced by the file paths below). We use a Jupyter Notebook to write and run the Python code.

Volume Rendering

We'll start by loading the dataset's label field and converting it to a type compatible with the volume-mapper class we will use. Then we define the color and opacity transfer functions, the most important part of the entire volume-rendering process, because they determine the appearance of the final render. Once these are defined, we demonstrate volume rendering with several of the different volume-mapper classes provided by VTK and show their results.

1. Imports

import os
import numpy
import vtk

The vtk package can be installed with pip, directly from a Jupyter Notebook cell if you like.

2. Helper-functions

Several helper functions are defined below:

(1) vtk_show()
This function allows passing a vtkRenderer object and obtaining the rendered PNG image output, which is compatible with IPython Notebook cell output.

from IPython.display import Image
def vtk_show(renderer, width=400, height=300):
    """
    Takes vtkRenderer instance and returns an IPython Image with the rendering.
    """
    renderWindow = vtk.vtkRenderWindow()
    renderWindow.SetOffScreenRendering(1)
    renderWindow.AddRenderer(renderer)
    renderWindow.SetSize(width, height)
    renderWindow.Render()
     
    windowToImageFilter = vtk.vtkWindowToImageFilter()
    windowToImageFilter.SetInput(renderWindow)
    windowToImageFilter.Update()
     
    writer = vtk.vtkPNGWriter()
    writer.SetWriteToMemory(1)
    writer.SetInputConnection(windowToImageFilter.GetOutputPort())
    writer.Write()
    data = bytes(memoryview(writer.GetResult()))  # buffer() in Python 2
    
    return Image(data)

(2) createDummyRenderer()
This function simply creates a vtkRenderer object, sets some basic properties, and configures the camera for the renderings in this article. Since we will render several different scenes, it is simpler to create a new renderer and scene for each case than to keep adding and removing actors; this also makes each render independent of the previous ones.

def createDummyRenderer():
    renderer = vtk.vtkRenderer()
    renderer.SetBackground(1.0, 1.0, 1.0)

    camera = renderer.MakeCamera()
    camera.SetPosition(-256, -256, 512)
    camera.SetFocalPoint(0.0, 0.0, 255.0)
    camera.SetViewAngle(30.0)
    camera.SetViewUp(0.46, -0.80, -0.38)
    renderer.SetActiveCamera(camera)
    
    return renderer

(3) Two lambda expressions
These are used to quickly convert a list or tuple into a numpy n-dimensional array, and vice versa:

l2n = lambda l: numpy.array(l)
n2l = lambda n: list(n)
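A quick usage sketch: the round-trip through numpy is what lets us do element-wise math on coordinate lists later on.

```python
import numpy

l2n = lambda l: numpy.array(l)
n2l = lambda n: list(n)

# list -> ndarray (element-wise math works) -> back to a list
result = n2l(l2n([1.0, 2.0, 3.0]) * 2.0)
# result is equal to [2.0, 4.0, 6.0]
```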

3. Options

Define some variables at the beginning of the Jupyter Notebook:

# Path to the .mha file
filenameSegmentation = "./nac_brain_atlas/brain_segmentation.mha"

# Path to colorfile.txt 
filenameColorfile = "./nac_brain_atlas/colorfile.txt"

# Opacity of the different volumes (between 0.0 and 1.0)
volOpacityDef = 0.25
  • filenameSegmentation: the location of the .mha file, which contains both the header and the binary image data in a single file.
  • filenameColorfile: the location of the accompanying colorfile.txt, essentially a CSV file that lists each index in the label field along with the name of the brain structure it represents and a recommended RGB color.
  • volOpacityDef: the baseline opacity for all rendered brain structures; it comes into play later, when we define the scalar opacity transfer function for the volume mapper.

4. Image-Data Input

We now load the label field from the provided .mha file.
VTK supports reading uncompressed MetaImage files in .mha format through the vtkMetaImageReader class.

reader = vtk.vtkMetaImageReader()
reader.SetFileName(filenameSegmentation)

castFilter = vtk.vtkImageCast()
castFilter.SetInputConnection(reader.GetOutputPort())
castFilter.SetOutputScalarTypeToUnsignedShort()
castFilter.Update()

imdataBrainSeg = castFilter.GetOutput()

After creating the reader, we create a new vtkImageCast object named castFilter and connect its input to the reader's output, which supplies it with the image. The key is then to call the method that sets the desired data type of the output image; since we want an unsigned short image, we call SetOutputScalarTypeToUnsignedShort. Calling Update on castFilter triggers the reader, reads the "raw" image data, and converts it to unsigned short. Finally, we retrieve the data via GetOutput and store it in the imdataBrainSeg variable.
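Conceptually, the cast is just a lossless data-type conversion. A small numpy analogy (not VTK code; the label values are made up): because label indices are non-negative integers, converting them to unsigned 16-bit loses nothing.

```python
import numpy

# Hypothetical label indices as read from disk
labels_raw = numpy.array([0, 1, 2, 150], dtype=numpy.int16)

# Equivalent in spirit to SetOutputScalarTypeToUnsignedShort
labels_uint16 = labels_raw.astype(numpy.uint16)
```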

5. Prep-work

(1) Transfer functions: Read colorfile into a dictionary
The appearance of any volume rendering in VTK boils down to defining transfer functions. These transfer functions tell the volume mapper what color and opacity to assign to each voxel of the output.

import csv
fid = open(filenameColorfile, "r")
reader = csv.reader(fid)

dictRGB = {}
for line in reader:
    dictRGB[int(line[0])] = [float(line[2])/255.0,
                             float(line[3])/255.0,
                             float(line[4])/255.0]
fid.close()

We loop through each entry read from the color file and build a dictionary, dictRGB, in which the label index acts as the key and the value is a list containing the RGB color assigned to that tissue.
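Here is a self-contained sketch of the same parsing logic, run against a made-up two-row excerpt instead of the real colorfile.txt (the row values are hypothetical):

```python
import csv
import io

# Hypothetical excerpt of colorfile.txt: index,name,R,G,B
sample = io.StringIO("0,background,0,0,0\n"
                     "2,white_matter,245,245,245\n")

dictRGB = {}
for line in csv.reader(sample):
    dictRGB[int(line[0])] = [float(line[2]) / 255.0,
                             float(line[3]) / 255.0,
                             float(line[4]) / 255.0]

# dictRGB maps label index -> [R, G, B] with components in [0.0, 1.0]
```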

(2) Transfer functions: Define color transfer function
Next we define a color transfer function. This is an instance of the vtkColorTransferFunction class and acts as a map from scalar voxel values to a given RGB color.

funcColor = vtk.vtkColorTransferFunction()

for idx in dictRGB.keys():
    funcColor.AddRGBPoint(idx, 
                          dictRGB[idx][0],
                          dictRGB[idx][1],
                          dictRGB[idx][2])

We first create a new vtkColorTransferFunction named funcColor, which implements the label-index-to-color map discussed earlier. We then simply iterate over all keys of the dictRGB dictionary created above, i.e. all label indices, and use the AddRGBPoint method to add a point with that label index and the matching RGB color.

(3) Transfer functions: Define scalar opacity transfer function
Next, a scalar opacity function needs to be defined. We use it simply to match each label with an opacity value. Since we have no predefined opacity values for the different tissues, we set all opacities to the single value stored in the variable volOpacityDef:

funcOpacityScalar = vtk.vtkPiecewiseFunction()

for idx in dictRGB.keys():
    funcOpacityScalar.AddPoint(idx, volOpacityDef if idx != 0 else 0.0)

Note that the opacity function is defined through a vtkPiecewiseFunction, since it is just a 1-to-1 mapping, and that new points are added with the AddPoint method. Also notice that the opacity of label 0 is set to 0.0: this makes the background, the empty space around the segmentation, completely invisible (otherwise we would see a giant black block around the render).
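The mapping the loop above builds can be written out as a plain dictionary; a small standalone sketch (the label indices here are made up):

```python
volOpacityDef = 0.25
label_indices = [0, 1, 2, 3]  # hypothetical label indices

# Same rule as the AddPoint loop: background (label 0) is invisible,
# every other tissue gets the baseline opacity.
opacity = {idx: (volOpacityDef if idx != 0 else 0.0)
           for idx in label_indices}
```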

(4) Transfer functions: Define gradient opacity transfer function
The scalar opacity function simply assigns an opacity value to each voxel intensity. However, this often results in a rather homogeneous-looking rendering in which the outer tissue dominates the image. This is where the gradient opacity function comes into play. With such a function we map the scalar spatial gradient (i.e. how much the scalar value changes through space) to an opacity multiplier. These gradients tend to be small while "traveling" through homogeneous regions and large when crossing between different tissues. So with a function like this we can make the "inside" of a tissue quite transparent while making the boundaries between tissues more prominent, giving a clearer picture of the entire volume.

funcOpacityGradient = vtk.vtkPiecewiseFunction()

funcOpacityGradient.AddPoint(1, 0.0)
funcOpacityGradient.AddPoint(5, 0.1)
funcOpacityGradient.AddPoint(100, 1.0)

This function is also defined through a vtkPiecewiseFunction object. With it, voxels with a gradient magnitude of 1 or below have their opacity multiplied by 0.0, voxels with a gradient between 1 and 5 get a multiplier between 0.0 and 0.1, and voxels with a gradient between 5 and 100 get a multiplier rising from 0.1 to 1.0. Above 100 the multiplier stays at 1.0.
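To see what these control points imply numerically, here is a small standalone sketch using numpy.interp, which performs the same linear interpolation with end-point clamping that vtkPiecewiseFunction uses by default:

```python
import numpy

# Control points of the gradient-opacity function above
xp = [1.0, 5.0, 100.0]   # gradient magnitudes
fp = [0.0, 0.1, 1.0]     # opacity multipliers

# Sample the function at a few gradient magnitudes
mult = numpy.interp([0.5, 3.0, 52.5, 200.0], xp, fp)
# gradient 0.5 clamps to 0.0, 3.0 -> 0.05, 52.5 -> 0.55, 200.0 clamps to 1.0
```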

(5) Volume Properties
The basic properties of the volume are defined below, and a vtkVolumeProperty object is created and configured, which "represents the general properties of the rendering volume":

propVolume = vtk.vtkVolumeProperty()
propVolume.ShadeOff()
propVolume.SetColor(funcColor)
propVolume.SetScalarOpacity(funcOpacityScalar)
propVolume.SetGradientOpacity(funcOpacityGradient)
propVolume.SetInterpolationTypeToLinear()

Through the SetColor, SetScalarOpacity and SetGradientOpacity methods, we assign the three transfer functions funcColor, funcOpacityScalar and funcOpacityGradient defined previously to the volume property. Finally, the interpolation type is set: here linear interpolation is selected via the SetInterpolationTypeToLinear method. Note that when dealing with discrete data, such as this label field, nearest-neighbor interpolation (SetInterpolationTypeToNearest) is usually the safer choice, because it does not introduce "new" interpolated values that match no tissue.
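A quick standalone illustration of why the interpolation type matters for label fields: linear interpolation between two labels can produce values that correspond to no tissue at all (the label values and sample position below are made up).

```python
import numpy

# Two neighbouring voxels carry discrete label values 2 and 5
labels = [2.0, 5.0]
pos = 0.4  # sample position between the two voxels

# Linear interpolation invents a value matching no tissue label
linear = numpy.interp(pos, [0.0, 1.0], labels)   # 3.2

# Nearest-neighbour snaps to an existing label
nearest = labels[int(round(pos))]                # 2.0
```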

6. Volume Rendering

(1) vtkVolumeRayCastMapper
vtkVolumeRayCastMapper is the "classic" volume-rendering class in VTK, described in its own documentation as "a slow but accurate mapper for volume rendering". As the name suggests, it performs volume rendering by casting rays through the volume, using a suitable ray-casting function of type vtkVolumeRayCastFunction. (Note that this class, and the SetInput API used below, belong to the older VTK 5/6 era; the class was removed in VTK 7, where mappers such as vtkGPUVolumeRayCastMapper, fed via SetInputData, take its place.)

VTK provides the following three ray-casting functions for use with vtkVolumeRayCastMapper:

  • vtkVolumeRayCastCompositeFunction: Composites along the ray according to the properties stored in the volume's vtkVolumeProperty.
  • vtkVolumeRayCastMIPFunction: Calculates the maximum value encountered along the ray.
  • vtkVolumeRayCastIsosurfaceFunction: Intersects a ray with an analytical isosurface in a scalar field.

In this example we select vtkVolumeRayCastCompositeFunction.

We first create a new object named funcRayCast from the vtkVolumeRayCastCompositeFunction class. We then configure it, via the SetCompositeMethodToClassifyFirst method, to classify each sample before interpolating. This keeps the labels intact; if we chose the SetCompositeMethodToInterpolateFirst method instead, the different labels would be blended together and we would get a muddled rendering.

funcRayCast = vtk.vtkVolumeRayCastCompositeFunction()
funcRayCast.SetCompositeMethodToClassifyFirst()

Next, we need to create a volume mapper. We create a new object named mapperVolume from the vtkVolumeRayCastMapper class and hand it the newly created ray-casting function funcRayCast through the SetVolumeRayCastFunction method. Finally, we feed mapperVolume the actual image data to be rendered, which is stored in imdataBrainSeg.

mapperVolume = vtk.vtkVolumeRayCastMapper()
mapperVolume.SetVolumeRayCastFunction(funcRayCast)
mapperVolume.SetInput(imdataBrainSeg)  # SetInputData in VTK >= 6

In the third step, we create a vtkVolume object named actorVolume, the equivalent of a vtkActor but for volumetric data. We set its mapper to mapperVolume and its property to the vtkVolumeProperty object propVolume created during the prep-work.

actorVolume = vtk.vtkVolume()
actorVolume.SetMapper(mapperVolume)
actorVolume.SetProperty(propVolume)

Finally, we complete the pre-rendering setup: we create a new renderer through the createDummyRenderer helper function defined earlier and add actorVolume to it.

renderer = createDummyRenderer()
renderer.AddActor(actorVolume)

Run:

vtk_show(renderer, 800, 800)

The output looks as follows:
[Figure: composite ray-cast rendering of the brain label field]
(2) Clipping Plane
The image above is not particularly clear, because the dataset is dense and the brain gyri press against one another. We can improve it with clipping.
The vtkVolumeRayCastMapper class, like the other volume mappers in VTK, supports clipping. To use clipping, we first need to create a plane with the appropriate position and orientation:

_origin = l2n(imdataBrainSeg.GetOrigin())
_spacing = l2n(imdataBrainSeg.GetSpacing())
_dims = l2n(imdataBrainSeg.GetDimensions())
_center = n2l(_origin+_spacing*(_dims/2.0))

We use the appropriate vtkImageData methods to retrieve the origin coordinates, spacing, and dimensions of the image data, and the l2n helper function to quickly convert those lists into numpy.ndarray objects so that we can do math with them. We then compute the center coordinates of the image data and store them in _center.

Then all we need to do is create a new vtkPlane, set its origin to the center of the image data, and set its normal to the negative Z axis.

planeClip = vtk.vtkPlane()
planeClip.SetOrigin(_center)
planeClip.SetNormal(0.0, 0.0, -1.0)
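To get a feel for what the plane does, here is a small numpy sketch of vtkPlane's implicit function, dot(normal, p - origin): points where it is positive lie on the side the normal faces, which, by VTK's convention, is the side the mapper keeps. The origin below is a hypothetical volume center.

```python
import numpy

origin = numpy.array([128.0, 128.0, 128.0])  # hypothetical volume centre
normal = numpy.array([0.0, 0.0, -1.0])       # same normal as planeClip


def side(p):
    """Signed distance of point p from the plane (vtkPlane's implicit function)."""
    return float(numpy.dot(normal, numpy.asarray(p, dtype=float) - origin))


# With normal (0, 0, -1), points below the centre plane are on the
# positive (kept) side; points above it are clipped away.
kept = side([128.0, 128.0, 100.0])     # positive
clipped = side([128.0, 128.0, 200.0])  # negative
```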

Next, the vtkVolumeRayCastMapper code from before is repeated; the only difference is that the newly created clipping plane is added to the volume mapper:

mapperVolume.AddClippingPlane(planeClip)

Run:

vtk_show(renderer, 800, 800)

The output is shown below:
[Figure: volume rendering clipped at the center plane]
(3) vtkVolumeTextureMapper2D
vtkVolumeTextureMapper2D is another volume mapper; it uses 2D texture-based volume rendering, which runs largely on the GPU and produces nicer-looking renders in less time.
The only difference between the code below and the previous example is that mapperVolume is now a vtkVolumeTextureMapper2D instead of a vtkVolumeRayCastMapper. In addition, we no longer define a ray-casting function, since that is not how this class operates. The rest of the code is the same as before.

mapperVolume = vtk.vtkVolumeTextureMapper2D()
mapperVolume.SetInput(imdataBrainSeg)
mapperVolume.AddClippingPlane(planeClip)

actorVolume = vtk.vtkVolume()
actorVolume.SetMapper(mapperVolume)
actorVolume.SetProperty(propVolume)

renderer = createDummyRenderer()
renderer.AddActor(actorVolume)

Run:

vtk_show(renderer, 800, 800)

The output looks as follows:
[Figure: texture-based rendering with vtkVolumeTextureMapper2D]

Origin blog.csdn.net/Luo_LA/article/details/128478610