Getting Hounsfield or Gray Value data from an image stack

I am currently attempting to access the HU data directly from Python. I found the get_voxel_buffer() method in the mimics.ImageData class, which returns a multi-dimensional memoryview of gray values. I planned to convert these to HU next; however, the returned object is quite difficult to convert to a convenient form (a numpy array would be preferred). I thought I’d ask the group while I continue trying to solve the problem on my end. I can return a 1D list of integers, but building an array that allows for fast processing keeps eluding me. To be clear, this is the code I’m running to get to a list.

import mimics
import numpy as np

images = mimics.data.images.get_active()
buffer = images.get_voxel_buffer()   # multi-dimensional memoryview of gray values
casted = buffer.cast('b').cast('i')  # flatten to bytes, then re-view as ints
lst = casted.tolist()                # 1D Python list of gray values

I think part of the problem is that the obj attribute of buffer is None; I’m guessing that hinders my ability to access the underlying data easily. I’ve tried everything I could think of and most everything I could find online. The iteration process below using grouper technically works; it just takes a long, long, long, long time.

import mimics
import numpy as np
from itertools import zip_longest

use_actual_array = False   # toggle between the real voxel buffer and a small toy array
images = mimics.data.images.get_active()
buffer = images.get_voxel_buffer()
flat_list = buffer.cast('b').cast(buffer.format)   # 1D memoryview over the same data
test_arr = np.empty(shape=buffer.shape)

def grouper(n, iterable, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx
    args = [iter(iterable)] * n
    return zip_longest(fillvalue=fillvalue, *args)

if use_actual_array:
    arr = buffer
else:
    # small toy array so the reshaping logic can be checked quickly
    arr = np.array(range(2 * 3 * 4)).reshape((4, 3, 2))
    flat_list = arr.flatten().tolist()
    test_arr = np.zeros_like(arr)

# rebuild the 3D array one row (last axis) at a time; correct, but very slow over the listener
for row_idx, row in enumerate(grouper(arr.shape[-1], flat_list)):
    test_arr[row_idx // arr.shape[1], row_idx % arr.shape[1]] = row
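
For reference, this is the one-step version of what I’m ultimately after. It’s just a sketch: it assumes tolist() completes over the listener and that the buffer is C-contiguous, so a plain reshape restores the original slice/row/column order.

import numpy as np

# sketch: pull the whole buffer across as one flat list, then let numpy reshape it
flat = buffer.cast('b').cast(buffer.format).tolist()   # 1D list of gray values
gv_arr = np.array(flat).reshape(tuple(buffer.shape))   # back to the buffer's 3D shape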

Hello Nathaniel,

You can easily transform the memoryview into a numpy array with numpy's asarray function.

Here is an example of the code I use:

import mimics
import numpy as np

im = mimics.data.images["fixed"]   # select the image named "fixed"
mem_view = im.get_voxel_buffer()   # get the voxel buffer as a memoryview
buffer = np.asarray(mem_view)      # convert the memoryview into a numpy array

print(buffer)

Regarding the conversion, we have the built-in mimics functions mimics.segment.GV2HU() and mimics.segment.HU2GV() that you can use to convert between HU and Gray Value if needed.
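
For example, a minimal sketch assuming both functions take a single numeric value (the exact signatures are in the scripting API reference):

import mimics

gv = 1500                            # an example gray value
hu = mimics.segment.GV2HU(gv)        # gray value -> Hounsfield units
gv_back = mimics.segment.HU2GV(hu)   # and back again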

Hope this helps.

Clément

I had planned on using the conversion functions once I got to that point.

I tried using np.asarray (the first thing I tried yesterday) and I get the error below.
I’m using numpy version 1.18.4 with Python 3.7.6.

arr = np.asarray(buffer)
Traceback (most recent call last):
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\IPython\core\interactiveshell.py", line 3331, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-92-3c9f6e397906>", line 1, in <module>
    arr = np.asarray(buffer)
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\numpy\core\_asarray.py", line 85, in asarray
    return array(a, dtype, copy=False, order=order)
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\rpyc\core\netref.py", line 220, in method
    return syncreq(_self, consts.HANDLE_CALLATTR, name, args, kwargs)
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\rpyc\core\netref.py", line 75, in syncreq
    return conn.sync_request(handler, proxy, *args)
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\rpyc\core\protocol.py", line 471, in sync_request
    return self.async_request(handler, *args, timeout=timeout).value
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\rpyc\core\async_.py", line 97, in value
    raise self._obj
_get_exception_class.<locals>.Derived: multi-dimensional sub-views are not implemented
========= Remote Traceback (1) =========
Traceback (most recent call last):
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\rpyc\core\protocol.py", line 329, in _dispatch_request
    res = self._HANDLERS[handler](self, *args)
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\rpyc\core\protocol.py", line 603, in _handle_callattr
    return self._handle_call(obj, args, kwargs)
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\rpyc\core\protocol.py", line 590, in _handle_call
    return obj(*args, **dict(kwargs))
NotImplementedError: multi-dimensional sub-views are not implemented

I am currently looking at updating rpyc to see if that helps. I’m using 4.0.2 and the latest version is 4.1.5.

Well, I just learned about the new instructions for installing the mimics wheel from your Python installation instructions. I upgraded to the Mimics 23 wheel and to rpyc 4.1.5, and I’m still getting the same error.

Hey Nathaniel,

This issue might be linked to Conda and/or the use of the Script Listener. Are you using the Mimics Script Listener to run your code?
(In general the Script Listener has difficulty working with large objects like pixel buffers.)

I can reproduce the same error when running my code from an external IDE (through the Script Listener), while it works if run directly from the Mimics editor. That might do the trick for now.

We are going to investigate this further.

Regards,

Clément

I have been using the Script Listener to run from PyCharm. I made myself a workaround macro to save the data as a .npy file from Mimics without the listener, allowing me to import it quickly later from anywhere.
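
Loading the exported array back in later is then just np.load (the path below is a placeholder):

import numpy as np

hu_data = np.load(r'C:\path\to\my_project_HU.npy')   # placeholder path to the exported file
print(hu_data.shape, hu_data.dtype)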

I can send y’all a YAML file with my conda environment, if you would like! I can’t upload it here, but I suppose I could just paste it in as text.

I’m encountering this same error when I try to get the points and triangles using part.get_triangles().

import numpy as np

points, tris = part.get_triangles()   # 'part' is an existing part object
points_arr = np.asarray(points)

Traceback (most recent call last):
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\IPython\core\interactiveshell.py", line 3343, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-76-a25dd9064df0>", line 1, in <module>
    np.asarray(points)
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\numpy\core\_asarray.py", line 85, in asarray
    return array(a, dtype, copy=False, order=order)
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\rpyc\core\netref.py", line 274, in method
    return syncreq(_self, consts.HANDLE_CALLATTR, name, args, kwargs)
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\rpyc\core\netref.py", line 76, in syncreq
    return conn.sync_request(handler, proxy, *args)
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\rpyc\core\protocol.py", line 469, in sync_request
    return self.async_request(handler, *args, timeout=timeout).value
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\rpyc\core\async_.py", line 102, in value
    raise self._obj
_get_exception_class.<locals>.Derived: multi-dimensional sub-views are not implemented
========= Remote Traceback (1) =========
Traceback (most recent call last):
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\rpyc\core\protocol.py", line 320, in _dispatch_request
    res = self._HANDLERS[handler](self, *args)
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\rpyc\core\protocol.py", line 619, in _handle_callattr
    return self._handle_call(obj, args, kwargs)
  File "C:\Users\Npyle1\.conda\envs\three_mat\lib\site-packages\rpyc\core\protocol.py", line 593, in _handle_call
    return obj(*args, **dict(kwargs))
NotImplementedError: multi-dimensional sub-views are not implemented
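
The same flatten-first idea I use for the voxel buffer seems like it should apply here too. This is just a sketch, assuming the returned objects expose tolist() the way the voxel buffer's memoryview does:

import numpy as np

points, tris = part.get_triangles()
# pull the data across as plain Python lists first (slow over the listener,
# but it avoids asking numpy for multi-dimensional sub-views)
points_arr = np.array(points.tolist(), dtype=float)
tris_arr = np.array(tris.tolist(), dtype=int)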

Hello,

This is indeed similar to what happens when you try to get the voxels from an image: the same np.asarray() call is involved.

We will continue to look into this and see if we can make it work better from an external IDE.

Regards,

Clément


I’ll note that I am able to get points and triangles from an ssm/part in 3-matic.

Someone at Materialise asked me to post my workaround script so here goes:

I have this saved in its own .py file and run the script from Mimics. IT DOES NOT work from an external IDE. I export to .npy files here for brevity.

import mimics

from tools import np_matic

np_matic.import_numpy_prep()
import numpy as np
from pathlib import Path

def get_active_image_hu_data():
    imgs = mimics.data.images.get_active()
    dicom_tags = imgs.get_dicom_tags()
    for tag in dicom_tags.values():
        if tag.description == 'Pixel Spacing':
            pixel_spacing = np.array(
                [float(val) for val in tag.value.split('\\')])  # center-center distance of pixels
        elif tag.description == 'Slice Thickness':
            slice_thickness = float(tag.value)  # vertical center-center distance of pixels
        elif tag.description == 'Image Orientation (Patient)':
            orientation = tag.value.split('\\')
            row_cosine = np.array(orientation[:3], dtype=float)
            col_cosine = np.array(orientation[-3:], dtype=float)
        elif tag.description == 'Rescale Slope':
            rescale_slope = int(tag.value)
            print(rescale_slope)
        elif tag.description == 'Rescale Intercept':
            rescale_intercept = int(tag.value)
            print(rescale_intercept)
        else:
            pass
    try:
        img_gv = np.asarray(imgs.get_voxel_buffer(), dtype=np.uint16)
    except (MemoryError, NotImplementedError):
        raise Warning('This script will only work inside Mimics due to issues with their memory buffer')
    else:
        print('3D image array exported')
        img_hu = img_gv * rescale_slope + rescale_intercept  # standard DICOM rescale: HU = slope * GV + intercept

    origin = imgs.get_voxel_center([0, 0, 0])
    return img_hu, (pixel_spacing, slice_thickness, row_cosine, col_cosine, origin)


hu_data, spatial_data = get_active_image_hu_data()
# cortical_mask = np.asarray(mimics.data.masks.find('Cortical Section').get_voxel_buffer(), dtype=np.bool_)
# print(cortical_mask.shape)
# print(cortical_mask.sum())
# cortical = hu_data[cortical_mask]
project_path = Path(mimics.file.get_project_information().project_path)
output_path = project_path.parent / (project_path.stem + '_HU.npy')
np.save(output_path, hu_data)
print(f'HU array saved to {output_path}')

Here is np_matic. It helps numpy import correctly when you’re in Mimics and using conda. It may not be strictly necessary anymore, as I haven’t tried running without it in a few years.

import os
import sys

def import_numpy_prep():
    env_p = sys.prefix  # path to the env
    print("Env. path: {}".format(env_p))

    new_p = ''
    for extra_p in (r"Library\mingw-w64\bin",
                    r"Library\usr\bin",
                    r"Library\bin",
                    r"Scripts",
                    r"bin"):
        new_p += os.path.join(env_p, extra_p) + ';'

    os.environ["PATH"] = new_p + os.environ["PATH"]  # set it for Python
    os.putenv("PATH", os.environ["PATH"])  # push it at the OS level

if __name__ == '__main__':
    import_numpy_prep()
