Image registration using scikit-image

In this post I am going to show a very basic example of image registration, using medical rather than astronomical images just for fun.

Data

The initial image corresponds to Human HT29 cells, which are by nature fairly smooth and elliptical. Since I do not have two independently obtained images of the same sample, I am going to use just the one image. Let image1 be the original image, and construct image2 by applying a translation of 20 pixels along both axes, a rotation of 10 degrees, and a scale factor of 1.1. Let's treat these as the two images that we want to register to a common reference frame.

import numpy as np
import matplotlib.pyplot as plt
from skimage import img_as_float
from skimage.transform import rotate, rescale

# Read full image
image_ = plt.imread('human_cells.tif')

# Construct two images with a slight offset between them
image1 = img_as_float(image_[0:400, 0:400])
image2 = img_as_float(image_[20:420, 20:420])

# In addition, we apply some rotation and scale
# to the second image
image2 = rotate(image2, 10, cval=-1, order=3)
image2 = rescale(image2, 1.1, cval=-1,
                 mode='constant', preserve_range=True, order=3)

# Final crop to a common area
image1 = image1[50:350,50:350]   # Original image
image2 = image2[50:350,50:350]   # Transformed image
Original image (left) and transformed image (right) made by applying a translation, rotation and scale.

If we have a closer look at the images, we can see that their quality differs: in practice, the rotation and scale operations have introduced some smearing and noise.

Detail from the original and transformed images.

How we deal with this depends on the use case. If we have many images to register and stack, we may want to leave them as they are and assign weights during stacking, so that better-quality images carry more weight in the final stack. Otherwise, we can degrade the original image to the quality of the worst image, an operation that I will call PSF matching.
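As a rough sketch of the PSF-matching idea, assuming for illustration that the quality difference can be approximated by a Gaussian blur (the kernel width sigma_match below is a made-up value, not derived from the data):

```python
import numpy as np
from skimage.filters import gaussian

# Hypothetical example: degrade the sharper image with a Gaussian
# kernel so its effective PSF roughly matches the blurrier one.
# sigma_match is an illustrative value chosen by hand.
sigma_match = 1.2

rng = np.random.default_rng(0)
sharp = rng.random((100, 100))          # stand-in for the original image
matched = gaussian(sharp, sigma=sigma_match, preserve_range=True)

# Smoothing keeps the shape and value range of the input
print(matched.shape)  # (100, 100)
```

In practice the matching kernel would be estimated from the images themselves, e.g. from the widths of point-like features in each frame.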

Match features

The first thing we need to do is detect and extract features in both images. This is a crucial step that can be done in several ways and normally needs to be adapted to the type of images we are dealing with. For this example we use skimage.feature.ORB, which seems to be a good general-purpose algorithm. Once detected, we match the features so that we have pairs of corresponding points in both images.

from skimage.feature import ORB, match_descriptors, plot_matches

orb = ORB(n_keypoints=500, fast_threshold=0.05)
orb.detect_and_extract(image1)
keypoints1 = orb.keypoints
descriptors1 = orb.descriptors

orb.detect_and_extract(image2)
keypoints2 = orb.keypoints
descriptors2 = orb.descriptors

matches12 = match_descriptors(descriptors1, descriptors2, cross_check=True)

fig, ax = plt.subplots(1, 1, figsize=(10, 10))
plot_matches(ax, image1, image2, keypoints1, keypoints2, matches12, only_matches=True)
plt.axis('off')
plt.show()
Initial matches found between the two images.

A few matches are not correct. We use the RANSAC (RANdom SAmple Consensus) algorithm to do a bit of cleaning.

RANSAC is an iterative algorithm for the robust estimation of model parameters from a subset of inliers within the complete data set.

from skimage.transform import SimilarityTransform
from skimage.measure import ransac
from skimage.feature import plot_matches

# Select keypoints from the source (image to be registered)
# and target (reference image); the columns are swapped because
# skimage keypoints are (row, col) while transforms expect (x, y)
src = keypoints2[matches12[:, 1]][:, ::-1]
dst = keypoints1[matches12[:, 0]][:, ::-1]

model_robust, inliers = ransac((src, dst), SimilarityTransform,
                               min_samples=10, residual_threshold=1, max_trials=300)
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
plot_matches(ax, image1, image2, keypoints1, keypoints2, matches12[inliers])
ax.axis('off')
plt.show()
Matches after being cleaned using RANSAC.

Matches after being cleaned using RANSAC.
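Since we fitted a SimilarityTransform, we can read the recovered scale, rotation and translation back from the model and compare them with the transform we applied; they should come out close to 1.1, 10 degrees and 20 pixels, up to interpolation noise. The snippet below demonstrates this on synthetic point pairs rather than the image keypoints, just to show the API:

```python
import numpy as np
from skimage.measure import ransac
from skimage.transform import SimilarityTransform

# Synthetic stand-in for the matched keypoints: points mapped by a
# known similarity (scale 1.1, rotation 10 deg, translation (20, 20)).
true_tform = SimilarityTransform(scale=1.1, rotation=np.deg2rad(10),
                                 translation=(20, 20))
rng = np.random.default_rng(0)
src = rng.uniform(0, 300, size=(50, 2))
dst = true_tform(src)

model, inliers = ransac((src, dst), SimilarityTransform,
                        min_samples=10, residual_threshold=1,
                        max_trials=300)

# The fitted model exposes the estimated parameters directly
print(model.scale, np.rad2deg(model.rotation), model.translation)
```

On the real keypoints, the same attributes of model_robust give a quick sanity check that the registration found the transform we expect.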

Warping

Warping is the process in which we use the model derived in the step above to bring both images into the same reference frame. In this case I am going to warp image2 into the reference frame of image1.

import numpy as np
from skimage.transform import warp

image1_ = image1
output_shape = image1.shape

image2_ = warp(image2, model_robust.inverse, preserve_range=True,
               output_shape=output_shape, cval=-1)

# Mask out the regions filled with cval=-1 by the warp
image2_ = np.ma.array(image2_, mask=image2_ == -1)

Both images, original and warped, can be seen below:


Together with a detail:


This last image shows the ratio between the original image and the warped one:

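The ratio can be computed directly from the registered pair, along with a simple error metric such as the normalized RMSE. A minimal sketch on synthetic stand-in arrays (in the post these would be image1_ and the masked image2_):

```python
import numpy as np

# Illustrative stand-ins for the registered pair: b is a nearly
# identical copy of a with a small gain difference and some noise.
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 1.0, size=(300, 300))
b = a * 1.02 + rng.normal(0, 0.01, size=a.shape)

ratio = b / a                                        # per-pixel ratio
nrmse = np.sqrt(np.mean((a - b) ** 2)) / (a.max() - a.min())

print(float(ratio.mean()), float(nrmse))
```

A ratio close to 1 everywhere (apart from noise) indicates that the warp has aligned the images well.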
