Simpler alternatives to information theoretic similarity metrics for multimodal image alignment
Mutual information (MI) based methods for image registration enjoy great experimental success and are becoming widely used. However, they impose a large computational burden that limits their use, and many applications would benefit from a reduction of this load. Although the theoretical justification for these methods draws upon the stochastic concept of mutual information, in practice such methods actually seek the best alignment by maximizing a number that is (deterministically) computed from the two images. These methods thus optimize a fixed function, the "similarity metric," over different candidate alignments of the two images. Accordingly, we study the important features of the computationally complex MI similarity metric with the goal of distilling them into simpler surrogate functions that are easier to compute. More precisely, we show that maximizing the MI similarity metric is equivalent to minimizing a certain distance metric between equivalence classes of images, where images f and g are said to be equivalent if there exists a bijection φ such that f(x) = φ(g(x)) for all x. We then show how to design new similarity metrics for image alignment with this property. Although we preserve only this aspect of MI, our new metrics show equal alignment accuracy and similar robustness to noise, while significantly decreasing computation time. We conclude that even the few properties of MI preserved by our method suffice for accurate registration and may in fact be responsible for MI's success.
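To make the registration setting concrete, the sketch below (not the authors' implementation) estimates the MI similarity metric from a joint intensity histogram and exhaustively evaluates it over a small family of candidate alignments; the function names, the bin count, and the shift-only search space are assumptions chosen for brevity, and the surrogate metrics proposed in the paper are not reproduced here.

```python
import numpy as np

def mutual_information(f, g, bins=32):
    """Estimate the MI similarity metric between two overlapping images
    from their joint intensity histogram (illustrative sketch only)."""
    joint, _, _ = np.histogram2d(f.ravel(), g.ravel(), bins=bins)
    pxy = joint / joint.sum()                # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)      # marginal distribution of f
    py = pxy.sum(axis=0, keepdims=True)      # marginal distribution of g
    nz = pxy > 0                             # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_shift(fixed, moving, shifts=range(-5, 6)):
    """Toy alignment search: pick the horizontal shift of `moving`
    that maximizes the similarity metric against `fixed`."""
    scores = {s: mutual_information(fixed, np.roll(moving, s, axis=1))
              for s in shifts}
    return max(scores, key=scores.get)
```

The cost the abstract highlights is visible here: every candidate alignment requires rebuilding the joint histogram and re-evaluating the metric, which is what motivates replacing MI with cheaper surrogates that preserve its invariance to bijective intensity remappings.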