Abstract
A new framework for the fusion of 2-D images based on their multiscale edges is described in this paper. The new method uses the multiscale edge representation of images proposed by Mallat and Hwang. The input images are fused using their multiscale edges only. Two different algorithms are given for fusing the point representations and the chain representations of the multiscale edges (wavelet transform modulus maxima). The chain representation has been found to provide numerous new alternatives for image fusion, since edge graph fusion techniques can be employed to combine the images. The new framework encompasses different levels of image fusion in the wavelet domain, i.e., the pixel and feature levels.
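The core idea of modulus-maxima fusion can be illustrated with a minimal sketch: at each location, keep the sample from whichever input image has the stronger local edge response. The sketch below is an assumption-laden simplification — it uses the image gradient modulus as a stand-in for the wavelet transform modulus of Mallat and Hwang, and `fuse_by_modulus_maxima` is a hypothetical helper, not the paper's algorithm.

```python
import numpy as np

def fuse_by_modulus_maxima(a, b):
    """Toy pixel-level fusion: at each pixel, keep the sample from the
    image whose local edge strength is larger. The gradient modulus is
    used here as a crude stand-in for the wavelet transform modulus."""
    def modulus(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)  # edge strength per pixel
    mask = modulus(a) >= modulus(b)
    return np.where(mask, a, b)

# Two toy 2-D images whose edges lie in different places.
left = np.zeros((8, 8)); left[:, :4] = 1.0  # vertical edge
top = np.zeros((8, 8)); top[:4, :] = 1.0    # horizontal edge
fused = fuse_by_modulus_maxima(left, top)
```

Every fused sample is copied from one of the two inputs, so the result preserves the stronger edge at each location; the paper's point- and chain-based algorithms operate on the sparse maxima representation rather than densely on pixels.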