Abstract

We present a new technique for fusing an arbitrary number of aligned images into a single color or intensity image. We approach this fusion problem from the context of Multidimensional Scaling (MDS) and describe an algorithm that preserves the relative distances between pairs of pixel values in the input (vectors of measurements) as perceived differences in a color image. The two main advantages of our approach over existing techniques are that it can incorporate user constraints into the mapping process and can adaptively compress or exaggerate features in the input to make better use of the output's limited dynamic range. We demonstrate these benefits by showing applications in various scientific domains and comparing our algorithm to previously proposed techniques.
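The abstract only sketches the idea, but the core step it describes (embedding per-pixel measurement vectors into a 3-D color space so that distances between vectors are approximately preserved as color differences) can be illustrated with off-the-shelf metric MDS. The following Python sketch is an illustrative assumption, not the authors' algorithm: the function name fuse_images_mds, the random subsampling, and the nearest-neighbour color assignment are stand-ins, and it omits the paper's user constraints and adaptive compression/exaggeration of features.

import numpy as np
from scipy.spatial import cKDTree
from sklearn.manifold import MDS

def fuse_images_mds(images, n_samples=1000, random_state=0):
    """Fuse a list of aligned 2-D arrays (H, W) into one (H, W, 3) RGB image."""
    stack = np.stack(images, axis=-1).astype(float)   # (H, W, N): one N-vector per pixel
    h, w, n = stack.shape
    vectors = stack.reshape(-1, n)

    # Embedding every pixel directly is quadratic in the number of pixels,
    # so embed only a random subsample with metric MDS into 3 dimensions
    # (one output coordinate per color channel).
    rng = np.random.default_rng(random_state)
    idx = rng.choice(len(vectors), size=min(n_samples, len(vectors)), replace=False)
    mds = MDS(n_components=3, dissimilarity="euclidean", random_state=random_state)
    embedded = mds.fit_transform(vectors[idx])

    # Give every remaining pixel the color of its nearest sampled vector
    # (a crude stand-in for a full per-pixel optimization).
    _, nearest = cKDTree(vectors[idx]).query(vectors)
    colors = embedded[nearest]

    # Rescale each embedding axis to [0, 1] so the result can be shown as RGB.
    colors -= colors.min(axis=0)
    colors /= colors.max(axis=0) + 1e-12
    return colors.reshape(h, w, 3)

For example, fused = fuse_images_mds([band1, band2, band3, band4]) would map four registered measurement bands to a single RGB image; the subsample size trades embedding quality against MDS runtime.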
