PROJECT TITLE : Joint detection and matching of feature points in multimodal images

ABSTRACT: In this work, we propose a novel Convolutional Neural Network (CNN) architecture for the joint detection and matching of feature points in images acquired by different sensors, using a single forward pass. In contrast to traditional methods such as SIFT, in which the detection phase precedes and is distinct from descriptor computation, the resulting feature detector is tightly coupled with the feature descriptor. Our approach uses two CNN subnetworks: the first is a Siamese CNN, and the second consists of dual CNNs that do not share weights. This makes it possible to process the joint and disjoint cues in the multimodal image patches simultaneously and to fuse them together. Experiments demonstrate that the proposed method outperforms state-of-the-art schemes on multiple datasets of multimodal images. It is also shown to provide repeatable feature point detections across images acquired with different sensors, outperforming state-of-the-art detectors. To the best of our knowledge, this is the first unified method for the detection and matching of images of this kind.
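To illustrate the two-subnetwork idea described above, here is a minimal PyTorch sketch, not the authors' actual implementation: a shared-weight (Siamese) CNN extracts joint cues from both modalities, two separate CNNs extract modality-specific (disjoint) cues, and the fused features feed a descriptor head and a detection head in one forward pass. All module names, layer sizes, and the `desc_dim` parameter are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Tiny patch encoder (illustrative; layer sizes are assumptions)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to a fixed-size feature
        )

    def forward(self, x):
        return self.net(x).flatten(1)  # (batch, 32)

class JointDetectDescribe(nn.Module):
    """Sketch of joint detection + description with two subnetworks."""
    def __init__(self, desc_dim=64):
        super().__init__()
        # Siamese subnetwork: one CNN applied to both patches (shared weights)
        self.siamese = SmallCNN()
        # Dual subnetwork: separate CNNs per modality (no weight sharing)
        self.branch_a = SmallCNN()
        self.branch_b = SmallCNN()
        # Heads over the fused (joint + disjoint) features
        self.descriptor = nn.Linear(64, desc_dim)
        self.detector = nn.Linear(64, 1)

    def forward(self, patch_a, patch_b):
        # Joint cues: same weights see both modalities
        joint_a = self.siamese(patch_a)
        joint_b = self.siamese(patch_b)
        # Disjoint cues: modality-specific weights
        dis_a = self.branch_a(patch_a)
        dis_b = self.branch_b(patch_b)
        # Fuse by concatenation (one plausible fusion choice)
        fused_a = torch.cat([joint_a, dis_a], dim=1)
        fused_b = torch.cat([joint_b, dis_b], dim=1)
        # One forward pass yields descriptors and detection scores
        return (self.descriptor(fused_a), self.descriptor(fused_b),
                self.detector(fused_a), self.detector(fused_b))

model = JointDetectDescribe()
a = torch.randn(2, 1, 32, 32)  # batch of patches from sensor A
b = torch.randn(2, 1, 32, 32)  # corresponding patches from sensor B
desc_a, desc_b, score_a, score_b = model(a, b)
```

Matching can then be done by comparing `desc_a` and `desc_b` (e.g. by L2 distance), while the detection scores rank candidate feature points; both come from the same forward pass, reflecting the tight coupling of detector and descriptor described in the abstract.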