Abstract
We propose to develop image fusion algorithms and an architecture for enhanced deep learning and analysis of large data sets. Typically, images captured from different perspectives, with different sensor types, at different frequencies, and so on must be considered separately and interpreted by human operators. Image fusion techniques allow these different forms of sensor information to be combined into a single data feed that a neural network can interpret and learn from. This increases the accuracy of neural network classification and improves effectiveness under suboptimal conditions, such as obstructed or malfunctioning sensors. Another disadvantage of current deep learning techniques is that they often require massive datasets to train to an acceptable level of accuracy, especially when a problem involves potentially thousands of classification categories. Increasing the size of the dataset sharply increases training time, even on relatively simple neural network architectures. In protection scenarios, where new classes of threats can emerge frequently, it is unacceptable to take the security system offline for long periods in order to train it to identify new threats.
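The combination of multiple sensor streams into a single data feed described above can be sketched as an early (channel-level) fusion step. This is an illustrative sketch, not the authors' actual algorithm; the sensor modalities, shapes, and function names below are assumptions chosen for the example.

```python
import numpy as np

def fuse_channels(visible, infrared):
    """Concatenate registered per-sensor images (H, W, C) along the
    channel axis, producing one (H, W, C1 + C2) array that a single
    neural network can consume as its input feed."""
    if visible.shape[:2] != infrared.shape[:2]:
        raise ValueError("sensor images must be registered to the same grid")
    return np.concatenate([visible, infrared], axis=-1)

# Hypothetical example: an RGB camera frame and a single-channel
# thermal frame of the same scene.
visible = np.random.rand(64, 64, 3)   # RGB camera frame
infrared = np.random.rand(64, 64, 1)  # thermal (infrared) frame
fused = fuse_channels(visible, infrared)
print(fused.shape)  # (64, 64, 4)
```

A fixed-shape fused input also gives a simple way to handle the degraded conditions mentioned in the abstract: an obstructed or malfunctioning sensor's channels can be zero-masked, so the network still receives an input of the expected shape.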
International Journal of Engineering and Advanced Technology