Abstract

With the advent of AI, the Internet of Things (IoT) and human-centric computing (HCC), the world has witnessed a rapid proliferation of smart homes (SH). However, implementing a robust security system for SH residents remains a daunting task. Existing smart homes incorporate security provisions such as biometric verification, activity tracking, and facial recognition, but integrating multi-sensor devices, networking systems and data storage facilities escalates the lifecycle costs of these systems. Facial emotions convey important cues about behaviour and intent that can serve as non-invasive feedback for contextual threat analysis, and early mitigation of a hostile situation, such as a fight or an attempted intrusion, is vital for the safety of SH residents. This research proposes a real-time facial emotion-based security framework for smart homes, called iSecureHome, which uses a CMOS camera triggered by a passive infrared (PIR) motion sensor. The impact of chromatic and achromatic features on facial Emotion Recognition (ER), together with skin colour-based biases in current ER algorithms, is also investigated. iSecureHome employs a time-bound facial emotion decoding strategy based on EmoFusioNet, a deep fusion-based model, to predict security concerns in the vicinity of a given residence. EmoFusioNet combines stacked and late fusion methodologies to ensure a colour-neutral and equitable ER system. First, the stacked model synchronously extracts chromatic and achromatic facial features using deep CNNs; the predictions of these CNNs are then passed to the late fusion component, where a regularised multi-layer perceptron (R-MLP) is trained to fuse them and generate the final predictions. Experimental results suggest that the proposed fusion methodology augments the ER model, achieving final training and test accuracies of 98.48% and 98.43%, respectively. iSecureHome also comprises a multi-threaded decision-making framework that performs threat analysis efficiently and with minimal latency.
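The two-stage fusion described above can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the branch depths, layer sizes, dropout rate, and the assumption of seven emotion classes are all hypothetical; only the overall shape (a chromatic CNN and an achromatic CNN whose class-level predictions are concatenated and fused by a regularised MLP) follows the abstract.

```python
import torch
import torch.nn as nn

# Assumption for illustration: seven basic emotion classes.
NUM_CLASSES = 7


def make_branch(in_channels: int) -> nn.Module:
    """A small CNN mapping a face crop to per-class probabilities.

    One branch receives the chromatic (RGB) image, the other the
    achromatic (greyscale) image; the real networks are deeper.
    """
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, NUM_CLASSES),
        nn.Softmax(dim=1),
    )


class LateFusionER(nn.Module):
    """Late fusion: fuse branch *predictions*, not raw feature maps."""

    def __init__(self) -> None:
        super().__init__()
        self.chromatic = make_branch(in_channels=3)   # RGB input
        self.achromatic = make_branch(in_channels=1)  # greyscale input
        # Dropout stands in here for the R-MLP's regularisation.
        self.fusion = nn.Sequential(
            nn.Linear(2 * NUM_CLASSES, 32),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(32, NUM_CLASSES),
        )

    def forward(self, rgb: torch.Tensor, grey: torch.Tensor) -> torch.Tensor:
        # Concatenate the two branches' class predictions, then fuse.
        preds = torch.cat([self.chromatic(rgb), self.achromatic(grey)], dim=1)
        return self.fusion(preds)


model = LateFusionER().eval()
with torch.no_grad():
    out = model(torch.rand(2, 3, 48, 48), torch.rand(2, 1, 48, 48))
print(tuple(out.shape))  # (batch, NUM_CLASSES) -> (2, 7)
```

Fusing predictions rather than intermediate features keeps the two branches independently trainable, which is what allows the chromatic and achromatic pathways to be balanced against skin-colour bias before the fusion stage is trained.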
