Abstract
This research investigates emotion recognition through a method that integrates computer vision and Wi-Fi sensing. By monitoring Wi-Fi channel state information (CSI) and analyzing its variations, and by employing vision techniques to capture facial expressions, we aim to distinguish and understand human emotions in various contexts. Given the multimodal nature of human emotional expression, we propose an approach that combines the vision and Wi-Fi modalities. In addition, we exploit the link between the two modalities for improved emotion recognition through a Multi-Source Learning (MSL) approach inspired by Multi-Task Learning.

Key Words: emotion recognition, channel state information, vision, dataset, multi-source learning
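The coupling between the CSI and vision streams described above can be illustrated with a small model sketch. The following is a minimal, hypothetical PyTorch example of a multi-source setup in which each modality has its own encoder but both feed a shared emotion classifier; the layer sizes, feature dimensions, and loss weighting are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (hypothetical names and dimensions) of a multi-source model:
# a CSI encoder and a vision encoder share one emotion-classification head,
# so both sources supervise the same representation (an MSL-style coupling).
import torch
import torch.nn as nn

class MultiSourceEmotionNet(nn.Module):
    def __init__(self, csi_dim=270, img_feat_dim=512, hidden_dim=128, num_emotions=6):
        super().__init__()
        # Encoder for flattened Wi-Fi CSI windows.
        self.csi_encoder = nn.Sequential(
            nn.Linear(csi_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Encoder for precomputed facial-expression feature vectors.
        self.vision_encoder = nn.Sequential(
            nn.Linear(img_feat_dim, hidden_dim), nn.ReLU(),
        )
        # Shared classifier applied to both modality embeddings.
        self.shared_head = nn.Linear(hidden_dim, num_emotions)

    def forward(self, csi, img_feat):
        z_csi = self.csi_encoder(csi)
        z_img = self.vision_encoder(img_feat)
        return self.shared_head(z_csi), self.shared_head(z_img)

# Example joint objective: average the per-source cross-entropy losses.
model = MultiSourceEmotionNet()
csi = torch.randn(8, 270)            # batch of CSI windows
img = torch.randn(8, 512)            # batch of facial-feature vectors
labels = torch.randint(0, 6, (8,))   # emotion labels
logits_csi, logits_img = model(csi, img)
loss = 0.5 * (nn.functional.cross_entropy(logits_csi, labels)
              + nn.functional.cross_entropy(logits_img, labels))
```

Sharing the classifier head is one simple way to let the two sources regularize each other; the actual fusion and weighting scheme used in the paper may differ.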