As humans, we want to interact with a machine as we would with a person: in a way that understands us, advises us, and looks after us without human supervision. Despite highly effective logical reasoning, current systems lack empathy and an understanding of the user. By predicting a user's emotions, a system can identify their needs and cater to them more effectively. Emotion recognition in video and audio has many potential applications, including conversational agents, recommendation systems, smart homes, mental health care, virtual reality games, remote physical training, education, and car-hailing services. The aim of this project is to develop an automatic emotion detection system based on voice and facial expression. We propose a model that exploits contextual, multimodal information for emotion detection and recognition. If systems can understand emotions and respond appropriately to behavioral patterns, we can anticipate artificial agents becoming cognitive-consulting partners in our daily lives. The project can also be extended to make interactions more natural and better suited to handling complex situations.
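As one minimal illustration of the multimodal idea, the sketch below fuses an audio feature vector and a facial-expression feature vector by simple concatenation (late fusion) before a classifier. All dimensions, the emotion classes, and the untrained linear weights are hypothetical placeholders for illustration only, not the project's actual model or feature extractors.

```python
import numpy as np

# Hypothetical sizes (assumptions, not from the project):
AUDIO_DIM, FACE_DIM, N_EMOTIONS = 4, 6, 3  # e.g. happy, sad, neutral

rng = np.random.default_rng(0)

def fuse(audio_feat, face_feat):
    """Late fusion: concatenate per-modality feature vectors."""
    return np.concatenate([audio_feat, face_feat])

# Placeholder random weights standing in for a trained classifier.
W = rng.standard_normal((N_EMOTIONS, AUDIO_DIM + FACE_DIM))

def predict_emotion(audio_feat, face_feat):
    """Return a probability distribution over emotion classes."""
    logits = W @ fuse(audio_feat, face_feat)
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

probs = predict_emotion(rng.standard_normal(AUDIO_DIM),
                        rng.standard_normal(FACE_DIM))
```

In practice each modality would pass through its own feature extractor (e.g. a speech model and a face model), and the fusion and classification layers would be learned jointly; the concatenation step shown here is only the simplest fusion strategy.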