The most important issues in developing an emotion-aware dialog system are how to recognize users' emotions and how to generate machine responses accordingly. However, studies on emotional dialog have focused primarily on whether the system can produce utterances with suitable emotions; the content relevance and quality of the system's responses have largely been ignored. Unlike previous studies, we present a deep learning framework to explore the influence of emotions in dialogs. Our framework comprises two modules, C-LSTM for emotion recognition and biLSTM-C for response generation, and integrates both emotional and rational information to produce emotionally and semantically correct responses. This modular design has several advantages: the emotion-recognition and response-generation modules can be constructed with any effective method; personalized dialog can be achieved by adjusting the emotion–response coupling mechanism to adapt to users' conversational styles; and the results are transparent and interpretable to users. Following the presented approach, we first assess the training performance of the emotion-recognition and response-generation models. With the learned models, we then configure a series of experiments, with evaluation strategies designed from different perspectives, to investigate at length the effect of using emotion as a driving force for generating machine responses. The experiments and results highlight the importance and influence of emotions in dialogs.
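The two-module coupling described above can be illustrated structurally. The sketch below is a minimal, hypothetical stand-in: it replaces the learned C-LSTM and biLSTM-C models with trivial rule-based functions (the keyword lexicon and response openers are invented for illustration), and shows only the pipeline shape in which the recognized emotion conditions response generation and either module can be swapped out.

```python
from typing import Callable, Tuple

# Hypothetical stand-ins for the two learned modules. In the framework these
# would be a C-LSTM emotion classifier and a biLSTM-C response generator;
# here they are trivial rules so the pipeline structure is runnable.

def recognize_emotion(utterance: str) -> str:
    """Stand-in for the emotion-recognition module (keyword lookup here)."""
    lexicon = {"great": "happy", "sorry": "sad", "hate": "angry"}
    for word, emotion in lexicon.items():
        if word in utterance.lower():
            return emotion
    return "neutral"

def generate_response(utterance: str, emotion: str) -> str:
    """Stand-in for the response generator, conditioned on the emotion label."""
    openers = {
        "happy": "Glad to hear that!",
        "sad": "I'm sorry to hear that.",
        "angry": "That sounds frustrating.",
        "neutral": "I see.",
    }
    return f"{openers[emotion]} Tell me more about: {utterance}"

def dialog_turn(
    utterance: str,
    recognizer: Callable[[str], str] = recognize_emotion,
    generator: Callable[[str, str], str] = generate_response,
) -> Tuple[str, str]:
    """Couple the two modules: recognize the user's emotion, then use it to
    drive response generation. Because `recognizer` and `generator` are
    injected, either module can be replaced by any effective method, and the
    coupling can be adjusted to a user's conversational style."""
    emotion = recognizer(utterance)
    return emotion, generator(utterance, emotion)
```

A turn then runs as `dialog_turn("I feel great today")`, which first labels the utterance `"happy"` and then produces a response conditioned on that label; the returned emotion label also keeps the system's decision transparent to the user.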