A Brain-Computer Music Interface (BCMI) system may be designed to harness electroencephalography (EEG) signals for control over musical outputs in the context of emotionally expressive performance. To develop a real-time BCMI system, accurate and computationally efficient emotional biomarkers must first be identified. In the current study, we evaluated the ability of various EEG features to discriminate between emotions expressed during music performance, with the aim of developing such a system. EEG data were recorded while subjects performed simple piano music under contrasting emotional cues and rated their success in communicating the intended emotion. Power spectra and connectivity features (Magnitude-Squared Coherence (MSC) and Granger Causality (GC)) were extracted from the signals. Two feature-processing approaches were compared to assess the contribution of neutral baselines to detection accuracy: (1) normalizing the features by the baselines, and (2) using the raw, non-normalized features. Finally, a Support Vector Machine (SVM) classifier was used to evaluate and compare the capability of the various features for emotion detection. The best detection accuracies were obtained with the non-normalized MSC-based features: 85.57 ± 2.34%, 84.93 ± 1.67%, and 87.16 ± 0.55% for arousal, valence, and emotional condition, respectively, while the power-based features yielded the lowest accuracies. Both connectivity features achieve acceptable accuracy while requiring short processing times, and are therefore promising candidates for the development of a real-time BCMI system.
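A minimal sketch of the MSC-feature and SVM pipeline described above is given below. This is not the authors' code: it assumes epoched EEG of shape (n_trials, n_channels, n_samples), a sampling rate `fs`, binary per-trial labels (e.g., low vs. high arousal), and illustrative frequency-band edges and kernel choice; synthetic data stands in for real recordings.

```python
# Illustrative sketch (assumptions, not the paper's implementation):
# mean magnitude-squared coherence per channel pair and frequency band,
# classified with an SVM under cross-validation.
import numpy as np
from itertools import combinations
from scipy.signal import coherence
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def msc_features(epochs, fs, bands=((4, 8), (8, 13), (13, 30))):
    """Mean MSC per channel pair and band; band edges are assumed, not from the paper."""
    features = []
    for trial in epochs:
        trial_feats = []
        for i, j in combinations(range(trial.shape[0]), 2):
            # Welch-based magnitude-squared coherence between channels i and j
            f, cxy = coherence(trial[i], trial[j], fs=fs, nperseg=fs)
            for lo, hi in bands:
                mask = (f >= lo) & (f < hi)
                trial_feats.append(cxy[mask].mean())
        features.append(trial_feats)
    return np.asarray(features)

# Synthetic stand-in data: 40 trials, 8 channels, 4 s at 256 Hz.
rng = np.random.default_rng(0)
fs = 256
epochs = rng.standard_normal((40, 8, 4 * fs))
labels = rng.integers(0, 2, size=40)  # e.g., low vs. high arousal

X = msc_features(epochs, fs)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy: %.2f" % cross_val_score(clf, X, labels, cv=5).mean())
```

The per-band averaging keeps the feature vector small (pairs × bands), which is one reason coherence-style features can remain cheap enough for the real-time use case the abstract targets.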