Emotion AI is increasingly used to automatically evaluate asynchronous hiring interviews. Although touted for improving hiring fit and reducing bias, how job-seekers perceive emotion AI-enabled asynchronous interviews remains unclear. This gap is striking given job-seekers' marginalized position in hiring and the particular vulnerability of job-seekers with marginalized identities to this technology's potential harms. Addressing this gap, we conducted exploratory interviews with 14 U.S.-based participants with direct, recent experience with emotion AI-enabled asynchronous interviews. While participants acknowledged the asynchronous, virtual modality's potential benefits to employers and job-seekers alike, they perceived harms to job-seekers associated with automatic emotion inferences, which our analysis maps to distributive, procedural, and interactional injustices. We find that social identity can inform job-seekers' perceptions of emotion AI, extending prior understandings of the factors that contribute to job-seekers' perceptions of AI in hiring broadly. Moreover, our results suggest that emotion AI use may reconfigure demands for emotional labor in hiring, and that deploying this technology in its current state may unjustly risk harmful outcomes for job-seekers, or at the very least perceptions thereof, which shape behaviors and attitudes. Accordingly, we recommend against the present adoption of emotion AI in hiring and identify opportunities to design future asynchronous hiring interview platforms that are meaningfully transparent, contestable, and privacy-preserving. We emphasize that these efforts may alleviate only a subset of the perceived harms we surface; some injustices may be resolved only by removing emotion AI-enabled features.