Fine-grained error span detection is a sub-task of quality estimation that aims to identify the spans of errors in translated sentences and assess their severity. Prior work on quality estimation has focused predominantly on evaluating translations at the sentence and word levels, an approach that does not capture the severity of errors within specific segments of a translation. To the best of our knowledge, this is the first study dedicated to improving models for fine-grained error span detection in machine translation. We introduce a framework that sequentially performs sentence-level error detection, word-level error span extraction, and severity assessment. We present a detailed analysis of each proposed component and substantiate the effectiveness of our system on two language pairs: English-to-German and Chinese-to-English. Our results suggest that finer task granularity improves performance and that prompt-based fine-tuning yields the strongest results on the classification tasks. Furthermore, we demonstrate that using a large language model to edit the fine-tuned model's output is an effective strategy for achieving robust quality estimation performance.
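For concreteness, the sequential framework can be pictured as the pipeline sketched below. This is a minimal illustrative sketch only: the function and class names (sentence_has_error, extract_error_spans, classify_severity, ErrorSpan) are hypothetical placeholders and do not correspond to the actual implementation described in the paper.

```python
# Illustrative sketch of a sequential error span detection pipeline:
# sentence-level detection -> word-level span extraction -> severity assessment.
# All stage implementations below are placeholders, not the authors' models.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ErrorSpan:
    start: int     # character offset of the error span in the translation
    end: int
    severity: str  # e.g. "minor" or "major"


def assess_translation(source: str, translation: str) -> List[ErrorSpan]:
    """Run the three stages in sequence and return annotated error spans."""
    # Stage 1: sentence-level error detection.
    # If the translation is judged error-free, skip the later stages.
    if not sentence_has_error(source, translation):
        return []

    # Stage 2: word-level error span extraction.
    spans = extract_error_spans(source, translation)

    # Stage 3: severity assessment for each extracted span.
    return [
        ErrorSpan(start, end, classify_severity(source, translation, (start, end)))
        for start, end in spans
    ]


# --- hypothetical stage implementations (placeholders) ---------------------
def sentence_has_error(source: str, translation: str) -> bool:
    raise NotImplementedError  # sentence-level error classifier

def extract_error_spans(source: str, translation: str) -> List[Tuple[int, int]]:
    raise NotImplementedError  # word-level span tagger -> [(start, end), ...]

def classify_severity(source: str, translation: str, span: Tuple[int, int]) -> str:
    raise NotImplementedError  # span-level severity classifier
```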