Abstract

Introduction: CTC enumeration in blood samples with CELLSEARCH® is a prognostic biomarker in metastatic breast, prostate, and colorectal cancer (full intended use at documents.cellsearchctc.com). Currently, CTC identification is performed by visual assessment, which is time-consuming and potentially affected by subjective interpretation. Recently, we developed a Deep Learning (DL) algorithm for automated CTC identification in CELLSEARCH images (CellFind®, Menarini Silicon Biosystems, research use only). This study aims to assess the CTC identification performance of CellFind on 2 different datasets: the first for comparison with human reviewers, and the second for validation with a much larger number of images.

Methods: CellFind consists of an image segmentation network followed by a classification network, which was trained using 13,067 CTC and 52,890 non-CTC images from 215 breast, 123 prostate, and 180 colorectal cancer samples. The performance of CellFind was tested on 2 separate datasets. 1) The first dataset consisted of CELLSEARCH gallery images from 26 breast cancer, 68 prostate cancer, and 40 benign samples. 8 human reviewers, qualified for CTC image analysis, performed blind labeling. The Ground Truth (GT) was generated by majority voting among the classifications of the 3 most experienced reviewers (1,621 CTCs out of 17,080 images). Accuracy and F1 metrics were used to rank the performance of DL and of the remaining 5 reviewers against the GT. 2) The second dataset consisted of CELLSEARCH gallery images from 63 breast, 66 prostate, and 32 colorectal cancer samples. 3 experienced reviewers created the GT classification by majority voting, identifying 20,052 CTCs out of 117,447 images. Finally, each patient was assigned to the favorable or unfavorable group by applying the validated cutoff on CTC enumeration per sample (CTC ≥ 5 for breast and prostate cancer, CTC ≥ 3 for colorectal cancer).

Results: 1) On 17,080 gallery images from 134 samples, CellFind reached top human-level performance, as ranked both by accuracy (97.8% for DL vs. 96.8-98.0% for reviewers) and by F1 (88.6% for DL vs. 83.3-89.9% for reviewers). On the 94 cancer samples, DL accuracy on favorable/unfavorable prognosis was 98.9% vs. 95.8-97.9% for reviewers. 2) On the larger dataset with 117,447 gallery images from 161 cancer samples, CellFind obtained accuracy = 96.0% and F1 = 87.8% for CTC identification (TP = 17,039, FP = 1,735, FN = 3,013, TN = 95,660). For favorable/unfavorable prognosis, the DL accuracy was 95.4%.

Conclusion: Automated identification and enumeration of CTCs in CELLSEARCH images with CellFind can remove human subjectivity from the review process and maximize standardization among different research centers. CellFind performed better than most operators and reduced the data processing time required of the operator for each blood sample.

Citation Format: Luca Biasiolli, Pietro Ansaloni, Nicolò Gentili, Daniele Giardiello, Francesco Montanari, Ramona Miserendino, Giulio Signorini, Gianni Medoro. Automated identification and enumeration of CELLSEARCH Circulating Tumor Cells (CTC) with a deep learning algorithm [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2024; Part 1 (Regular Abstracts); 2024 Apr 5-10; San Diego, CA. Philadelphia (PA): AACR; Cancer Res 2024;84(6_Suppl):Abstract nr 7492.
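
The Methods section describes CellFind as a segmentation network followed by a classification network, but the abstract does not disclose the actual architecture. The following is only a generic, minimal sketch of a segmentation-then-classification pipeline in PyTorch; the toy modules, the 4-channel input, and all names are illustrative assumptions, not CellFind code.

    import torch
    import torch.nn as nn

    class TinySegmenter(nn.Module):
        """Toy stand-in for the segmentation stage: predicts a per-pixel
        foreground probability map for candidate cell regions."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1), nn.Sigmoid(),
            )
        def forward(self, x):
            return self.net(x)

    class TinyClassifier(nn.Module):
        """Toy stand-in for the classification stage: CTC vs. non-CTC."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, 2),
            )
        def forward(self, x):
            return self.net(x)

    def classify_gallery_image(image, segmenter, classifier, mask_threshold=0.5):
        """Segment candidate cell pixels, then classify the masked image."""
        with torch.no_grad():
            mask = (segmenter(image) > mask_threshold).float()
            logits = classifier(image * mask)   # classify only segmented pixels
            return logits.argmax(dim=1)         # 1 = CTC, 0 = non-CTC (arbitrary)

    # Illustrative 4-channel input (e.g. fluorescence channels), batch of 1
    image = torch.rand(1, 4, 64, 64)
    print(classify_gallery_image(image, TinySegmenter(), TinyClassifier()))
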
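In both datasets the Ground Truth was generated by majority voting among 3 experienced reviewers. A minimal sketch of that aggregation step, assuming each reviewer's labels are stored as a list of 0/1 (non-CTC/CTC) per gallery image; the function name and data layout are hypothetical.

    def majority_vote(labels_per_reviewer):
        """Combine per-image labels (1 = CTC, 0 = non-CTC) from an odd
        number of reviewers into a ground-truth label by majority voting."""
        n_reviewers = len(labels_per_reviewer)
        n_images = len(labels_per_reviewer[0])
        ground_truth = []
        for i in range(n_images):
            votes = sum(labels_per_reviewer[r][i] for r in range(n_reviewers))
            ground_truth.append(1 if 2 * votes > n_reviewers else 0)
        return ground_truth

    # Illustrative example with 3 reviewers and 5 gallery images
    reviewer_labels = [
        [1, 0, 1, 0, 1],  # reviewer A
        [1, 0, 0, 0, 1],  # reviewer B
        [1, 1, 1, 0, 0],  # reviewer C
    ]
    print(majority_vote(reviewer_labels))  # -> [1, 0, 1, 0, 1]
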
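Each patient was assigned to the favorable or unfavorable group using the validated per-sample cutoffs quoted in the Methods (CTC ≥ 5 for breast and prostate cancer, CTC ≥ 3 for colorectal cancer). A minimal sketch of that assignment; the cutoff values come from the abstract, while the function and dictionary names are illustrative.

    CTC_CUTOFF = {"breast": 5, "prostate": 5, "colorectal": 3}

    def prognostic_group(ctc_count, cancer_type):
        """Return 'unfavorable' if the per-sample CTC count reaches the
        validated cutoff for the given cancer type, else 'favorable'."""
        cutoff = CTC_CUTOFF[cancer_type.lower()]
        return "unfavorable" if ctc_count >= cutoff else "favorable"

    print(prognostic_group(7, "breast"))      # unfavorable (>= 5)
    print(prognostic_group(2, "colorectal"))  # favorable   (< 3)
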
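The Results for the second dataset report the full confusion matrix (TP = 17,039, FP = 1,735, FN = 3,013, TN = 95,660). The sketch below only shows how the reported accuracy (96.0%) and F1 (87.8%) follow from those counts using the standard definitions; it is not code from the study.

    tp, fp, fn, tn = 17039, 1735, 3013, 95660

    accuracy = (tp + tn) / (tp + fp + fn + tn)           # 0.9596 -> 96.0%
    precision = tp / (tp + fp)                           # 0.9076
    recall = tp / (tp + fn)                              # 0.8497 (sensitivity)
    f1 = 2 * precision * recall / (precision + recall)   # 0.8777 -> 87.8%

    print(f"accuracy = {accuracy:.1%}, F1 = {f1:.1%}")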