Abstract Background/Objectives: The determination of HER2-positivity by IHC or FISH is critical for identifying patients most likely to benefit from anti-HER2 therapy. However, these methods do not always provide an accurate indication of HER2 overactivity, which can occur without gene amplification or overexpression of the HER2 protein. Our study objective was to determine if a deep learning convolutional neural network (CNN) could be trained, using IHC HER2 staining, to learn a morphological signature for HER2 positivity in H&E stained slides. Methods: For training, we used H&E images (whole slides scanned at 40x) from 10 HER2+ patients (IHC 3+) and 15 HER2− patients (IHC 0 or 1+) along with their adjacent HER2 IHC images. We first annotated non-cancer regions in H&E images. We then identified tumor regions in HER2 IHC that were positive (intense/complete circumferential stain) or negative (no or weak stain). We ignored any regions with equivocal IHC response as well as whole slides of IHC 2+ patients. For rigorous testing, slides from a separate set of 7 HER2+ and 19 HER2− patients were used. Digitized slides and expert consensus IHC HER2 status for each patient were provided as part of an international HER2 IHC scoring competition organized by the University of Warwick. The computer vision pipeline comprised four stages. First, we color-normalized the H&E images to reduce unwanted color variation between slides. Second, a pre-trained neural network (NN1) marked all nuclear centroids to make it easier for subsequent stages to focus on nuclear morphology and inter-nuclear spatial arrangements. Third, a neural network (NN2) trained on sub-images of size 100x100 centered at nuclear centroids classified nuclei as non-cancer vs. cancer. Fourth, a final neural network (NN3) sub-classified cancer nuclei into HER2+ or HER2−. In H&E images of held-out test patients, the percent of cancer nuclei scored as HER2+ was analyzed.
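The slide-level readout described above (percent of cancer nuclei scored HER2+) can be sketched as follows. This is an illustrative stand-in, not the study's code: the boolean inputs represent assumed per-nucleus outputs of the cancer/non-cancer network (NN2) and the HER2+/HER2− network (NN3).

```python
def percent_her2_positive(cancer_flags, her2_flags):
    """Percent of cancer nuclei scored HER2+ for one slide.

    cancer_flags[i]: NN2 called nucleus i cancer (bool, assumed output).
    her2_flags[i]:   NN3 called nucleus i HER2+ (bool, assumed output;
                     only meaningful where cancer_flags[i] is True).
    """
    # Keep the HER2 call only for nuclei NN2 labeled as cancer.
    her2_calls_on_cancer = [h for c, h in zip(cancer_flags, her2_flags) if c]
    if not her2_calls_on_cancer:
        return 0.0  # no cancer nuclei detected on this slide
    return 100.0 * sum(her2_calls_on_cancer) / len(her2_calls_on_cancer)

# Toy per-nucleus predictions (illustrative values only):
cancer = [True, True, True, False, True]
her2 = [True, False, True, False, False]
print(percent_her2_positive(cancer, her2))  # 50.0
```

A fixed threshold on this per-slide percentage would then yield the binary HER2+/− patient call evaluated in the Results.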
Results: NN2 had a classification accuracy exceeding 97% on a validation set of 25,000 nuclei set aside from training patients, while NN3 had a validation accuracy of 88% on 7,500 validation nuclei. Among the held-out test patients, an average of 49.8% of cancer nuclei were scored HER2+ in the 7 HER2+ patients, while the corresponding proportion in the 19 HER2− patients was only 24.7% (p < 0.01). The AUC for binary classification of test patients as HER2+ vs. HER2−, based on the percent of HER2+ nuclei, was 0.815. Upon closer inspection, the H&E morphology of some of the misclassified HER2− patients showed visual similarity to the correctly classified HER2− patients, and vice versa. Conclusions: Even when morphological patterns associated with cancer subtypes are too subtle for humans to reliably detect, H&E stained slides analyzed by CNNs may be able to geographically map a HER2 signature. It is unclear whether misclassification with respect to IHC status reflects morphological confusion or discordance between genomic subtype and IHC response. By training multiple neural networks to detect morphological signatures corresponding to different molecular subtypes of breast cancer, we may be able to detect and study intra-tumor heterogeneity in a cost-effective manner and/or as a complement to multi-region sequencing. Citation Format: Dhage S, Anand D, Kumar N, Gann PH, Sethi A. Computer vision detects morphological correlates of HER2 positive breast cancer in H&E stained histological images [abstract]. In: Proceedings of the 2018 San Antonio Breast Cancer Symposium; 2018 Dec 4-8; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2019;79(4 Suppl):Abstract nr P4-02-11.
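The patient-level AUC reported in the Results can be computed directly from per-patient HER2+ percentages via the rank-based (Mann-Whitney) formulation, without choosing a threshold. A minimal sketch, with synthetic per-patient scores (not the study's data):

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC = P(score of a random positive patient > score of a random
    negative patient), counting ties as 0.5 (Mann-Whitney U / (n+ * n-))."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Synthetic "% cancer nuclei scored HER2+" per patient, for illustration only:
pos = [62.0, 55.1, 48.9, 51.3, 40.2, 58.7, 32.4]            # 7 HER2+ patients
neg = [20.1, 35.6, 18.4, 44.0, 25.3, 30.2, 12.8, 27.5, 22.9,
       15.0, 38.1, 29.4, 21.7, 33.3, 19.9, 26.6, 24.1, 41.5, 17.2]  # 19 HER2-
print(round(auc_from_scores(pos, neg), 3))
```

An AUC of 1.0 would mean every HER2+ patient had a higher HER2+ nucleus percentage than every HER2− patient; the reported 0.815 indicates substantial but imperfect separation.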