Abstract

Background
Left ventricular longitudinal strain has been reported to deliver reproducibility, sensitivity and prognostic value over and above ejection fraction. However, it currently relies on uninspectable proprietary algorithms and is not in widespread clinical use. Uptake may be improved by increasing user trust through greater transparency.

Purpose
We therefore developed a machine-learning-based method, trained and validated with accredited experts from our AI Echocardiography Collaborative. We make the dataset, code, and trained network freely available under an open-source license.

Methods
AI enables strain to be calculated without relying on speckle tracking, by directly locating key points and borders across frames. Strain can then be calculated as the fractional shortening of the left ventricular perimeter. We first curated a dataset of 7523 images, including 2587 apical four-chamber views, each labelled by a single expert from our collaboration of 17 hospitals using our online platform (Figure 1). Using this dataset and a semi-supervised approach, we trained a 3D convolutional neural network to identify the annulus, apex, and endocardial border throughout the cardiac cycle. Separately, we constructed an external validation dataset of 100 apical four-chamber video loops. The systolic and diastolic frames were identified, and each image was labelled separately by 11 experts. From these labels we derived the expert consensus strain for each of the 100 video loops. These experts also ordered all 100 echocardiograms by their visual grading of left ventricular longitudinal function. Finally, a single expert calculated strain using two different proprietary commercial packages (A and B).

Results
Consensus strain measurements (obtained by averaging the individual assessments of the 11 experts) across the 100 cases ranged from −4% to −27%, with strong correlations with the individual experts and machine methods (Figure 2). Using each case's consensus across experts as the gold standard, the median error from consensus was 3.1% for individual experts, 3.4% for Proprietary A, 2.6% for Proprietary B, and 2.6% for our AI. Using the visual grading of longitudinal function as the reference, the 11 individual experts and 4 machine methods each showed significant correlation: coefficients ranged from 0.55 to 0.69 for the experts, and were 0.68 for Proprietary A, 0.69 for Proprietary B, and 0.69 for our AI.

Conclusions
Our open-source, vendor-independent, AI-based strain measure automatically produces values that agree with expert consensus as strongly as the individual experts do. It also agrees with the subjective visual ranking of longitudinal function. Our open-source AI strain performs at least as well as closed-source speckle-tracking approaches and may enable increased clinical and research use of longitudinal strain.

Funding Acknowledgement
Type of funding sources: Public grant(s) – National budget only. Main funding source(s): NIHR Imperial BRC ITMAT. Dr Howard was additionally funded by Wellcome.

Figure 1. Collaborative online platform
Figure 2. Correlations between strain methods
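
As an illustration of the perimeter-based definition in the Methods, the sketch below computes longitudinal strain as the fractional change in endocardial perimeter length between end-diastole and end-systole. It is a minimal sketch assuming the border is available as an ordered array of points from one annulus point, around the apex, to the other; the function names are ours, not part of the released code.

```python
import numpy as np

def perimeter_length(contour: np.ndarray) -> float:
    """Sum of Euclidean distances between consecutive contour points.

    `contour` is an (N, 2) array of endocardial border points running
    from one mitral annulus point, around the apex, to the other.
    """
    segments = np.diff(contour, axis=0)
    return float(np.sum(np.linalg.norm(segments, axis=1)))

def longitudinal_strain(contour_ed: np.ndarray, contour_es: np.ndarray) -> float:
    """Strain (%) as fractional shortening of the left ventricular
    perimeter between end-diastole (ED) and end-systole (ES).

    A negative value indicates systolic shortening.
    """
    l_ed = perimeter_length(contour_ed)
    l_es = perimeter_length(contour_es)
    return 100.0 * (l_es - l_ed) / l_ed
```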
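
The abstract does not specify the network architecture, so the following is only a hedged PyTorch sketch of a 3D convolutional network that maps an echocardiographic video clip to per-frame heatmaps for the annulus points, apex, and border; all layer sizes, channel counts, and class names are illustrative assumptions rather than the published model.

```python
import torch
import torch.nn as nn

class KeypointNet3D(nn.Module):
    """Illustrative 3D CNN: maps a clip (B, 1, T, H, W) to per-frame
    heatmaps (B, K, T, H, W) for K landmark/border channels.
    Layer sizes are assumptions for illustration only.
    """
    def __init__(self, n_channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv3d(32, n_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x))

# Example: two mitral annulus points and the apex as three heatmap channels.
model = KeypointNet3D(n_channels=3)
clip = torch.randn(1, 1, 32, 112, 112)   # (batch, channel, frames, H, W)
heatmaps = model(clip)                    # (1, 3, 32, 112, 112)
```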
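
For the Results, the sketch below shows one way the consensus strain, the median error from consensus, and a rank correlation against a visual ordering could be computed. The arrays are synthetic stand-ins for the study data, and the use of Spearman correlation is our assumption, since the abstract does not name the correlation statistic.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data for illustration: strain (%) from 11 experts for 100 cases,
# plus one machine method's measurements for the same cases.
rng = np.random.default_rng(0)
expert_strain = -rng.uniform(4, 27, size=(100, 11))
method_strain = expert_strain.mean(axis=1) + rng.normal(0, 2, size=100)

# Consensus strain: mean of the 11 experts' measurements for each case.
consensus = expert_strain.mean(axis=1)

# Median absolute error from consensus for the method.
median_error = np.median(np.abs(method_strain - consensus))

# Rank correlation against a visual ordering of longitudinal function
# (here the consensus ranking stands in for the experts' visual grading).
visual_rank = np.argsort(np.argsort(consensus))
rho, p = spearmanr(method_strain, visual_rank)
print(f"median error {median_error:.1f}%, rank correlation {rho:.2f}")
```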