Abstract

The US Department of Veterans Affairs has been acquiring store-and-forward digital retinal fundus images for remote diabetic retinopathy surveillance since 2007. There are more than 900 retinal cameras at 756 acquisition sites, and the images are read manually at 134 remote reading sites. A total of 2.1 million studies have been performed in the teleretinal imaging program, and the human workload for reading images is growing rapidly. An automated computer algorithm that detects multiple eye diseases would help standardize interpretations and improve the efficiency of the image readers. Deep learning algorithms for the detection of diabetic retinopathy in retinal fundus photographs have been developed, and additional image data are needed to validate this work. To further this research, the Atlanta VA Health Care System (VAHCS) has extracted 112,000 DICOM diabetic retinopathy surveillance images (13,000 studies) that can subsequently be used for the validation of automated algorithms. An extensive set of associated clinical information was added to the DICOM header of each exported image to facilitate correlation of the image with the patient's medical condition. The clinical information was serialized as a JSON object and stored in a single Unlimited Text (VR = UT) DICOM data element. This paper describes the methodology used for this project and the results of applying it.
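The abstract does not specify which data element or field schema the project used, but the JSON-in-UT technique it describes can be illustrated with pydicom. The following is a minimal sketch under stated assumptions: the private tag group 0x0011, the private creator string EXAMPLE_CLINICAL_JSON, and the fields in clinical_info are hypothetical placeholders, not the actual tag or schema used by the VA project.

```python
# Minimal sketch of the described approach: serialize clinical metadata as
# JSON and store it in a single DICOM data element with VR = UT (Unlimited
# Text). The private block (group 0x0011) and the private creator string
# are illustrative assumptions, not the element used by the VA project.
import json

from pydicom import dcmread

# Hypothetical clinical context for one exported image; the actual project
# attached a much richer set of fields drawn from the patient record.
clinical_info = {
    "diabetes_type": "Type 2",
    "hba1c_percent": 7.4,
    "years_since_diagnosis": 9,
}

ds = dcmread("fundus_image.dcm")

# Reserve a private block and write the JSON string as a UT element.
block = ds.private_block(0x0011, "EXAMPLE_CLINICAL_JSON", create=True)
block.add_new(0x01, "UT", json.dumps(clinical_info))
ds.save_as("fundus_image_with_metadata.dcm")

# A downstream reader can recover the structured data from the same element.
ds2 = dcmread("fundus_image_with_metadata.dcm")
recovered = json.loads(
    ds2.private_block(0x0011, "EXAMPLE_CLINICAL_JSON")[0x01].value
)
```

One appeal of packing all fields into a single UT element, as the paper describes, is that the export stays schema-agnostic: new clinical fields can be added to the JSON object without allocating additional DICOM tags.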
