Abstract

Spaceflight-associated neuro-ocular syndrome (SANS) is a collection of neuro-ophthalmic findings observed in astronauts as a result of prolonged microgravity exposure in space. Because resources aboard long-duration spaceflight missions are limited, early diagnosis and prognosis of SANS are difficult. Moreover, the retinal imaging techniques currently available on the International Space Station (ISS), such as optical coherence tomography (OCT), ultrasound imaging, and fundus photography, require an expert to distinguish SANS from similar ophthalmic diseases. With the advent of deep learning, the diagnosis of diseases such as diabetic retinopathy from structural retinal images is increasingly being automated. In this study, we propose a lightweight convolutional neural network incorporating an EfficientNet encoder for detecting SANS from OCT images. We used 6303 OCT B-scan images for training/validation (80%/20% split) and 945 for testing. Our model achieved 84.2% accuracy on the test set, with 85.6% specificity and 82.8% sensitivity. It also outperforms two state-of-the-art pre-trained architectures, ResNet50-v2 and MobileNet-v2, by 21.4% and 13.1%, respectively. Additionally, we use Grad-CAM to visualize activation maps of intermediate layers to assess the interpretability of our model's predictions. The proposed architecture enables fast and efficient prediction of SANS-like conditions on future long-duration spaceflight missions in which computational and clinical resources are limited.
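To illustrate the kind of model described above, the following is a minimal sketch of a binary OCT classifier built around a pre-trained EfficientNet encoder, assuming a TensorFlow/Keras implementation. The specific EfficientNet variant (B0 here), input resolution, classification head, and optimizer settings are not stated in the abstract and are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch: lightweight SANS classifier with an EfficientNet encoder.
# Framework, variant (EfficientNetB0), input size, and hyperparameters are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB0


def build_sans_classifier(input_shape=(224, 224, 3)):
    # Pre-trained EfficientNet backbone used as a feature extractor (ImageNet weights).
    encoder = EfficientNetB0(include_top=False, weights="imagenet",
                             input_shape=input_shape)

    inputs = layers.Input(shape=input_shape)
    x = encoder(inputs)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    # Single sigmoid unit for binary SANS vs. non-SANS prediction.
    outputs = layers.Dense(1, activation="sigmoid")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model


if __name__ == "__main__":
    model = build_sans_classifier()
    model.summary()
```

In a setup like this, the spatial feature maps from the final EfficientNet convolutional block are also what a Grad-CAM implementation would weight by gradient importance to produce the activation maps mentioned in the abstract.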
