Abstract

We present an extension of text embedding architectures for grayscale medical image classification. We introduce a mechanism that combines n-gram features with an efficient pixel-flattening technique to preserve spatial information during feature-representation generation. Our approach flattens all pixels in a grayscale medical image using a combination of column-wise, row-wise, diagonal-wise, and anti-diagonal-wise traversal orders, so that spatial dependencies are captured effectively in the resulting feature representations. To evaluate the method, we benchmarked it on five grayscale medical image datasets of varying sizes and complexities. Under 10-fold cross-validation, our approach achieved test accuracies of 99.92% on the Medical MNIST dataset, 90.06% on the Chest X-ray Pneumonia dataset, 96.94% on the Curated Covid CT dataset, 79.11% on the MIAS dataset, and 93.17% on the Ultrasound dataset. The framework and reproducible code are available on GitHub at https://github.com/xizhou/pixel_embedding.
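The four traversal orders named in the abstract can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the function name `flatten_orders` and the dictionary keys are assumptions made here for clarity.

```python
import numpy as np

def flatten_orders(img: np.ndarray) -> dict:
    """Flatten a 2D grayscale image in four traversal orders.

    Illustrative sketch of the row-wise, column-wise, diagonal-wise,
    and anti-diagonal-wise flattening described in the abstract.
    """
    h, w = img.shape
    # Diagonal offsets run from -(h-1) (bottom-left corner)
    # to w-1 (top-right corner), covering every pixel exactly once.
    offsets = range(-(h - 1), w)
    diag = np.concatenate([np.diagonal(img, k) for k in offsets])
    # Anti-diagonals are the diagonals of the left-right flipped image.
    anti = np.concatenate([np.diagonal(np.fliplr(img), k) for k in offsets])
    return {
        "row": img.reshape(-1),             # row-major (C order)
        "col": img.reshape(-1, order="F"),  # column-major (Fortran order)
        "diag": diag,
        "anti_diag": anti,
    }

# Example on a 2x2 image:
out = flatten_orders(np.array([[1, 2],
                               [3, 4]]))
# out["row"]  -> [1, 2, 3, 4]
# out["col"]  -> [1, 3, 2, 4]
```

Each order yields a one-dimensional pixel sequence of the same length, over which n-gram features can then be extracted as with text tokens.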
