Seroprevalence of celiac disease in subjects with dyspeptic symptoms: a study in a Mexican population

Introduction and aims: Celiac disease (CD) is an autoimmune enteropathy that develops in genetically susceptible individuals. The typical gastrointestinal manifestation is diarrhea, but dyspeptic symptoms, such as epigastric pain, nausea, or early satiety, can sometimes appear. Previous studies have reported that the prevalence of CD in patients with dyspepsia can be as high as 7%. The aim of the present study was to evaluate CD seroprevalence in subjects with dyspeptic symptoms and in a control group in a Mexican population.
Material and methods: A case-control study was conducted on blood donors who answered the PAGI-SYM questionnaire for dyspepsia and in whom IgA antibodies to tissue transglutaminase 2 (IgA anti-tTG2) and IgG antibodies to deamidated gliadin peptide (IgG anti-DGP) were determined. CD seroprevalence in subjects with dyspeptic symptoms was compared with that in asymptomatic subjects.
Results: A total of 427 subjects (76.3% men) with a mean age of 34 years (range: 18-65 years) were included. Of those participants, 87 (20.3%) had symptoms of dyspepsia (group A) and 340 (79.6%) were asymptomatic (group B). Antibodies were positive in one (1.15%) of the group A subjects (1/87, 95% CI 0.2-6%) and in 4 (1.18%) of the group B subjects (4/340, 95% CI 0.4-2.9%; p = 0.59).
Conclusions: CD seroprevalence in the study population with dyspeptic symptoms (1%) did not differ from that of the control population. Thus, CD screening in Mexican patients with dyspepsia is not justified.
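
The confidence intervals reported for the two small proportions (1/87 and 4/340) are consistent with a Wilson score interval, a standard choice for binomial proportions with few events. The abstract does not state which method the authors used, so this is a sketch of one plausible calculation, not their exact procedure:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Group A (dyspeptic): 1 seropositive of 87 -> roughly 0.2%-6%
lo_a, hi_a = wilson_ci(1, 87)
# Group B (asymptomatic controls): 4 seropositive of 340 -> roughly 0.4%-3%
lo_b, hi_b = wilson_ci(4, 340)
```

Because the intervals overlap almost entirely, the non-significant p-value (0.59) is unsurprising.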

Transferring CNN With Adaptive Learning for Remote Sensing Scene Classification

Accurate classification of remote sensing (RS) images is a perennial topic of interest in the RS community. Recently, transfer learning, especially fine-tuning of pre-trained convolutional neural networks (CNNs), has been proposed as a feasible strategy for RS scene classification. However, because the target domain (i.e., the RS images) and the source domain (e.g., ImageNet) are quite different, simply using a model pre-trained on ImageNet presents some difficulties. The RS images and the pre-trained models need to be properly adjusted to build a better classification system. In this study, an adaptive learning strategy for transferring a CNN-based model is proposed. First, an adaptive transform adjusts the original size of the RS image to a fixed size tailored to the input of the subsequent pre-trained model. Then, an adaptive transferring model is proposed to automatically learn which knowledge from the pre-trained model should be transferred to the RS scene classification model. Finally, in combination with a label smoothing approach, an adaptive label method is presented that generates soft labels from the statistics of the classification model's predictions for each category, which is beneficial for learning the relationships between the target and non-target categories of scenes. In general, the proposed methods adaptively manage the input, model, and label simultaneously, leading to better performance for RS scene classification. The proposed methods are tested on three widely used data sets, and the results show that they provide competitive classification accuracy compared to state-of-the-art methods.
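
The "adaptive label" idea described above (soft labels built from per-category prediction statistics) can be illustrated with a toy sketch. The blending below, where each one-hot target is mixed with the mean prediction vector of its class so that frequently-confused non-target classes receive more mass, is a hedged reading of the idea rather than the paper's exact formulation:

```python
def adaptive_soft_labels(label, class_mean_preds, eps=0.1):
    """Blend a one-hot target with the mean prediction vector of its class.

    label           -- integer class index of the sample
    class_mean_preds-- per-class mean predicted probability vectors
                       (statistics gathered from the model's own outputs)
    eps             -- how much mass to move off the hard target
    """
    n = len(class_mean_preds)
    one_hot = [1.0 if i == label else 0.0 for i in range(n)]
    return [(1 - eps) * o + eps * m
            for o, m in zip(one_hot, class_mean_preds[label])]

# Hypothetical statistics for 3 scene classes; class 0 is most often
# confused with class 1, so class 1 gets the larger share of soft mass.
mean_preds = [[0.80, 0.15, 0.05],
              [0.10, 0.85, 0.05],
              [0.05, 0.10, 0.85]]
soft = adaptive_soft_labels(0, mean_preds, eps=0.1)
```

Unlike uniform label smoothing, which spreads `eps` equally, this keeps the target-vs-non-target relationships the abstract emphasizes.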

Semisupervised Hyperspectral Image Classification Using a Probabilistic Pseudo-Label Generation Framework

Deep neural networks (DNNs) show impressive performance for hyperspectral image (HSI) classification when abundant labeled samples are available. The problem is that HSI sample annotation is extremely costly and the budget for this task is usually limited. To reduce the reliance on labeled samples, deep semi-supervised learning (SSL), which jointly learns from labeled and unlabeled samples, has been introduced in the literature. However, learning robust and discriminative features from unlabeled data is challenging because of various noise effects and the ambiguity of unlabeled samples. As a result, recent advances have been confined mainly to the pre-training or warm-up stage. In this paper, we propose a deep probabilistic framework that generates reliable pseudo-labels so that discriminative features can be learned explicitly from unlabeled samples. The pseudo-labels generated by our framework can be fed to various DNNs to improve their generalization capacity. Our framework takes only 10 labeled samples per class to represent the label set as an uncertainty-aware distribution in the latent space. Pseudo-labels are then generated for those unlabeled samples whose feature values match the distribution with high probability. Through extensive experiments on four publicly available datasets, we show that our framework generates reliable pseudo-labels that significantly improve the generalization capacity of several state-of-the-art DNNs. In addition, we introduce a new DNN for HSI classification that demonstrates outstanding accuracy in comparison with its rivals.
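
The core mechanism, assigning a pseudo-label only when an unlabeled sample matches a class distribution with high probability, can be sketched in one dimension. The Gaussian class model, uniform prior, and 0.95 threshold below are illustrative assumptions; the paper's framework operates on learned latent distributions, not raw scalar features:

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log density of a 1-D Gaussian."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def pseudo_label(x, class_stats, threshold=0.95):
    """Return (label, confidence) if the posterior over classes is
    confident enough, else (None, confidence).

    class_stats maps class name -> (mean, std), e.g. estimated from the
    handful of labeled samples per class.  Uniform prior is assumed.
    """
    logps = {c: gaussian_logpdf(x, mu, sd) for c, (mu, sd) in class_stats.items()}
    m = max(logps.values())                       # subtract max for stability
    probs = {c: math.exp(lp - m) for c, lp in logps.items()}
    z = sum(probs.values())
    best = max(probs, key=probs.get)
    conf = probs[best] / z
    return (best, conf) if conf >= threshold else (None, conf)

stats = {"water": (0.10, 0.05), "vegetation": (0.60, 0.05)}
label_near, conf_near = pseudo_label(0.08, stats)   # clearly "water"
label_far, conf_far = pseudo_label(0.35, stats)     # ambiguous -> rejected
```

Rejecting the ambiguous sample is exactly what keeps the generated pseudo-labels reliable enough to train downstream DNNs on.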

A Data Pipeline for Extraction and Processing of Electrocardiogram Recordings

Algorithmic exploitation of medical data for diagnostic purposes has become state of the art in the modern medical world. Applying artificial intelligence algorithms is gaining importance, and electrocardiogram (ECG) recordings have successfully been used as input for deep learning models that produce viable diagnoses. Such algorithms are noninvasive, relatively low-cost, and promise high diagnostic leverage. However, supervised learning algorithms such as deep learning models require a considerable amount of high-quality training data labelled with correct diagnoses. In this paper, we present a pipeline that processes raw electrocardiogram recordings, preparing them for use in training and validation of neural network models. Although the electrocardiogram is widely used, appropriately labelled training data is rare and is provided in different formats and from technically different sources. Therefore, our end-to-end pipeline not only processes data from modern digital ECG devices, e.g. in XML file format, but can also extract all necessary information from PDF files (both scanned hard copies and digitally generated PDFs). We present a use case in which data from XML and PDF sources is read, cleaned, and combined into a unified dataset used by a model predicting myocardial scar. Our pipeline will become a cornerstone of our environment for building AI-based diagnostic instruments.
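
The XML branch of such a pipeline amounts to parsing vendor exports into a uniform record structure. Real device schemas are vendor-specific and not described in the abstract, so the minimal `<ecg>`/`<lead>` layout below is a hypothetical stand-in used only to show the shape of the extraction step:

```python
import xml.etree.ElementTree as ET

def parse_ecg_xml(xml_text):
    """Parse a (hypothetical) minimal ECG XML export into a unified record:
    {"patient_id": ..., "leads": {lead_name: [samples...]}}.
    Real pipelines would map each vendor schema onto this same structure."""
    root = ET.fromstring(xml_text)
    record = {"patient_id": root.attrib.get("patient_id"), "leads": {}}
    for lead in root.findall("lead"):
        # Whitespace-separated sample values, one <lead> element per channel.
        samples = [float(v) for v in lead.text.split()]
        record["leads"][lead.attrib["name"]] = samples
    return record

example = """<ecg patient_id="p001">
  <lead name="I">0.0 0.1 0.3 0.1</lead>
  <lead name="II">0.0 0.2 0.5 0.2</lead>
</ecg>"""
rec = parse_ecg_xml(example)
```

A PDF branch would feed the same record structure, which is what makes the combined XML-plus-PDF dataset possible.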
