Abstract
The bioCADDIE dataset retrieval challenge brought together different approaches to the retrieval of biomedical datasets relevant to a user’s query, expressed as a text description of a needed dataset. As part of this challenge, we carried out a series of experiments applying data-driven, machine learning-based approaches and evaluating both probabilistic and machine learning-driven information retrieval techniques for biomedical dataset retrieval. Our experiments with probabilistic information retrieval methods, such as query term weight optimization, automatic query expansion and simulated user relevance feedback, demonstrate that automatically boosting the weights of important keywords in a verbose query is more effective than the other methods. We also show that, although there is a rich space of potential representations and features available in this domain, machine learning-based re-ranking models are not able to improve on probabilistic information retrieval techniques with the currently available training data. The models and algorithms presented in this paper can serve as a viable implementation of a search engine providing access to biomedical datasets. Retrieval performance is expected to improve further with additional training data, created by expert annotation or gathered from usage logs, clicks and other signals during natural operation of the system. Database URL: https://github.com/emory-irlab/biocaddie
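To make the keyword-boosting finding concrete, the following minimal Python sketch shows one common way to weight the important terms of a verbose query: terms with high inverse document frequency receive a Lucene-style boost, while the remaining non-stopword terms keep unit weight. The collection statistics, stopword list, min_idf threshold and boosting scheme below are illustrative assumptions, not the tuned values or the exact method used in the paper.

import math
import re

# Toy collection statistics; in practice these come from the dataset index.
NUM_DOCS = 10_000
DOC_FREQ = {"microrna": 12, "expression": 480, "breast": 95,
            "cancer": 310, "cell": 900, "lines": 1200}
STOPWORDS = {"a", "an", "the", "in", "of", "for", "to", "and", "that",
             "with", "find", "related", "data", "dataset", "datasets"}

def idf(term: str) -> float:
    """Smoothed inverse document frequency of a term in the collection."""
    df = DOC_FREQ.get(term, 0)
    return math.log((NUM_DOCS - df + 0.5) / (df + 0.5) + 1.0)

def boosted_query(verbose_query: str, min_idf: float = 2.5) -> str:
    """Detect important keywords in a verbose query and boost them.

    Terms whose IDF exceeds `min_idf` are treated as key concepts and
    receive an IDF-proportional boost (Lucene-style `term^weight` syntax);
    the remaining non-stopword terms keep unit weight.
    """
    terms = [t for t in re.findall(r"[a-z0-9]+", verbose_query.lower())
             if t not in STOPWORDS]
    clauses = []
    for term in dict.fromkeys(terms):        # de-duplicate, keep order
        weight = idf(term)
        clauses.append(f"{term}^{weight:.2f}" if weight >= min_idf else term)
    return " ".join(clauses)

if __name__ == "__main__":
    query = "Find data related to microRNA expression in breast cancer cell lines"
    print(boosted_query(query))

On the sample query, rare domain terms such as "microrna" and "breast" receive large boosts while frequent terms such as "cell" and "lines" are left unweighted, mimicking the effect of automatic keyword detection in verbose queries.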
Highlights
With rapid technological developments such as DNA sequencing and brain imaging, ever-increasing volumes of massive datasets have been produced
The results indicate that, given the verbose queries and in the presence of an effective keyword detection method, we are unable to gain a significant benefit from the blind relevance feedback (BRF) expansion method (a minimal sketch of BRF appears after these highlights)
The results show that applying learning to rank (LTR) in this scarce-training-data environment causes overfitting, and the final model yields a 5.1% degradation in NDCG compared to the IROpt system (an illustrative LTR sketch also appears after these highlights). The difference between the IROpt results in Tables 4 and 5 arises because, as mentioned in the section Experimental setup, for the LTR experiments we fixed all the information retrieval parameters of Tables 2 and 3 and assumed a single, universally tuned parameter setting for the domain
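The blind relevance feedback method referenced above expands the original query with terms drawn from the top-ranked documents of an initial retrieval pass. The sketch below is a generic illustration of that idea, assuming a hypothetical search function and simple term-frequency-based term selection; it is not the specific BRF configuration evaluated in the paper.

from collections import Counter
from typing import Callable, List

def blind_relevance_feedback(query_terms: List[str],
                             search: Callable[[List[str]], List[List[str]]],
                             fb_docs: int = 10,
                             fb_terms: int = 5) -> List[str]:
    """One round of blind (pseudo) relevance feedback.

    `search` is any retrieval function that returns ranked documents as
    token lists.  The top `fb_docs` documents of the initial retrieval are
    assumed relevant; their `fb_terms` most frequent terms that are not
    already in the query are appended as expansion terms, and the expanded
    query is returned for a second retrieval pass.
    """
    top_docs = search(query_terms)[:fb_docs]
    counts = Counter(term for doc in top_docs for term in doc
                     if term not in query_terms)
    expansion = [term for term, _ in counts.most_common(fb_terms)]
    return query_terms + expansion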
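The LTR highlight refers to training a supervised re-ranking model on judged (query, dataset) pairs. The sketch below uses a simple pointwise logistic-regression re-ranker on synthetic features (an assumption for illustration; the paper's actual features and model differ) to show how a large gap between training and cross-validated performance signals the overfitting that scarce training data can cause.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per judged (query, dataset) pair, with
# features such as BM25 score, title overlap and metadata-field matches.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))               # scarce training data: 60 judged pairs
y = (rng.random(60) > 0.7).astype(int)     # binary relevance labels

reranker = LogisticRegression(max_iter=1000)
reranker.fit(X, y)

# With so few judged pairs, comparing training accuracy against
# cross-validated accuracy is a quick way to spot the overfitting that can
# make an LTR re-ranker underperform a tuned probabilistic baseline.
train_acc = reranker.score(X, y)
cv_acc = cross_val_score(reranker, X, y, cv=5).mean()
print(f"train accuracy = {train_acc:.2f}, cross-validated accuracy = {cv_acc:.2f}")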
Summary
Background and motivation
With rapid technological developments such as DNA sequencing and brain imaging, ever-increasing volumes of massive datasets have been produced. The NCBI Gene Expression Omnibus has to date (November 2017) archived >91 000 experimental studies, which comprise >2 million samples. Such massive amounts of openly accessible data offer unprecedented opportunities to advance our understanding of biology, human health and diseases. In a perspective article describing NIH’s vision of Big Data to Knowledge (BD2K) [1], Margolis et al. pointed out that ‘A fundamental question for BD2K is how to enable the identification, access and citation of (i.e. credit for) biomedical data.’ In Eric Green’s presentation on ‘NIH and Biomedical Big Data’, the first of the ‘major problems to solve’ for big data is ‘Locating the data’. This is the challenge on which we focus in this paper: developing and evaluating techniques for finding relevant biomedical datasets.