Abstract

High-throughput microarray technologies have long been a source of data for a wide range of biomedical investigations. Over the decades, variants have been developed and the sophistication of measurements has improved, with the resulting data providing both valuable insight and considerable analytical challenge. The cost-effectiveness of microarrays, as well as their wide applicability, made them a first choice for much early genomic research, and efforts to improve accessibility, quality and interpretation have continued unabated. In recent years, however, the emergence of new generations of sequencing methods and, importantly, the reduction of their costs have shifted the preference in much genomic research toward sequence data, which are less ‘noisy’ and, arguably, capture species information more directly and interpretably. Nevertheless, new microarray data are still being generated and, together with their considerable legacy, can offer a complementary perspective on biological systems and disease pathogenesis. The challenge now is to exploit novel methods for enhancing and combining these data with those generated by alternative high-throughput techniques, such as sequencing, to provide added value. The augmentation and integration of microarray data, and the new horizons this opens up, provide the theme for the papers in this Special Issue.

Highlights

  • Much effort in recent years has focused on building tools and adapting statistical analyses to enhance value and facilitate integration of different data types

  • The papers in this Special Issue reflect a number of aspects of this effort, from novel augmentation of microarray data to derivation of a framework and methods for combined analyses of data from different sources

  • The authors evaluate genomic fingerprints for Bacillus anthracis, obtained by virtual hybridization, producing patterns that simulate DNA microarrays, in order to distinguish between highly related bacterial strains (a minimal sketch of this idea follows the list)
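
To make the idea concrete, the following is a minimal Python sketch of a virtual-hybridization fingerprint: probes are matched in silico against a genome sequence, producing a binary present/absent pattern that mimics a microarray readout, and the patterns of two strains are then compared. The probe set, mismatch threshold and function names are illustrative assumptions, not the authors' actual procedure.

    # Hypothetical sketch of virtual hybridization. The probe set, mismatch
    # threshold and helper names are illustrative assumptions, not details
    # taken from the paper discussed above.

    def hybridizes(probe: str, genome: str, max_mismatches: int = 1) -> bool:
        """True if the probe matches any genome window with at most
        max_mismatches substitutions (a crude stand-in for hybridization)."""
        k = len(probe)
        return any(
            sum(a != b for a, b in zip(probe, genome[i:i + k])) <= max_mismatches
            for i in range(len(genome) - k + 1)
        )

    def fingerprint(genome: str, probes: list[str]) -> list[int]:
        """Binary pattern over the probe set, analogous to present/absent
        calls derived from spot intensities on a DNA microarray."""
        return [int(hybridizes(p, genome)) for p in probes]

    def hamming(fp_a: list[int], fp_b: list[int]) -> int:
        """Number of probes whose calls differ between two strains."""
        return sum(a != b for a, b in zip(fp_a, fp_b))

    # Toy usage: two near-identical 'strains' separated by two substitutions
    # inside one probe's target site.
    probes = ["ACGTACGT", "TTGACCAA", "GGCATTCG", "CCATGGAT"]
    strain_a = "AAACGTACGTTTGACCAAGGG"
    strain_b = "AAACGTACGTTTGAGCTAGGG"
    fp_a, fp_b = fingerprint(strain_a, probes), fingerprint(strain_b, probes)
    print(fp_a, fp_b, "differing probes:", hamming(fp_a, fp_b))

In practice, probe sets span the whole genome and hybridization models account for thermodynamics rather than a simple mismatch count, but the pattern-versus-pattern comparison is the essence of strain discrimination by fingerprint.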

Introduction

Microarrays were the first true high-throughput technology for genomics, and the large datasets they generated raised issues of quality and data sharing that motivated development of the MIAME (Minimum Information About a Microarray Experiment) standard in the early 2000s. Despite the availability of public databases, limitations have persisted: differences between platforms, restricted ranges over which concentrations are reliably measured, and genes represented multiple times or omitted altogether, all of which introduce ‘noise’ into data analysis and interpretation.
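
As one concrete example of mitigating such cross-platform differences, the sketch below applies quantile normalization, a standard technique that forces each sample onto a shared intensity distribution before integrated analysis. The data, shapes and names are invented for illustration; this is not claimed to be the method of any paper in the Issue.

    # A minimal sketch of quantile normalization, one common way to reduce
    # the technology- and distribution-level differences described above.
    # The matrices and parameters are illustrative assumptions.
    import numpy as np

    def quantile_normalize(expr: np.ndarray) -> np.ndarray:
        """Force every sample (column) onto a shared intensity distribution.

        expr: genes x samples matrix of expression values. Each column is
        ranked, and each value is replaced by the mean of the values holding
        that rank across all columns.
        """
        order = np.argsort(expr, axis=0)        # per-column sort order
        ranks = np.argsort(order, axis=0)       # rank of each entry
        sorted_vals = np.sort(expr, axis=0)
        rank_means = sorted_vals.mean(axis=1)   # shared reference distribution
        return rank_means[ranks]

    # Toy usage: two 'platforms' measuring the same genes on different scales.
    rng = np.random.default_rng(0)
    platform_a = rng.lognormal(mean=2.0, sigma=1.0, size=(5, 3))
    platform_b = rng.lognormal(mean=5.0, sigma=0.5, size=(5, 3))
    combined = np.hstack([platform_a, platform_b])
    print(quantile_normalize(combined).round(2))

After normalization, every column shares the same set of values, so downstream comparisons reflect rank differences between samples rather than platform-specific intensity scales.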

