High throughput approaches to designer products—myth or reality
- Research Article
- 10.1002/biot.201200057
- Oct 1, 2012
- Biotechnology Journal
BiotecVisions 2012, October
- Research Article
1
- 10.1016/j.ejps.2025.107113
- Jun 1, 2025
- European journal of pharmaceutical sciences : official journal of the European Federation for Pharmaceutical Sciences
Antibody oxidation and impact of formulation: A high-throughput screening approach.
- Research Article
5
- 10.3389/feduc.2021.711512
- Aug 9, 2021
- Frontiers in Education
As educators and researchers, we often enjoy enlivening classroom discussions by including examples of cutting-edge high-throughput (HT) technologies that propelled scientific discovery and created repositories of new information. We also call for the use of evidence-based teaching practices to engage students in ways that promote equity and learning. The complex datasets produced by HT approaches can open the doors to discovery of novel genes, drugs, and regulatory networks, so students need experience with the effective design, implementation, and analysis of HT research. Nevertheless, we miss opportunities to contextualize, define, and explain the potential and limitations of HT methods. One evidence-based approach is to engage students in realistic HT case studies. HT cases immerse students in messy data, asking them to critically consider data analysis, experimental design, ethical implications, and HT technologies. The NSF HITS (High-throughput Discovery Science and Inquiry-based Case Studies for Today's Students) Research Coordination Network in Undergraduate Biology Education seeks to improve student quantitative skills and participation in HT discovery. Researchers and instructors in the network learn about case pedagogy, HT technologies, publicly available datasets, and computational tools. Leveraging this training and interdisciplinary teamwork, HITS participants then create and implement HT cases. Our initial case collection has been used in >15 different courses at a variety of institutions, engaging >600 students in HT discovery. We share here our rationale for engaging students in HT science, our HT cases, and our network model to encourage other life science educators to join us in further developing and integrating complex HT datasets into curricula.
- Research Article
- 10.1177/2211068213481652
- May 15, 2013
- SLAS Technology
Automation Highlights from the Literature
- Book Chapter
- 10.1016/b978-0-443-15250-4.00003-4
- Jan 1, 2023
- All About Bioinformatics
Chapter 7 - High throughput technology
- Research Article
3
- 10.1002/wcm.2265
- Oct 11, 2012
- Wireless Communications and Mobile Computing
To remain compatible with legacy 802.11, the 802.11n standard defines two major medium access control (MAC) behaviors: high throughput (HT) and non-high throughput (non-HT). In this paper, we analyze and compare the energy efficiencies of these MAC behaviors in 802.11n on the basis of the Bianchi model and our previous work. Our study aims to help mobile stations decide whether to enable the HT mode of 802.11n based on energy efficiency. The results show that, owing to the HT mode's high power consumption and large physical-layer overheads, multiple-input multiple-output transmission is not suitable for power-limited devices carrying WWW traffic. However, for large file transfers over the File Transfer Protocol, the energy efficiency of the HT MAC can be very high because of the large aggregated frame size. This is especially true when the number of active stations is large, since techniques applied in the HT MAC such as Aggregate MAC Protocol Data Unit (A-MPDU) and Block-ACK reduce idle listening time. When large files need to be uploaded, these characteristics of the HT mode outweigh its larger physical-layer overheads compared with the non-HT mode. Copyright © 2012 John Wiley & Sons, Ltd.
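The trade-off this abstract describes (amortizing a fixed per-transmission physical-layer overhead over an aggregated payload) can be illustrated with a back-of-the-envelope calculation. The sketch below is not the paper's Bianchi-model analysis; every timing and power figure in it is an assumed placeholder.

```python
# Back-of-the-envelope comparison of energy per delivered payload bit for
# non-HT (one MPDU per PHY preamble + ACK) versus HT with A-MPDU aggregation
# (many MPDUs per preamble + one Block-ACK). Not the paper's Bianchi-model
# analysis; all timing and power figures are assumed placeholders.

PHY_OVERHEAD_US = 40.0   # preamble + PLCP header airtime, microseconds (assumed)
ACK_US = 44.0            # ACK / Block-ACK airtime, microseconds (assumed)
TX_POWER_W = 1.2         # radio power while transmitting, watts (assumed)
RATE_MBPS = 300.0        # PHY data rate (assumed); 1 bit takes 1/RATE microseconds

def energy_per_bit(payload_bytes: int, frames_aggregated: int) -> float:
    """Joules per payload bit for one transmission burst."""
    payload_bits = payload_bytes * 8 * frames_aggregated
    data_us = payload_bits / RATE_MBPS          # bits / Mbps = microseconds
    airtime_us = PHY_OVERHEAD_US + data_us + ACK_US
    return TX_POWER_W * airtime_us * 1e-6 / payload_bits

# Small WWW-style transfer, no aggregation: fixed overhead dominates.
print(f"non-HT, 1 x 1500 B : {energy_per_bit(1500, 1):.2e} J/bit")
# Large FTP-style burst, 32-frame A-MPDU: overhead is amortized.
print(f"HT, 32 x 1500 B    : {energy_per_bit(1500, 32):.2e} J/bit")
```

Under these assumed numbers the aggregated burst costs roughly a third of the energy per bit, which matches the abstract's conclusion that aggregation pays off for large transfers.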
- Book Chapter
- 10.1016/b978-0-323-67320-4.00080-8
- Jun 8, 2021
- Henry's Clinical Diagnosis and Management by Laboratory Methods
80 - High-Throughput Genomic and Proteomic Technologies in the Postgenomic Era
- Front Matter
- 10.3389/fped.2013.00036
- Nov 20, 2013
- Frontiers in Pediatrics
Specialty Grand Challenge: Genetics of Common and Rare Diseases (Volume 1, 2013)
- Research Article
3
- 10.3389/fdata.2021.725095
- Sep 27, 2021
- Frontiers in Big Data
Background: Accuracy and reproducibility are vital in science and present a significant challenge in the emerging discipline of data science, especially when the data are scientifically complex and massive in size. Further complicating matters, in genomic science high-throughput sequencing technologies generate considerable amounts of data that must be stored, manipulated, and analyzed using a plethora of software tools; researchers are rarely able to reproduce published genomic studies. Results: Presented is a novel approach that facilitates accuracy and reproducibility for large genomic research datasets. All needed data are loaded into a portable local database, which serves as an interface for well-known software frameworks, including Python-based Jupyter Notebooks and RStudio projects with R Markdown. All software is encapsulated in Docker containers and managed by Git, simplifying software configuration management. Conclusion: Accuracy and reproducibility in science are of paramount importance. For the biomedical sciences, advances in high-throughput technologies, molecular biology, and quantitative methods are providing unprecedented insights into disease mechanisms. With these insights comes the associated challenge of scientific data that are complex and massive in size, making collaboration, verification, validation, and reproducibility of findings difficult. To address these challenges, the NGS post-pipeline accuracy and reproducibility system (NPARS) was developed. NPARS is a robust software infrastructure and methodology that can encapsulate data, code, and reporting for large genomic studies. This paper demonstrates the successful use of NPARS on large and complex genomic datasets across different computational platforms.
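The abstract describes NPARS only at the architecture level, so the following is a hedged sketch of the general pattern it names (a portable local database as the single data interface for downstream notebooks), not NPARS code; the table schema, file name, and example rows are invented.

```python
# Hedged sketch of the "portable local database" pattern: every analysis
# (Jupyter, R Markdown, ...) reads from the same frozen SQLite file, so
# results can be re-derived byte-for-byte on another machine. Not NPARS
# code; the schema and rows below are made-up examples.
import sqlite3

DB_PATH = "study.sqlite"  # hypothetical file shipped with the project

def init_db(path: str = DB_PATH) -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS variant_calls (
               sample_id        TEXT NOT NULL,
               gene             TEXT NOT NULL,
               vaf              REAL NOT NULL,   -- variant allele fraction
               pipeline_version TEXT NOT NULL
           )"""
    )
    return conn

def load_calls(conn: sqlite3.Connection, rows: list[tuple]) -> None:
    conn.executemany("INSERT INTO variant_calls VALUES (?,?,?,?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = init_db()
    load_calls(conn, [("S1", "TP53", 0.42, "v1.3.0"),
                      ("S2", "KRAS", 0.18, "v1.3.0")])
    for row in conn.execute("SELECT * FROM variant_calls ORDER BY sample_id"):
        print(row)
```

Recording the pipeline version alongside each result row is one simple way to make a stored value traceable to the exact Docker-encapsulated software that produced it.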
- Research Article
88
- 10.1074/mcp.r800014-mcp200
- Jan 1, 2009
- Molecular & Cellular Proteomics
The recent explosion of high-throughput experimental technologies for characterizing protein interactions has generated large amounts of data describing interactions between thousands of proteins, producing genome-scale views of protein assemblies. The systems-level views afforded by these data hold great promise of leading to new knowledge but also involve many challenges. Deriving meaningful biological conclusions from these views crucially depends on our understanding of the approximations and biases that enter into deriving and interpreting the data. The challenges and rewards of interaction proteomics are reviewed here using as an example the latest comprehensive high-throughput analyses of protein interactions in yeast.
- Research Article
- 10.1149/ma2020-01151020mtgabs
- May 1, 2020
- Electrochemical Society Meeting Abstracts
In the Materials Genome Initiative (MGI) and Materials Informatics (MI) sessions at recent international and domestic conferences, the majority of research presentations concern materials prediction using a combination of computational chemistry and machine learning. Work incorporating high-throughput synthesis and evaluation experiments, by contrast, is only occasionally seen, and the number of samples such experiments can handle per hour is admittedly small compared with purely data-driven approaches. However, if the series of experimental processes linking synthesis, evaluation, and analysis is made user-friendly and high-throughput, multi-condition, multi-component data can be generated efficiently in a unified experimental environment (e.g., common starting materials). Even when a dataset is created by extracting text from papers, the resulting data shortage can then be supplemented with experimental data. Unlike thin films and polymers, powder synthesis is a heterogeneous reaction that requires a wet process for high-throughput experiments. We have developed a powder synthesis apparatus equipped with a micropump for solution dispensing, as well as a high-throughput exploration system for powder and thin-film libraries based on electrostatic spray deposition, built by combining multiple syringe pumps with a high-voltage power supply. Using these systems, we have been exploring multi-component cathode materials for lithium-ion batteries and thermoelectric materials; recently, high-throughput experiments under high pressure (200 MPa, 500 °C) have also become possible. If property evaluation and data analysis remain as slow as before, making only the synthesis high-throughput will weaken the appeal of MI, so jigs and software must continue to be developed for efficient evaluation and analysis of the huge libraries obtained in high-throughput experiments. In this presentation, we introduce the exploration of A-site and B-site substituents of perovskite-type CaMnO3, a promising n-type thermoelectric material. A preliminary search confirmed that Ca1-xBixMn1-yNiyO3 improves the thermoelectric power through element substitution at each site. The Bi substitution level was then narrowed to within 10% by combining high-throughput synthesis with high-throughput Seebeck coefficient measurement. Synchrotron XRD and XAFS measurements were used to investigate the correlation between physical properties and crystal structure; by developing special tools, we succeeded in collecting data without filling capillaries or pressing pellets. We also developed software that automates Rietveld analysis, although basic knowledge is still needed to set up an initial structural model. As a result, datasets that visualize crystallographic and physical-property data together are now easier to generate. In the Bi-substituted CaMnO3 powders, conductivity and power factor increased with carrier concentration for Bi contents of 8% or less. At 8% Bi and above, the power factor reaches its limiting value, accompanied by a large change in the MnO6 octahedral in-plane bond distances Mn-O1(1) and Mn-O1(2), that is, a significant increase in octahedral distortion. High-throughput experiments that also collect crystallographic information through such technologies are expected to be an effective tool for future machine learning.
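A workflow like this ultimately reduces to pairing each library member's composition with its measured property and its refined structural parameters in one machine-readable table. The sketch below illustrates that bookkeeping only; it is not the authors' software, refine_lattice() is a stand-in for a real Rietveld engine, and all values are dummies.

```python
# Hedged illustration of batch bookkeeping for a high-throughput library
# (not the authors' software): pair composition, Seebeck coefficient, and
# refined structural parameters into one table ready for plotting or ML.
import csv

def refine_lattice(xrd_file: str) -> dict:
    """Placeholder for an automated Rietveld refinement of one pattern."""
    # A real implementation would run a refinement against an initial
    # structural model; the returned values here are dummies.
    return {"a_angstrom": 5.28, "mn_o1_angstrom": 1.90}

library = [  # hypothetical members: (Bi fraction x, XRD file, Seebeck uV/K)
    (0.04, "xrd_x004.xy", -180.0),
    (0.08, "xrd_x008.xy", -150.0),
]

with open("camno3_library.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["x_Bi", "seebeck_uV_per_K", "a_angstrom", "mn_o1_angstrom"])
    for x_bi, xrd, seebeck in library:
        fit = refine_lattice(xrd)
        writer.writerow([x_bi, seebeck, fit["a_angstrom"], fit["mn_o1_angstrom"]])
```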
- Research Article
94
- 10.2174/138620709787581701
- Mar 1, 2009
- Combinatorial Chemistry & High Throughput Screening
Current advances in new technologies, combining robotic automated assays with highly selective and sensitive LC-MS, enable high-speed screening of lead series libraries in many in vitro assays. In this review, we summarize state-of-the-art high-throughput assays for screening key physicochemical properties such as solubility, lipophilicity, pKa, drug-plasma protein binding, and brain tissue binding, as well as in vitro ADME profiling. We discuss two primary approaches for high-throughput solubility screening: an automated 96-well plate assay integrated with LC-MS and a rapid multi-wavelength UV plate reader. We address the advantages of newly developed miniaturized techniques for high-throughput pKa screening by capillary electrophoresis combined with mass spectrometry (CE-MS) with an automated data analysis workflow. Several new lipophilicity approaches other than octanol-water partitioning are critically reviewed, including a rapid liquid chromatographic retention-based approach, immobilized artificial membrane (IAM) and liposome partitioning, and microemulsion electrokinetic chromatography (MEEKC) as a potential method for accurate screening of LogP. We highlight sample pooling (cassette dosing, all-in-one, cocktail) as an efficient approach for high-throughput screening of physicochemical properties and in vitro ADME profiling, with emphasis on the benefit of on-line quality control. The cassette dosing approach has been widely adopted in drug discovery for rapid screening of in vivo pharmacokinetic parameters, significantly increasing capacity and dramatically reducing animal usage.
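The UV plate-reader approach mentioned above reduces, in its simplest form, to a Beer-Lambert calibration applied per well. The sketch below is a hedged illustration of that idea only, not a vendor protocol; the extinction coefficients, path length, compound IDs, and cutoff are all invented.

```python
# Hedged sketch of a UV plate-reader solubility screen: convert blank-
# corrected well absorbances to concentrations via Beer-Lambert (A = e*c*l),
# then flag compounds below a screening cutoff. All numbers are invented.
PATH_LENGTH_CM = 0.29           # effective path length in a filled well (assumed)
SOLUBILITY_CUTOFF_UM = 50.0     # screening threshold, micromolar (assumed)

calibration = {                  # molar extinction coefficients, L/(mol*cm)
    "CPD-001": 12000.0,          # (dummy values for hypothetical compounds)
    "CPD-002": 8500.0,
}
absorbance = {"CPD-001": 0.21, "CPD-002": 0.04}   # blank-corrected readings

for cpd, a in absorbance.items():
    eps = calibration[cpd]
    conc_um = a / (eps * PATH_LENGTH_CM) * 1e6    # mol/L -> umol/L
    verdict = "soluble" if conc_um >= SOLUBILITY_CUTOFF_UM else "low-solubility"
    print(f"{cpd}: {conc_um:.1f} uM -> {verdict}")
```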
- Preprint Article
- 10.7490/f1000research.1113884.1
- Apr 6, 2017
- F1000Research
RegulonDB contains manually curated information on the different components of transcriptional regulation in the E. coli K-12 genome. Currently, with the development of high-throughput (HT) technologies, huge amounts of information related to the regulation of genetic information are being generated. The management and integration of this kind of information are difficult due to the lack of well-defined data management processes. This has motivated us to implement tools that allow the extraction, handling, storage, and curation of these data to improve the overall understanding of transcriptional gene regulation.
- Conference Article
22
- 10.1142/9789812836939_0042
- Nov 1, 2008
High-throughput (HTP) technologies offer the capability to evaluate the genome, proteome, and metabolome of an organism at a global scale. This opens up new opportunities to define complex signatures of disease that involve signals from multiple types of biomolecules. However, integrating these data types is difficult due to the heterogeneity of the data. We present a Bayesian approach to integration that uses posterior probabilities to assign class memberships to samples using individual and multiple data sources; these probabilities are based on lower-level likelihood functions derived from standard statistical learning algorithms. We demonstrate this approach on microbial infections of mice, where the bronchial alveolar lavage fluid was analyzed by three HTP technologies, two proteomic and one metabolomic. We demonstrate that integration of the three datasets improves classification accuracy to approximately 89% from the best individual dataset at approximately 83%. In addition, we present a new visualization tool called Visual Integration for Bayesian Evaluation (VIBE) that allows the user to observe classification accuracies at the class level and evaluate classification accuracies on any subset of available data types based on the posterior probability models defined for the individual and integrated data.
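The core of the integration step described here, combining per-source likelihoods into a single posterior class membership, can be sketched compactly. The following is a hedged illustration assuming the data sources are conditionally independent given the class (a naive-Bayes-style fusion), not the authors' exact model or the VIBE tool; the likelihood values and class names are invented.

```python
# Hedged sketch of posterior fusion across data sources: assuming sources
# are conditionally independent given the class, per-source likelihoods
# multiply, and the result is normalized across classes. The likelihood
# values below are dummies that would come from base classifiers fit to
# each data type (e.g., two proteomic and one metabolomic).
from math import prod

def fused_posterior(likelihoods_per_source: list[dict[str, float]],
                    priors: dict[str, float]) -> dict[str, float]:
    """Combine P(x_s | class) across sources s into P(class | x_1..x_S)."""
    unnorm = {c: priors[c] * prod(src[c] for src in likelihoods_per_source)
              for c in priors}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# Three sources' class-conditional likelihoods for one mouse sample:
sources = [
    {"infected": 0.70, "control": 0.30},
    {"infected": 0.55, "control": 0.45},
    {"infected": 0.80, "control": 0.20},
]
print(fused_posterior(sources, {"infected": 0.5, "control": 0.5}))
# -> the "infected" posterior (~0.92) exceeds any single source's likelihood,
#    mirroring the abstract's gain from integrating datasets.
```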
- Research Article
5
- 10.3390/fermentation10010033
- Dec 30, 2023
- Fermentation
Genetic engineering and directed evolution are effective methods for addressing the low yields and limited industrialization of microbial target products. The current research focus is on how to efficiently and rapidly screen beneficial mutants from constructed large-scale mutation libraries. Traditional screening methods such as plate screening and well-plate screening are severely limited in their development and application by low efficiency and high costs. In the past decade, microfluidic technology has developed rapidly and become an important high-throughput screening technology thanks to its speed, low cost, high automation, and high screening throughput. Droplet-based microfluidic high-throughput screening has been widely used in fields such as strain/enzyme activity screening, pathogen detection, single-cell analysis, drug discovery, and chemical synthesis, and has been applied in industries including materials, food, chemicals, textiles, and biomedicine. In the field of enzyme research in particular, droplet-based microfluidic high-throughput screening has shown excellent performance in discovering enzymes with new functions as well as improved catalytic efficiency, stability, or acid-base tolerance. Currently, droplet-based microfluidic high-throughput screening technology has achieved the high-throughput screening of enzymes such as glycosidases, lipases, peroxidases, proteases, amylases, oxidases, and transaminases, as well as the high-throughput detection of products such as riboflavin, coumarin, 3-dehydroquinate, lactic acid, and ethanol. This article reviews the application of droplet-based microfluidics in high-throughput screening, with a focus on screening strategies based on UV, visible, and fluorescence spectroscopy, including labeled optical signal detection, as well as label-free electrochemical detection, mass spectrometry, Raman spectroscopy, and nuclear magnetic resonance. Furthermore, the research progress and development trends of droplet-based microfluidic technology in enzyme modification and strain screening are introduced.
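The fluorescence-based screening strategies this review covers all hinge on the same gating decision per droplet. The sketch below is a hedged, software-only illustration of that decision rule (real sorters make it on dedicated hardware in real time); the baseline, gate multiplier, and signal values are invented.

```python
# Hedged sketch of a fluorescence-gated droplet sort decision: droplets
# whose signal exceeds a gate, set relative to the parent-strain baseline,
# are routed to the "keep" channel. All parameters below are assumed.
BASELINE_RFU = 120.0          # parent-strain mean fluorescence (assumed)
GATE = 3.0 * BASELINE_RFU     # keep droplets >= 3x baseline (assumed rule)

droplet_signals = [95.0, 410.0, 130.0, 702.0, 88.0]   # dummy readings (RFU)

kept = [i for i, rfu in enumerate(droplet_signals) if rfu >= GATE]
print(f"gate = {GATE:.0f} RFU; kept droplet indices: {kept}")
```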