Abstract

The main objective of High‐Throughput Experimentation (HTE) in catalysis and materials discovery, as well as in other areas, is to increase the total experiment count for a given time interval, either by speeding up individual experiments or by running experiments in parallel. Typically, such screening experiments involve the preparation of sample libraries consisting of a large number of diverse materials, together with extensive variation of conditions during performance tests within a wide parameter space. An important difference from molecular high‐throughput screening is that materials science does not always deal with uniquely defined entities: every parameter during preparation and testing may be crucial for the performance of the material. As a consequence, all experimental parameters should be controlled, or at least recorded, so that important correlations can be identified. The prime goal of the HTE cycle is to speed up the whole discovery and optimization process while minimizing the costs and human effort needed in the experimental workflow. In short, an increase in productivity will always result in faster knowledge gain and therefore be a competitive advantage. This goal can only be achieved by utilizing software tools at every single stage of the HTE process. In addition, the software platform has to provide an interface for data‐mining/feature‐extraction tools to gain the insights required for discovering new useful materials. The software requirements in the HTE area can be classified according to the following aspects: “support and tracking of material preparation (workflow management)”, “planning and setting up experiments/performance tests”, “process automation and control of hardware devices”, “data logging and post‐processing”, “data storage, management and analysis”. Herein we present a modular approach to an HTE software platform. 
Instead of a monolithic master system, small tools with a limited set of tasks are interconnected using standardized, self‐descriptive data structures. This approach is highly flexible with respect to the rapidly changing needs of the chemists: Since the modules are isolated and inter‐module communication is standardized, new components (e.g., new devices) can be integrated into the process without any side effects.
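The abstract does not specify how the standardized, self‐descriptive data structures are realized. As a minimal illustrative sketch (not the authors' actual implementation), the idea can be modeled as JSON messages that carry a schema describing every payload field, with consumers dispatching on a message kind so that new device modules plug in without side effects on existing ones. All names here (`make_message`, `read_message`, `reactor.temperature`, etc.) are hypothetical:

```python
import json

def make_message(source: str, kind: str, payload: dict, schema: dict) -> str:
    """Serialize a self-descriptive message: the payload travels together
    with a schema naming the unit and type of every field, so a receiving
    module needs no hard-coded knowledge of the sender."""
    return json.dumps({"source": source, "kind": kind,
                       "schema": schema, "payload": payload})

def read_message(raw: str) -> dict:
    """Parse a message and verify that every payload field is described
    by the schema; undescribed fields are rejected rather than silently used."""
    msg = json.loads(raw)
    undescribed = set(msg["payload"]) - set(msg["schema"])
    if undescribed:
        raise ValueError(f"fields missing from schema: {undescribed}")
    return msg

# Downstream modules register handlers per message kind, so integrating a
# new device means adding a handler, not modifying existing consumers.
HANDLERS = {}

def register(kind):
    def wrap(fn):
        HANDLERS[kind] = fn
        return fn
    return wrap

@register("reactor.temperature")
def log_temperature(msg):
    # The unit comes from the message itself, not from shared configuration.
    unit = msg["schema"]["value"]["unit"]
    return f"{msg['source']}: {msg['payload']['value']} {unit}"
```

For example, a hypothetical temperature-logging module could consume `make_message("reactor-07", "reactor.temperature", {"value": 450.0}, {"value": {"unit": "K", "type": "float"}})` by looking up `HANDLERS[msg["kind"]]`. The point of the pattern is the one the abstract makes: because each module only depends on the message contract, components can be added or replaced in isolation.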
