SAS in ESA Datalabs: A new platform for XMM-Newton analysis
- Cited by 7 · DOI 10.1051/0004-6361/202245708 · Jun 1, 2023 · Astronomy & Astrophysics
- Cited by 32 · DOI 10.1051/0004-6361/202040272 · Aug 1, 2021 · Astronomy & Astrophysics
- Cited by 2457 · DOI 10.1051/0004-6361:20000066 · Jan 1, 2001 · Astronomy & Astrophysics
- Cited by 18 · DOI 10.1051/0004-6361/202037807 · Sep 1, 2020 · Astronomy & Astrophysics
- Cited by 15 · DOI 10.1051/0004-6361/202141751 · Mar 31, 2022 · Astronomy & Astrophysics
- Cited by 564 · DOI 10.1051/0004-6361:20000044 · Jan 1, 2001 · Astronomy & Astrophysics
- Cited by 122 · DOI 10.1086/507458 · Nov 1, 2006 · The Astrophysical Journal
- Cited by 1958 · DOI 10.1051/0004-6361:20000087 · Jan 1, 2001 · Astronomy & Astrophysics
- Cited by 853 · DOI 10.1051/0004-6361:20000058 · Jan 1, 2001 · Astronomy & Astrophysics
- Cited by 109 · DOI 10.1051/0004-6361:200809956 · Oct 27, 2008 · Astronomy & Astrophysics
- Conference Article · Cited by 9 · DOI 10.1145/2723576.2723580 · Mar 16, 2015
This paper presents DOP8, a data mining iterative cycle that improves on the classical data life cycle. While the latter combines only the data production and data analysis phases, DOP8 also integrates the life cycle of the analysis operators; in this cycle, the data life cycle and the operator life cycle meet at the data analysis step. The paper also presents a reification of DOP8 in a new computing platform, UnderTracks, which provides flexibility in storing and sharing data, operators, and analysis processes. UnderTracks is compared with three types of platform: 'Storage platform', 'Analysis platform', and 'Storage and Analysis platform'. Several real TEL (technology-enhanced learning) analysis scenarios are implemented in the platform, (1) to test UnderTracks' flexibility in storing data and operators and (2) to test its flexibility in designing analysis processes.
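The core DOP8 idea, stored datasets and stored operators meeting at the analysis step, can be pictured with a minimal Python sketch. Everything below (the registry, the dataset, and the operator names) is a hypothetical illustration, not the actual UnderTracks API.

```python
# Minimal sketch of a DOP8-style analysis step, where a stored dataset
# and stored operators meet. All names here are hypothetical illustrations,
# not the actual UnderTracks API.

datasets = {
    "tel_logs": [  # toy event log from a learning environment
        {"user": "u1", "action": "open", "duration": 12},
        {"user": "u1", "action": "solve", "duration": 95},
        {"user": "u2", "action": "solve", "duration": 60},
    ],
}

operators = {}  # shared operator store: name -> callable

def operator(name):
    """Register a reusable analysis operator under a name."""
    def register(fn):
        operators[name] = fn
        return fn
    return register

@operator("filter_solve")
def filter_solve(rows):
    return [r for r in rows if r["action"] == "solve"]

@operator("mean_duration")
def mean_duration(rows):
    return sum(r["duration"] for r in rows) / len(rows)

def analyze(dataset_name, pipeline):
    """The analysis step: apply a stored pipeline of operators to stored data."""
    result = datasets[dataset_name]
    for op_name in pipeline:
        result = operators[op_name](result)
    return result

print(analyze("tel_logs", ["filter_solve", "mean_duration"]))  # 77.5
```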
- Single Report · DOI 10.2172/816448 · Jan 1, 2003
The scientific community has recently experienced an overall effort to reduce the physical size of many experimental components to the nanometer size range. This size range is unique in that its characteristics involve aspects of pure physics, biology, and chemistry. One extensively studied example of a nanometer-sized experimental component, which acts as a junction between these three principal scientific disciplines, is deoxyribonucleic acid (DNA) or ribonucleic acid (RNA). These biopolymers not only contain the biological genetic guide that codes for the production of life-sustaining materials, but are also being probed by physicists as a means to create electrical circuits, and furthermore serve as controllable architectural and sensor motifs in the chemical disciplines. Possibly the most common nanosized components shared among these sciences are nanoparticles composed of a variety of materials. The cross-discipline employment of nanoparticles is evident from the vast amount of literature produced by each of the individual communities within the last decade. Along these cross-discipline lines, this dissertation examines the use of several different types of nanoparticles with a wide array of surface chemistries to understand their adsorption properties and to construct unique miniaturized analytical and immunoassay platforms. The introduction acts as a literature review providing key information regarding the synthesis and surface chemistries of several types of nanoparticles; this material sets the stage for a discussion of assembling ordered arrays of nanoparticles into functional platforms, architectures, and sensors. The introduction also includes a short explanation of the atomic force microscope that is used throughout the thesis to characterize the nanoparticle-based structures. Following the introduction, four research chapters are presented as separate manuscripts. Chapter 1 examines the self-assembly of polymeric nanoparticles exhibiting a variety of surface chemistries and attempts to deconvolute general adsorption rules for their assembly on various substrates. Chapter 2 extends the self-assembly of polymeric nanoparticles through a layer-by-layer deposition concept and photolithography methodologies to create analytical platforms with a vertical height controlled within the nanometer regime. Chapter 3 then employs this integrated concept as a bio-recognition platform and explores extending the method to a high-throughput screening system. Chapter 4 exploits two different types of nanoparticles, silica and gold, as multiplexed, self-assembled immunoassay sensors. This final research chapter is followed by a general summation and future prospectus section that concludes the dissertation.
- Research Article · DOI 10.15407/sociology2023.04.089 · Dec 1, 2023 · Sociology: Theory, Methods, Marketing
The article describes the results of a study of the prevalence of non-standard employment in Ukraine and the relationship between non-standard employment and digitalization. The COVID-19 pandemic and Russia's full-scale invasion of February 24, 2022, have radically affected the labor market and the spread of non-standard employment. A year and a half after the start of the full-scale war, the share of the unemployed has decreased, and some researchers and specialists have already begun to speak of "personnel hunger" in certain areas and industries. However, despite the activation of the labor market, the work of only 30% of Ukrainians can be identified as standard employment; moreover, there is de facto no return to the standard employment format. Another factor contributing to the spread of non-standard forms of employment is digitalization. Digitalization is often used as a general, all-encompassing concept, but based on information about the use of digital devices and technologies, three categories can be distinguished, corresponding to three stages of technological development: computerization, Internetization, and digitalization. Digitalization proper can be defined as the creation of information and analytical platforms that have analytical and predictive functions. The highest level of computerization and Internetization is observed among Ukrainians whose employment can be characterized as non-standard. The highest levels of computerization, Internetization, and digitalization are found among sole proprietors and respondents who can work remotely and have a flexible schedule. While the level of use of digital devices and technologies is fairly high (62% of surveyed Ukrainians use a laptop, tablet, or smartphone at work), it is still lower than the European average. The further spread of Internetization and digitalization (information and analytical solutions and platforms) can become one of the drivers of non-standard employment, which, given its magnitude, should, as A. Kolot notes, be completely and irreversibly transformed into an ordinary, traditional social and labor reality.
- Research Article · Cited by 259 · DOI 10.5858/arpa.2013-0691-ra · Nov 1, 2014 · Archives of Pathology & Laboratory Medicine
Formalin fixation and paraffin embedding is a timeless, cost-efficient, and widely adopted method of preserving human tissue biospecimens that has resulted in a substantial reservoir of formalin-fixed, paraffin-embedded blocks representing both the pathology and the preanalytical handling of the biospecimen. This reservoir of specimens is increasingly being used for DNA, RNA, and proteomic analyses. The objective of this review was to evaluate the impact of preanalytical factors associated with the formalin fixation and paraffin embedding process on downstream morphological and molecular endpoints. We surveyed the existing literature using the National Cancer Institute's Biospecimen Research Database for published reports investigating the potential influence of preanalytical factors associated with the formalin fixation and paraffin embedding process on DNA, RNA, protein, and morphological endpoints. Based on the literature evidence, molecular, proteomic, and morphological endpoints can be altered in formalin-fixed, paraffin-embedded specimens by suboptimal processing conditions. While the direction and magnitude of effects associated with a given preanalytical factor depend on the analyte (DNA, RNA, protein, or morphology) and the analytical platform, acceptable conditions are highlighted, and a summary of conditions that could preclude analysis is provided.
- Research Article · Cited by 1 · DOI 10.1158/1538-7445.am2017-2593 · Jul 1, 2017 · Cancer Research
Introduction: The Children's Brain Tumor Tissue Consortium (CBTTC), an international repository of genomic and phenotypic data, has partnered with Blackfynn, Inc., to create a cloud-based data management platform to facilitate team science across disciplines. Background: The CBTTC, through the CHOP Department of Biomedical and Health Informatics (DBHi), has developed a network of informatics and data applications for researchers across the globe to work together and perform real-time analyses on existing clinical, phenotypic, and genomic data. Historically, rare disease datasets are siloed, locked in proprietary formats, segregated by data types, and hidden from the view of experts in the field. This has been a significant barrier to finding effective therapeutics for children with pediatric brain tumors. Blackfynn was founded by a group of multidisciplinary experts in neuroscience, neurology, medicine, software development, engineering, computer science, and business with the goal of empowering researchers to cure neurologic disease and providing solutions to these challenges. Description of Methods: The CBTTC and Blackfynn teamed up to provide a cloud-based, team-focused data management and analytics platform. The platform provides a commercial-grade, scalable approach to upload, view, and integrate digital pathology images with relevant subject data such as MRIs, pathology reports, and genomic information. Stakeholders can search integrated data without requiring users to change their current workflow or conform to imposed data standards. This platform is a simple, intuitive, end-to-end software platform for teams of scientists and pathologists to review, annotate, and discuss cases, enabling rapid diagnostic consensus, quality control, and empowered discovery. Summary of Unpublished Results: The CBTTC/Blackfynn data platform enabled CBTTC members to engage in a cross-institutional collaboration to reach consensus on digital pathology data in ways that were previously not possible. We demonstrated that this solution removes existing barriers to collaborative efforts and provides a rich analytic and discovery platform bridging imaging with genomics and other data formats. The platform provides a new model for the scientific community to facilitate translation towards improved treatments for children diagnosed with brain tumors. Discussion and Future Direction: This pilot project will be scaled to other CBTTC sites for centralized review of pathology images to enable the research community to collaborate on specific projects. The next phase of platform development will include further integration of CBTTC platforms, fully integrating genomics data and enabling side-by-side viewing and analysis of MRI, pathology, and clinical data to facilitate specific project work around large and complex research data types in a cloud environment. Citation Format: Amanda Christini, Angela J. Waanders, Joost B. Wagenaar, Alex S. Felmeister, Mariarita Santi, Nitin R. Wadhwani, Jennifer L. Mason, Mateusz P. Koptyra, Jena V. Lilly, Jeffrey W. Pennington, Rishi R. Lulla, Adam C. Resnick. Accelerating pediatric brain tumor research through team science solutions [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2017; 2017 Apr 1-5; Washington, DC. Philadelphia (PA): AACR; Cancer Res 2017;77(13 Suppl):Abstract nr 2593. doi:10.1158/1538-7445.AM2017-2593
- Research Article · Cited by 51 · DOI 10.3389/fbioe.2022.832059 · Feb 9, 2022 · Frontiers in Bioengineering and Biotechnology
Biopharmaceuticals are one of the fastest-growing sectors in the biotechnology industry. Within the umbrella of biopharmaceuticals, the biosimilar segment is expanding, with over 200 biosimilars currently approved globally. The key step towards achieving a successful biosimilar approval is to establish analytical and clinical biosimilarity with the innovator. The objective of an analytical biosimilarity study is to demonstrate a highly similar profile with respect to variations in critical quality attributes (CQAs) of the biosimilar product, and these variations must lie within the range set by the innovator. This comprises a detailed comparative structural and functional characterization, using appropriate, validated analytical methods, to fingerprint the molecule; it helps reduce the economic burden of the regulatory requirement for extensive preclinical/clinical similarity data, thus making biotechnological drugs more affordable. In the last decade, biosimilar manufacturing and associated regulations have become more established, leading to numerous approvals. Biosimilarity assessment exercises conducted towards approval are also published more frequently in the public domain. Consequently, some technical advancements in analytical sciences have also percolated to applications in analytical biosimilarity assessment. Keeping this in mind, this review aims to provide a holistic view of progress in biosimilar analysis and approval. We have summarized the major developments in the global regulatory landscape with respect to biosimilar approvals and catalogued publicly available biosimilarity assessment studies for recombinant DNA products. We have also covered advancements since 2015 in analytical methods, orthogonal techniques, and platforms for biosimilar characterization. The review specifically aims to serve as a comprehensive catalog of published biosimilarity assessment studies, with details on the analytical platforms used and the critical quality attributes covered for multiple biotherapeutic products. Through this compilation, the emergent evolution of techniques with respect to each CQA has also been charted and discussed. Lastly, the information resource of published biosimilarity assessment studies created during the literature search is anticipated to serve as a helpful reference for biopharmaceutical scientists and biosimilar developers.
- Preprint Article · DOI 10.5194/egusphere-egu23-10177 · May 15, 2023
NASA's Visualization, Exploration, and Data Analysis (VEDA) project is an open-source science cyberinfrastructure for data processing, visualization, exploration, and geographic information systems (GIS) capabilities (https://www.earthdata.nasa.gov/esds/veda). VEDA is an ambitious platform, made possible in the past year only by building upon existing NASA projects. The extensive technology community at NASA continues to come together to design, build, and use VEDA's interoperable APIs and datasets. This presentation will demo the current capabilities of VEDA and discuss how these capabilities were designed and architected with the central goals of science delivery, reproducible science, and interoperability, to support re-use of data and APIs across NASA's Earth Science ecosystem of tools. The presentation will close with VEDA's future plans. In 2023, VEDA will support NASA's Transform to Open Science (TOPS) program and open-source science initiatives through data, APIs, and analytics platforms. In 2023 and beyond, VEDA will advance the state of the art in cloud-based Earth science and strengthen the ties of technology within NASA. The projects behind VEDA's current features are:
- The Multi-Mission Algorithm and Analysis Platform (https://maap-project.org/, presented at EGU 2019): recognizing the numerous advantages of open, reproducible science, NASA and ESA are working together to create the Joint ESA-NASA MAAP. The MAAP brings together relevant data and algorithms in a common virtual environment to support the global aboveground terrestrial carbon dynamics research community.
- The COVID-19 Earth Observation Dashboard (https://www.earthdata.nasa.gov/covid19/): following the interest in this dashboard, NASA invested in the design and development of a new dashboard infrastructure. This infrastructure is highly configurable to support easily adding new datasets and discoveries. The UI and configuration layers are built upon the VEDA STAC catalog and Cloud-Optimized GeoTIFFs.
- The Earthdata Information Systems (EIS) pilots (https://eis.smce.nasa.gov/): scientists at NASA worked together on open science tools to develop new research projects using Earth observation data across the domains of fire, freshwater, greenhouse gases, and sea level rise.
- ArcGIS Enterprise in the Cloud (gis.earthdata.nasa.gov), which provides GIS capabilities.
These projects have all made VEDA a reality in a year. The scientists from EIS are using the new dashboard infrastructure to tell their stories and the analytics backend from MAAP to scale their science. In 2023, VEDA plans many initiatives to extend its reach within and beyond NASA. There are many advanced technologies at NASA, and we see an opportunity for VEDA to support closing the information gaps across groups. For example, VEDA will support driving standards for using, publishing, and visualizing NASA's Earthdata Zarr archives and will also deliver interoperable APIs for its data stores to support dynamic data visualization and storytelling. VEDA will also extend its reach beyond NASA by providing a JupyterHub for any user to explore the data behind NASA Earth Science, specifically the discoveries presented in the Earthdata Dashboard.
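Since the dashboard's UI and configuration layers sit on a STAC catalog and Cloud-Optimized GeoTIFFs, a natural programmatic entry point is a STAC search. Below is a minimal sketch using the pystac-client library; the endpoint URL and collection name are placeholders, not VEDA's actual identifiers.

```python
# Minimal sketch of querying a STAC catalog such as the one behind VEDA.
# The endpoint URL and collection name below are hypothetical placeholders.
from pystac_client import Client

catalog = Client.open("https://example.com/api/stac")  # hypothetical endpoint

search = catalog.search(
    collections=["no2-monthly"],          # hypothetical collection id
    bbox=[-77.1, 38.8, -76.9, 39.0],      # area of interest (lon/lat)
    datetime="2020-01-01/2020-12-31",
)

for item in search.items():
    # Each item typically links to Cloud-Optimized GeoTIFF assets that can
    # be read remotely, e.g. with rasterio, without downloading whole files.
    print(item.id, list(item.assets))
```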
- Research Article · Cited by 15 · DOI 10.1208/s12248-012-9321-1 · Jan 19, 2012 · The AAPS Journal
The 21st Century Bioanalytical Laboratory Platforms initiative in the BIOTEC section of the American Association of Pharmaceutical Scientists (AAPS) began with a pre-conference workshop held in Seattle, WA in June of 2009. This workshop brought together members of the pharmaceutical and biotechnology industries with instrument and reagent manufacturers to discuss the current and potential future state of the bioanalytical laboratories supporting biologics development. At the conclusion of the workshop, four sub-teams were formed to further develop the ideas and concepts raised during the 2-day workshop: reagents, automation, e-solutions, and platforms. This paper discusses the critical attributes of a research and development ligand binding assay (LBA) platform and the desired characteristics new platforms should strive to offer in the future. It is not intended to be a review and comparison of the current platforms on the market, as this has been done and published elsewhere (1–9). The platforms team consists of a balanced cross-section of the industry, with representatives from pharmaceutical, biotechnology, contract research organizations, and instrument manufacturers; the team has collaborated to arrive at a consensus regarding the most useful characteristics of a bioanalytical platform for biologics, and we present here the results of these discussions. A platform is the technology employed in an analytical method to transduce a biochemical event into a measurable output or signal. This signal allows the bioanalytical scientist to accurately and reproducibly make measurements to analyze different aspects of a specific biologic target (therapeutic, biomarker, or anti-drug antibody), such as its pharmacokinetics, immunogenicity, potency, or effect on biomarkers. An instrument is, minimally, the tool used to measure a platform's output and convert the resultant signal into interpretable information for the analytical scientist, but it can incorporate other aspects such as liquid handling. Many platforms employ optical signals, including the absorbance of light through a medium (10) or the emission of fluorescence (10) or luminescence (11). A variety of light detectors are used to measure these optical signals, including photodiodes, charge-coupled device cameras, and photomultiplier tubes (Table I, 'Commonly Used Platforms'; not reproduced here). Sections in this paper provide details on the desirable analytical characteristics, multiplexing, platform flexibility and throughput, desirable instrument characteristics, and, finally, life cycle management of the ideal LBA platform. The analytical characteristics of today's ligand binding assays are primarily influenced by three major factors: the quality of the reagents, the assay format, and the choice of the analytical platform. This paper describes only those aspects derived from the analytical platform.
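To make "transducing a biochemical event into a signal" concrete: LBA calibration data are conventionally fit with a four-parameter logistic (4PL) model. The sketch below fits a 4PL to made-up calibrator readings and back-calculates an unknown; all numbers are illustrative, not from the paper.

```python
# Hedged sketch: fitting a four-parameter logistic (4PL) calibration curve,
# the standard model for ligand binding assay signal vs. concentration.
# All data values are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL: a = response at zero dose, d = response at infinite dose,
    c = inflection point (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])       # ng/mL calibrators
signal = np.array([0.05, 0.11, 0.32, 0.82, 1.55, 2.05, 2.25])  # optical density

params, _ = curve_fit(four_pl, conc, signal, p0=[0.05, 1.0, 5.0, 2.4])
a, b, c, d = params
print(f"EC50 ~ {c:.2f} ng/mL, slope ~ {b:.2f}")

# Back-calculate an unknown sample's concentration from its signal:
y = 1.0
x_unknown = c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)
print(f"signal {y} -> concentration ~ {x_unknown:.2f} ng/mL")
```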
- Research Article · Cited by 90 · DOI 10.1093/database/bau080 · Jan 1, 2014 · Database: The Journal of Biological Databases and Curation
Since 2002, information on individual microRNAs (miRNAs), such as reference names and sequences, has been stored in miRBase, the reference database for miRNA annotation. As a result of progressive insights into the miRNome and its complexity, miRBase underwent addition and deletion of miRNA records, changes in annotated miRNA sequences, and adoption of more complex naming schemes over time. Unfortunately, miRBase does not allow straightforward assessment of these ongoing miRNA annotation changes, which has resulted in substantial ambiguity regarding miRNA identity and sequence in the public literature, in target prediction databases, and in content on various commercially available analytical platforms. As a result, correct interpretation, comparison, and integration of miRNA study results are compromised, which we demonstrate here by assessing the impact of ignoring sequence annotation changes. To address this problem, we developed miRBase Tracker (www.mirbasetracker.org), an easy-to-use online database that keeps track of all historical and current miRNA annotation present in the miRBase database. Three basic functionalities allow researchers to keep their miRNA annotation up to date, reannotate analytical miRNA platforms, and link published results with outdated annotation to the latest miRBase release. We expect miRBase Tracker to increase transparency and annotation accuracy in the field of miRNA research. Database URL: www.mirbasetracker.org
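The reannotation workflow the tracker supports can be pictured as a lookup from historical identifiers to current miRBase records. The sketch below is purely illustrative; the mapping table and miRNA names are hypothetical stand-ins for the real annotation history, not miRBase Tracker's API.

```python
# Hedged sketch of reannotating an analytical platform's miRNA labels to a
# current miRBase release. The mapping table here is hypothetical and stands
# in for the historical record a service like miRBase Tracker maintains.

history = {
    # old label       -> (current name, status)
    "hsa-miR-000a":    ("hsa-miR-000a-5p", "renamed"),
    "hsa-miR-000b*":   ("hsa-miR-000b-3p", "renamed"),
    "hsa-miR-000c":    (None, "deleted"),  # record removed from miRBase
    "hsa-miR-000d":    ("hsa-miR-000d", "unchanged"),
}

def reannotate(platform_labels):
    """Map outdated probe labels to current annotation, flagging problems."""
    report = {}
    for label in platform_labels:
        current, status = history.get(label, (None, "unknown"))
        report[label] = {"current": current, "status": status}
    return report

for old, info in reannotate(["hsa-miR-000a", "hsa-miR-000c"]).items():
    print(old, "->", info["current"], f"({info['status']})")
```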
- Preprint Article · Cited by 3 · DOI 10.6084/m9.figshare.791638.v1 · Sep 6, 2013
With the proliferation of large, irregular, sparse relational datasets, new storage and analysis platforms have arisen to fill gaps in performance and capability left by conventional approaches built on traditional database technologies and query languages. Many of these platforms apply graph structures and analysis techniques to enable users to ingest, update, query, and compute on the topological structure of these relationships, represented as set(s) of edges between set(s) of vertices. To store and process Facebook-scale datasets, they must be able to support data sources with billions of edges, update rates of millions of updates per second, and complex analysis kernels. These platforms must also provide intuitive interfaces that enable both graph experts and novice programmers to write implementations of common graph algorithms. In this paper, we explore a variety of graph analysis and storage platforms, comparing their capabilities, interfaces, and performance by implementing and computing a set of real-world graph algorithms on synthetic graphs with up to 256 million edges. In the spirit of full disclosure, several authors are affiliated with the development of STINGER.
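For a flavor of the "common graph algorithms" such platforms are benchmarked on, here is a minimal, dependency-free connected-components kernel over an undirected edge list; it sketches the kind of computation the paper implements, not any specific platform's interface.

```python
# Minimal sketch of a common benchmark kernel: connected components via BFS
# over an undirected edge list. Illustrative only; real platforms such as
# STINGER use far more elaborate, parallel data structures.
from collections import defaultdict, deque

edges = [(0, 1), (1, 2), (3, 4)]  # toy graph: two components

adjacency = defaultdict(list)
for u, v in edges:
    adjacency[u].append(v)
    adjacency[v].append(u)

def connected_components(adjacency):
    seen, components = set(), []
    for source in list(adjacency):
        if source in seen:
            continue
        queue, component = deque([source]), []
        seen.add(source)
        while queue:
            node = queue.popleft()
            component.append(node)
            for neighbor in adjacency[node]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        components.append(component)
    return components

print(connected_components(adjacency))  # [[0, 1, 2], [3, 4]]
```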
- Research Article · Cited by 112 · DOI 10.1038/s41374-018-0123-7 · Jan 1, 2019 · Laboratory Investigation
Ki67 reproducibility using digital image analysis: an inter-platform and inter-operator study
- Research Article · Cited by 4 · DOI 10.1093/nar/gkz387 · May 20, 2019 · Nucleic Acids Research
As antibodies are a very important tool for diagnosis, therapy, and experimental biology, a large number of antibody structures and sequences have become available in recent years. Therefore, tools that allow the analysis, comparison, and visualization of this large amount of antibody data are crucially needed. We developed the antibody high-density alignment visualization and analysis (Yvis) platform to provide an innovative, robust and high-density data visualization of antibody sequence alignments, called Collier de Diamants. The Yvis platform also provides an integrated structural database, which is updated weekly, and many different search and filter options. This platform can help to formulate hypotheses concerning the key residues in antibody structures or interactions to improve the understanding of antibody properties. The Yvis platform is available at http://bioinfo.icb.ufmg.br/yvis/.
- Research Article · Cited by 105 · DOI 10.1016/j.earscirev.2022.104191 · Sep 19, 2022 · Earth-Science Reviews
Terrain is considered one of the most essential natural geographic features and is a key factor in physical processes. Geomorphometry and terrain analyses have provided a wealth of topographic data and corresponding tools, thus delivering insights into geomorphology, hydrology, soil science, and geographic information systems (GIS) in general. Recent advances in analysis theory, analysis methods, data-acquisition techniques and analysis platforms are impressive in their ability to interpret not only multiscale and multiaspect topographic characteristics but also the mechanisms and processes associated with terrain morphodynamics. In this context, we review progress in the fields of geomorphometry and terrain analysis, as well as the probable future paths of these two fields. In the data collection and construction processes, novel models and acquisition techniques can support the expression of complex terrain, and scholars have explored data-related challenges such as the accuracy and security of the utilized data. Terrain analyses have also been successful in constructing efficient analysis frameworks, transforming analysis units and methodologies, and highlighting the semantics of the analysis object as well as the continuity of Earth's surface processes. Moreover, terrain-related research and complex calculations have been aided by various analysis tools and platforms that have powerful and efficient processing capabilities. Furthermore, the application scopes of geomorphometry and terrain analysis have been broadened, especially in cross-analyses in which these techniques can be integrated with other disciplines.
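A canonical terrain-analysis computation of the kind these tools provide is deriving slope from a digital elevation model (DEM). Below is a minimal numpy sketch over a synthetic elevation grid; production GIS tools typically use refinements such as Horn's 3x3 method.

```python
# Hedged sketch: slope from a DEM via finite differences, a canonical
# geomorphometry operation. The elevation grid is synthetic.
import numpy as np

cell = 30.0  # grid spacing in metres (assumed uniform)
y, x = np.mgrid[0:50, 0:50]
dem = 500.0 + 0.5 * x * cell - 0.2 * y * cell  # synthetic tilted plane

dz_dy, dz_dx = np.gradient(dem, cell)          # central differences
slope_rad = np.arctan(np.hypot(dz_dx, dz_dy))  # steepest-descent slope
slope_deg = np.degrees(slope_rad)

print(f"mean slope: {slope_deg.mean():.1f} degrees")
# For this plane, the gradient magnitude is sqrt(0.5**2 + 0.2**2) ~ 0.539,
# i.e. a slope of about 28.3 degrees everywhere.
```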
- Research Article · Cited by 2 · DOI 10.1088/1757-899x/715/1/012029 · Jan 1, 2020 · IOP Conference Series: Materials Science and Engineering
To address the global information management requirements of intermodal containers, and in view of the current state of development, this paper proposes an IoT model based on LEO satellites and a big data management and analysis platform built on that model. The paper designs and analyzes the platform's architecture and proposes a middleware approach to solve the problem of multi-source heterogeneous data access from containers' intelligent terminals. It then designs business processes for the platform, such as user and device registration, middleware registration, historical data access, and timely data access. The big data analysis and management platform has been prototyped and verified; it offers compatibility, openness, and convenience.
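The middleware idea, normalizing messages from heterogeneous container terminals into a single schema before they reach the platform, can be sketched as below; the vendor formats and field names are invented for illustration, since the paper does not publish its interfaces.

```python
# Hedged sketch of a middleware adapter layer for multi-source heterogeneous
# terminal data. Formats and field names are hypothetical illustrations.
import json

def from_vendor_a(raw: bytes) -> dict:
    """Vendor A terminals send JSON with their own field names."""
    msg = json.loads(raw)
    return {"container_id": msg["cid"], "lat": msg["la"], "lon": msg["lo"],
            "temp_c": msg["t"]}

def from_vendor_b(raw: bytes) -> dict:
    """Vendor B terminals send a fixed-order CSV line."""
    cid, lat, lon, temp = raw.decode().strip().split(",")
    return {"container_id": cid, "lat": float(lat), "lon": float(lon),
            "temp_c": float(temp)}

ADAPTERS = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def ingest(source: str, raw: bytes) -> dict:
    """Middleware entry point: route raw payloads to the right adapter so
    the platform only ever sees one normalized record schema."""
    return ADAPTERS[source](raw)

print(ingest("vendor_a", b'{"cid": "MSCU1234567", "la": 31.2, "lo": 121.5, "t": 4.5}'))
print(ingest("vendor_b", b"TCLU7654321,51.9,4.1,5.0"))
```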
- Research Article · Cited by 80 · DOI 10.1016/j.trac.2014.05.004 · Jun 19, 2014 · TrAC Trends in Analytical Chemistry
Chemometrics in foodomics: Handling data structures from multiple analytical platforms
- Research Article · DOI 10.1016/j.ascom.2025.100990 · Oct 1, 2025 · Astronomy and Computing
- Research Article · DOI 10.1016/j.ascom.2025.100969 · Oct 1, 2025 · Astronomy and Computing
- Research Article · DOI 10.1016/j.ascom.2025.100999 · Oct 1, 2025 · Astronomy and Computing
- Research Article · DOI 10.1016/j.ascom.2025.100989 · Oct 1, 2025 · Astronomy and Computing
- Research Article · DOI 10.1016/j.ascom.2025.100993 · Oct 1, 2025 · Astronomy and Computing
- Research Article · DOI 10.1016/j.ascom.2025.100971 · Oct 1, 2025 · Astronomy and Computing
- Research Article · DOI 10.1016/j.ascom.2025.100986 · Oct 1, 2025 · Astronomy and Computing
- Research Article · DOI 10.1016/j.ascom.2025.100974 · Oct 1, 2025 · Astronomy and Computing
- Research Article · DOI 10.1016/j.ascom.2025.100973 · Oct 1, 2025 · Astronomy and Computing
- Research Article · DOI 10.1016/j.ascom.2025.100985 · Oct 1, 2025 · Astronomy and Computing