Fast DNA reports for investigative leads in casework practice: An automated workflow for mixture analysis using database searching based on probabilistic genotyping.
42
- 10.1016/j.fsigen.2019.06.005
- Jun 10, 2019
- Forensic Science International: Genetics
20
- 10.1016/j.fsigen.2019.03.024
- Apr 2, 2019
- Forensic Science International: Genetics
226
- 10.1016/j.fsigen.2015.11.008
- Nov 30, 2015
- Forensic Science International: Genetics
45
- 10.3390/genes12101559
- Sep 30, 2021
- Genes
1
- 10.1016/j.fsigss.2022.10.054
- Oct 20, 2022
- Forensic Science International: Genetics Supplement Series
20
- 10.1016/j.fsigen.2014.03.012
- Mar 30, 2014
- Forensic Science International: Genetics
14
- 10.1016/j.fsigen.2018.10.019
- Nov 2, 2018
- Forensic Science International: Genetics
9
- 10.1016/j.fsigen.2020.102390
- Sep 7, 2020
- Forensic Science International: Genetics
2
- 10.1016/j.fsigen.2023.102884
- May 2, 2023
- Forensic Science International: Genetics
24
- 10.1016/j.fsigen.2019.102150
- Aug 23, 2019
- Forensic Science International: Genetics
- Research Article
13
- 10.1016/j.scijus.2022.01.003
- Jan 20, 2022
- Science & Justice
Probabilistic genotyping of single cell replicates from complex DNA mixtures recovers higher contributor LRs than standard analysis
- Research Article
4
- 10.3390/genes14030674
- Mar 8, 2023
- Genes
Probabilistic genotyping (PG) and its associated software have greatly aided forensic DNA mixture analysis, primarily when applied to mixed DNA profiles obtained from bulk cellular extracts. However, these software applications do not always yield probative information about the identity of all donors to such mixtures/extracts. This is primarily due to mixture complexity caused by overlapping alleles and the presence of artifacts and minor donors. One way of reducing mixture complexity is to perform direct single-cell subsampling of the bulk mixture prior to genotyping and interpretation. The analysis of low-template DNA samples, including those from single or few cells, has also benefited from the application of PG methods. With PG, multiple cell subsamples originating from the same donor can be combined into a single analysis using the software's replicate analysis function, often recovering full donor profile information. In the present work, we demonstrate how two PG software systems, STRmix™ and EuroForMix, were successfully validated for single- or few-cell applications.
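To make the replicate-analysis idea concrete, the following is a minimal sketch assuming a simplified single-locus, drop-out-only model with invented allele frequencies and replicate calls; it is not the STRmix™ or EuroForMix implementation.

```python
# Illustrative sketch only: simplified single-locus, drop-out-only replicate model.
# NOT the STRmix(TM) or EuroForMix model; all numbers below are made up.
from itertools import combinations_with_replacement

ALLELE_FREQS = {"11": 0.25, "12": 0.35, "13": 0.30, "14": 0.10}
DROP_OUT = 0.3                                        # per-allele probability of failing to appear
REPLICATES = [{"12", "13"}, {"12"}, {"12", "13"}]     # alleles seen in each single-cell subsample

def lik_replicate(observed, genotype, d=DROP_OUT):
    """P(observed alleles | true genotype), ignoring drop-in."""
    if not observed <= set(genotype):                 # an observed allele not in the genotype
        return 0.0
    p = 1.0
    for allele in set(genotype):
        p *= (1.0 - d) if allele in observed else d
    return p

def lik_all_replicates(genotype):
    """Replicates are treated as conditionally independent given the genotype."""
    p = 1.0
    for rep in REPLICATES:
        p *= lik_replicate(rep, genotype)
    return p

# Hp: the person of interest (genotype 12,13) donated the subsampled cells.
numerator = lik_all_replicates(("12", "13"))

# Hd: an unknown, unrelated person donated them (Hardy-Weinberg genotype priors).
denominator = 0.0
for g in combinations_with_replacement(ALLELE_FREQS, 2):
    prior = ALLELE_FREQS[g[0]] ** 2 if g[0] == g[1] else 2 * ALLELE_FREQS[g[0]] * ALLELE_FREQS[g[1]]
    denominator += prior * lik_all_replicates(g)

print(f"single-locus LR = {numerator / denominator:.1f}")   # ~4.8 for these toy numbers
```

Combining the three replicates pins down the donor genotype despite drop-out in the second subsample, which is the intuition behind the gain from replicate analysis.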
- Research Article
7
- 10.1016/j.fsigen.2022.102738
- Jun 8, 2022
- Forensic Science International: Genetics
Development and validation of a fast and automated DNA identification line
- Research Article
8
- 10.3390/genes13091658
- Sep 15, 2022
- Genes
Analysis of complex DNA mixtures composed of related individuals requires a great degree of care due to the increased risk of falsely including non-donor first-degree relatives. Although alternative likelihood ratio (LR) propositions that may aid in the analysis of these difficult cases can be employed, the prior information required for their use is not always known, nor do these alternative propositions always prevent false inclusions. For example, with a father/mother/child mixture, conditioning the mixture on the presence of one of the parents is recommended. However, the definitive presence of the parent(s) is not always known, and an assumption of their presence in the mixture may not be objectively justifiable. Additionally, the high level of allele sharing seen with familial mixtures leads to an increased risk of underestimating the number of contributors (NOC) to a mixture. Therefore, fully resolving and identifying each of the individuals present in familial mixtures and excluding related non-donors is an important goal of the mixture deconvolution process and can be of great investigative value. Here, we first further investigated and confirmed the problems encountered with standard bulk analysis of familial mixtures and demonstrated the ability of single-cell analysis to fully distinguish first-degree relatives. Then, separation of each of the individual donors via single-cell analysis was carried out by a combination of direct single cell subsampling (DSCS), enhanced DNA typing, and probabilistic genotyping, and applied to three complex familial 4-person mixtures, resulting in a probative gain in LR for all donors and an accurate determination of the NOC. Significantly, non-donor first-degree relatives that were falsely included (LRs > 10²–10⁸) by the standard bulk sampling and analysis approach were no longer falsely included using DSCS.
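To illustrate why related non-donors are so easily swept in, the sketch below is a single-locus calculation using standard full-sibling identity-by-descent coefficients (1/4, 1/2, 1/4) with invented allele frequencies; it is not the study's probabilistic genotyping model.

```python
# Illustrative sketch: chance that a non-donor full sibling carries the same
# genotype as the true donor at one locus, versus an unrelated individual.
def genotype_prob(a, b, freqs):
    """Hardy-Weinberg genotype probability for an unrelated individual."""
    return freqs[a] ** 2 if a == b else 2 * freqs[a] * freqs[b]

def sibling_match_prob(a, b, freqs):
    """P(full sibling has genotype {a, b} | donor has {a, b}), via IBD 0/1/2."""
    pa, pb = freqs[a], freqs[b]
    if a == b:  # homozygote aa
        return 0.25 * pa ** 2 + 0.5 * pa + 0.25
    # heterozygote ab: 0 IBD -> 2*pa*pb, 1 IBD -> (pa + pb)/2, 2 IBD -> 1
    return 0.25 * (2 * pa * pb) + 0.5 * (pa + pb) / 2 + 0.25

freqs = {"15": 0.10, "17": 0.20}      # made-up allele frequencies
donor = ("15", "17")

unrelated = genotype_prob(*donor, freqs)
sibling = sibling_match_prob(*donor, freqs)
print(f"P(same genotype): unrelated = {unrelated:.3f}, full sibling = {sibling:.3f}")
# Over ~20 loci this gap compounds, which is why related non-donors can reach
# large LRs under propositions that assume unrelated alternative donors.
```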
- Research Article
1
- 10.1016/j.fsigss.2019.10.101
- Oct 17, 2019
- Forensic Science International: Genetics Supplement Series
Completion of the MIX 13 case study by evaluation of mock mixtures with the probabilistic genotyping software GenoProof® Mixture 3
- Research Article
5
- 10.2139/ssrn.3727899
- Jan 1, 2020
- SSRN Electronic Journal
The evolving field of machine learning and artificial intelligence is frequently presented as a positively disruptive branch of data science whose expansion allows for improvements in the speed, efficiency, and reliability of decision-making, and whose potential is being felt across diverse zones of human activity. A particular focus for development is within the criminal justice sector, and more particularly the field of international criminal justice, where AI is presented as a means to filter evidence from digital media, to perform visual analyses of satellite data, or to conduct textual analyses of judicial reporting datasets. Nonetheless, for all of its potential, the deployment of forensic machine learning and AI may also generate seemingly insoluble challenges. The critical discourse attendant upon the expansion of automated decision-making, and its social and legal consequences, revolves around two interpenetrating issues: algorithmic bias and algorithmic opacity, the latter phenomenon being the focus of this study. It is posited that the seemingly intractable evidential challenges associated with the introduction of opaque computational machine learning algorithms, though global in nature, are neither novel nor unfamiliar. Indeed, throughout the past decade and across a multitude of jurisdictions, criminal justice systems have been required to respond to the implementation of opaque forensic algorithms, particularly in relation to complex DNA mixture analysis. Therefore, with the objective of highlighting the potential avenues of challenge which may follow from the introduction of forensic AI, this study focuses on the prior experience of litigating, and regulating, probabilistic genotyping algorithms within the forensic science and criminal justice fields. Crucially, the study proposes that machine learning opacity constitutes an enhanced form of algorithmic opacity. Therefore, the challenges to rational fact-finding generated through the use of probabilistic genotyping software may be encountered anew, and exacerbated, through the introduction of forensic AI. In anticipating these challenges, the paper explores the distinct categories of opacity and suggests collaborative solutions which may empower contemporary legal academics, as well as legal and forensic practitioners, to set more rigorous and usable standards. The paper concludes by considering the ways in which academics, forensic scientists, and legal practitioners, particularly those working in the field of international criminal justice, might re-conceptualize these opaque technologies, opening a new field of critique and analysis. Using findings from case analyses, overarching regulatory guidance, and data drawn from empirical research interviews, this article addresses the validity, transparency, and interpretability problems, leading to a comprehensive assessment of the current challenges facing the introduction of forensic AI. It builds upon work undertaken at the Nuffield Council on Bioethics Horizon Scanning Workshop: The future of science in crime and security (5th July 2019, London).
- Research Article
9
- 10.1111/1556-4029.15150
- Oct 1, 2022
- Journal of Forensic Sciences
Since Y-STR typing amplifies only male Y-chromosomal DNA, it can simplify the interpretation of some DNA mixtures that contain female DNA. However, if there are multiple male contributors, mixed Y-STR DNA profiles will often be obtained. Such Y-STR mixture cases are particularly challenging because there are currently no validated probabilistic genotyping (PG) software solutions commercially available to aid in their interpretation. One approach to fully deconvoluting these challenging mixtures into their individual donors is to conduct single-cell genotyping by isolating individual cells from a mixture prior to DNA typing. In this work, a physical micromanipulation technique involving a tungsten needle and direct PCR with decreased reaction volume and increased cycle number was applied to equimolar 2- and 3-person male buccal cell DNA mixtures and to a mock touch DNA case scenario involving the consecutive firing of a handgun by two males. A consensus DNA profiling approach was then used to obtain YFiler™ Plus Y-STR haplotypes. Buccal cells were used to optimize and test the direct single-cell subsampling approach, and the 2- and 3-person male buccal cell mixtures were fully deconvoluted into their individual donor Y-STR haplotypes. Single-cell (or agglomerated cell clump) subsampling from the gun's trigger recovered single-source Y-STR profiles from both individuals who fired the gun (the owner and the other, unrelated male). Only the non-owner's DNA was found in the cells recovered from the handle. In summary, direct single-cell subsampling as described represents a potentially simple way to analyze and interpret Y-STR mixtures.
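A consensus profile of this kind can be sketched as below; the rule used (keep an allele only if it appears in at least two replicates) and the marker calls are assumptions for illustration, not necessarily the study's exact consensus criteria.

```python
# Illustrative sketch of consensus Y-STR haplotyping from replicate
# low-template typings; markers and allele calls are made up.
from collections import Counter

replicates = [
    {"DYS19": {"14"}, "DYS390": {"24"}, "DYS391": {"10", "11"}},
    {"DYS19": {"14"}, "DYS390": set(),  "DYS391": {"11"}},
    {"DYS19": {"14"}, "DYS390": {"24"}, "DYS391": {"11"}},
]

def consensus(reps, min_count=2):
    """Keep, at each marker, only alleles observed in at least min_count replicates."""
    markers = set().union(*(r.keys() for r in reps))
    result = {}
    for m in sorted(markers):
        counts = Counter(a for r in reps for a in r.get(m, set()))
        result[m] = {a for a, n in counts.items() if n >= min_count}
    return result

print(consensus(replicates))
# {'DYS19': {'14'}, 'DYS390': {'24'}, 'DYS391': {'11'}}
# The spurious '10' call (drop-in or artifact) is filtered out because it
# appears in only one replicate.
```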
- Research Article
2
- 10.1007/s11306-019-1603-5
- Oct 1, 2019
- Metabolomics : Official journal of the Metabolomic Society
Mass spectrometric data analysis of complex biological mixtures can be challenging because of the vastness of the datasets, and data treatment pipelines for separating chemical signal from noise are lacking; these tasks have so far been left to the discretion of the analyst. The aim of this work is to demonstrate an analytical workflow, based on serial dilution of a botanical complex mixture and high-dimensional data analysis, that enhances confidence in metabolomics results before biological questions are addressed. We also provide an alternative to a univariate p-value cutoff from a t-test for the blank subtraction procedure between negative control and biological samples. A serial dilution of a complex mixture analyzed under electrospray ionization was proposed to study the chemical complexity of metabolomics data firsthand. Advanced statistical models using high-dimensional penalized regression were employed to study both the concentration-intensity relationship and the ion-ion relationships within sub-datasets taken per second of retention time. The multivariate analysis was carried out with a tool built in-house, called metabolite ions extraction and visualization, implemented in the R environment. A test case on the medicinal plant goldenseal (Hydrastis canadensis L.) showed an increase in metabolome coverage for features deemed "important" by the multivariate analysis compared with features deemed "significant" by a univariate t-test. As an illustration, the data analysis workflow suggested an unexpected putative compound, 20-hydroxyecdysone; this suggestion was confirmed by MS/MS acquisition and a literature search. The multivariate analytical workflow selects "true" metabolite ion signals and provides an alternative to a univariate p-value cutoff from a t-test, thus enhancing the metabolomics data analysis process.
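The contrast between a univariate t-test blank filter and a penalized-regression route can be sketched as follows; this is a toy simulation, not the authors' in-house R tool, and scikit-learn's LassoCV merely stands in for a high-dimensional penalized regression on a serial-dilution design.

```python
# Toy simulation: univariate t-test filtering vs. lasso feature selection
# across a simulated serial-dilution series (all data invented).
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
dilution = np.repeat([0.0, 0.25, 0.5, 1.0], 3)   # relative concentration, 3 injections each
n_features = 50

# Features 0-9 scale with concentration (true signals); the rest are noise.
X = rng.normal(0, 1, (dilution.size, n_features))
X[:, :10] += 5 * dilution[:, None]

# Univariate route: per-feature t-test, blank (0.0) vs. highest dilution level.
blank, sample = X[dilution == 0.0], X[dilution == 1.0]
pvals = np.array([ttest_ind(sample[:, j], blank[:, j]).pvalue for j in range(n_features)])
t_hits = np.where(pvals < 0.05)[0]

# Multivariate route: lasso regression of concentration on all features at once.
lasso = LassoCV(cv=3).fit(X, dilution)
lasso_hits = np.where(lasso.coef_ != 0)[0]

print("t-test hits:", t_hits)
print("lasso hits :", lasso_hits)
```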
- Research Article
1
- 10.30744/brjac.2179-3425.tn-137-2024
- Feb 21, 2025
- Brazilian Journal of Analytical Chemistry
Explosives are widely utilized across various legal activities, including military operations, law enforcement, mining, and construction. Unfortunately, they are also involved in illegal acts such as terrorism, robbery, and vandalism. The analysis of explosives and post-explosion residues plays a crucial role in forensic chemistry, aiding investigations into explosive-related crimes. This analysis aims to identify the explosive involved in a criminal act. It aids in determining the cause of the explosion, assessing potential clandestine facilities, establishing authorship, and identifying trends for criminal purposes. One widely used class of explosive, not only in Brazil but also worldwide, is fuel-oxidizer mixtures, such as black powder, explosive emulsions, and mixtures based on chlorate and/or perchlorate salts. These explosives are composed of substances with significantly different chemical natures. As a result, their complete identification generally requires the application of various analytical techniques. For this purpose, in addition to direct analyses of the mixtures, identification often involves procedures for separating their components and analyzing each of them with appropriate analytical techniques. This separation is typically achieved through simple solvent extractions, often using water and organic solvents. The aqueous and organic extracts, along with any insoluble fractions, are then analyzed, as outlined in this study. This work presents a simple workflow for sample preparation and analysis of bulk fuel-oxidizer explosive mixtures, which are frequently encountered in criminal activities globally, including in Brazil. The workflow employs various analytical techniques, including Gas Chromatography-Mass Spectrometry (GC-MS), Fourier Transform Infrared Spectroscopy (FTIR), Raman Spectroscopy, Ion Chromatography (IC), Laser-Induced Breakdown Spectroscopy (LIBS), Scanning Electron Microscopy with Energy-Dispersive X-ray Spectroscopy (SEM/EDS), and X-ray Diffraction (XRD). This procedure provides a practical guide for forensic laboratories, enhancing their ability to analyze commonly encountered explosive samples with precision and reliability.
- Research Article
- 10.1289/ehp16791
- Jun 17, 2025
- Environmental health perspectives
Human exposure to complex, changing, and variably correlated mixtures of environmental chemicals has presented analytical challenges to epidemiologists and human health researchers. There has been a wide variety of recent advances in statistical methods for analyzing mixtures data, with most methods having open-source software for implementation. However, there is no one-size-fits-all method for analyzing mixtures data given the considerable heterogeneity in scientific focus and study design. For example, some methods focus on predicting the overall health effect of a mixture and others seek to disentangle main effects and pairwise interactions. Some methods are only appropriate for cross-sectional designs, while other methods can accommodate longitudinally measured exposures or outcomes. This article focuses on simplifying the task of identifying which methods are most appropriate to a particular study design, data type, and scientific focus. We present an organized workflow for statistical analysis considerations in environmental mixtures data and two example applications implementing the workflow. This systematic strategy builds on epidemiological and statistical principles, considering specific nuances for the mixtures' context. We also present an accompanying methods repository to increase awareness of and inform application of existing methods and new methods as they are developed. We note several methods may be equally appropriate for a specific context. This article does not present a comparison or contrast of methods or recommend one method over another. Rather, the presented workflow can be used to identify a set of methods that are appropriate for a given application. Accordingly, this effort will inform application, educate researchers (e.g., new researchers or trainees), and identify research gaps in statistical methods for environmental mixtures that warrant further development. https://doi.org/10.1289/EHP16791.
- Research Article
4
- 10.1038/s41596-024-01091-y
- Jan 2, 2025
- Nature protocols
Individual ion mass spectrometry (I²MS) is the Orbitrap-based extension of the niche mass spectrometry technique known as charge detection mass spectrometry (CDMS). While traditional CDMS analysis is performed on in-house-built instruments such as the electrostatic linear ion trap, I²MS extends CDMS analysis to Orbitrap analyzers, making charge detection analysis available to the scientific community at large. I²MS simultaneously measures the mass-to-charge ratios (m/z) and charges (z) of hundreds to thousands of individual ions within one acquisition event, creating a spectral output directly in the mass domain without the need for further spectral deconvolution. A mass distribution or 'profile' can be created for any desired sample regardless of composition or heterogeneity. To assist in reducing I²MS analysis to practice, we developed this workflow for data acquisition and subsequent data analysis, which includes (i) protein sample preparation, (ii) attenuation of ion signals to obtain individual ions, (iii) the creation of a charge-calibration curve from standard proteins with known charge states, and finally (iv) producing a meaningful mass spectral output from a complex or unknown sample by using the STORIboard software. This protocol is suitable for users with prior experience in mass spectrometry and bioanalytical chemistry. First, the analysis of protein standards in native and denaturing mode is presented, setting the foundation for the analysis of complex mixtures that are intractable via traditional mass spectrometry techniques. Examples of complex mixtures included here demonstrate the relevant analysis of an intact human monoclonal antibody and its intricate glycosylation patterns.
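The per-ion arithmetic that turns these readouts into a mass profile is small; the sketch below uses made-up m/z and charge values and assumes protonated ions, and is only meant to show the conversion and histogramming step, not the STORIboard processing.

```python
# Illustrative sketch: convert individual-ion (m/z, z) readouts to neutral
# masses and bin them into a mass "profile" (all values invented).
import numpy as np

PROTON = 1.00728  # Da, mass of the charge-carrying proton

ions = np.array([        # (m/z, inferred charge z) for a few individual ions
    (2940.2, 51),
    (2883.7, 52),
    (2829.3, 53),
])

mz, z = ions[:, 0], ions[:, 1]
neutral_mass = z * mz - z * PROTON        # M = z*(m/z) - z*m_proton
print(np.round(neutral_mass, 1))          # each ion lands near ~149.9 kDa, an IgG-scale mass

# The mass profile is then simply a histogram over all per-ion masses.
hist, edges = np.histogram(neutral_mass, bins=5)
```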
- Research Article
1
- 10.1007/978-1-0716-1178-4_10
- Jan 1, 2021
- Methods in molecular biology (Clifton, N.J.)
Metaproteomics of host-microbiome interfaces comprises the analysis of complex mixtures of bacteria, archaea, fungi, and viruses in combination with host cells. Microbial niches can be found all over the host, including the skin, oral cavity, and intestine, and are considered essential for homeostasis. The complex interactions between the host and the diverse commensal microbiota are poorly characterized yet of great interest, as dysbiosis is associated with the development of various inflammatory and metabolic diseases. The metaproteomics workflows to study these interfaces are still being established, and many challenges remain. The major challenge is the large diversity of species that make up the microbiota, which results in complex samples requiring extended mass spectrometry analysis time. In addition, current database search strategies are not designed for the size of the search space required for unbiased microbial protein identification. Here, we describe a workflow for the proteomics analysis of microbial niches, with a focus on the intestinal mucus layer. We cover, step by step, sample collection, sample preparation, liquid chromatography-mass spectrometry, and data analysis.
- Research Article
25
- 10.1021/ac400851x
- May 16, 2013
- Analytical Chemistry
Despite tremendous inroads in the development of more sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) strategies for mass spectrometry-based proteomics, there remains a significant need for enhancing the selectivity of MS/MS-based workflows for streamlined analysis of complex biological mixtures. Here, a novel LC-MS/MS platform based on 351 nm ultraviolet photodissociation (UVPD) is presented for the selective analysis of cysteine-peptide subsets in complex protein digests. Cysteine-selective UVPD is mediated through the site-specific conjugation of reduced cysteine residues with a 351 nm active chromogenic Alexa Fluor 350 (AF350) maleimide tag. Only peptides containing the AF350 chromophore undergo photodissociation into extensive arrays of b- and y-type fragment ions, thus providing a facile means for differentiating cysteine-peptide targets from convoluting peptide backgrounds. With the use of this approach in addition to strategic proteolysis, the selective analysis of diagnostic heavy-chain complementarity determining regions (CDRs) of single-chain antibody (scAb) fragments is demonstrated.
- Research Article
1
- 10.1093/jaoacint/qsad096
- Sep 13, 2023
- Journal of AOAC International
In response to the growing global need for pesticide residue testing, laboratories must develop versatile analytical methods and workflows to produce scientifically sound results. One of the many challenges faced by food chemists is acquiring suitable pesticide certified reference materials (CRMs) to calibrate analytical equipment, monitor method performance, and confirm the identity and concentration of hundreds of pesticide residues in food samples. CRM producers invest considerable resources to ensure the stability of their products. This work presents proper CRM handling and storage practices as guidance for ensuring stability, based on the results of several multiresidue pesticide stability studies. The open ampoule and combined multiresidue mix studies were conducted under controlled conditions. New ampoules containing multiresidue pesticide CRM mixtures were opened and compared to previously opened ampoules at multiple intervals while stored under freezing and refrigerated temperatures. Both LC- and GC-amenable pesticides (>200 residues) were combined and stored under typical laboratory conditions. Studies were performed with and without celery matrix. The open ampoule study showed high levels of stability for all mixtures. All GC residues remained stable over the duration of the experiment. A week after opening, the LC multiresidue pesticide mixtures showed minor degradation. After combination of the multiresidue pesticide mixtures, degradation occurred rapidly for both the GC and LC mixtures. Multiresidue pesticide mixtures are stable in their sealed ampoules until they are opened. Once the contents of a kit were opened and combined, decreasing stability was observed over time. This was true for both the LC and GC kits. Working mixtures of CRMs for instrument calibration should be made daily. This article shows a novel approach for measuring the stability of CRM mixes. In-depth analysis of multiresidue pesticide mixtures, and the stability that can be expected before and after mixing under typical storage conditions, is described.
- Research Article
6
- 10.1016/j.fsigen.2023.102908
- Jun 22, 2023
- Forensic Science International: Genetics
Carrying out common DNA donor analysis using DBLR™ on two or five-cell mini-mixture subsamples for improved discrimination power in complex DNA mixtures