Increased attention to the development of modern approaches to the probabilistic evaluation of research results in forensic science is driven by the ongoing critical analysis of the current state of the discipline and by the requirement to characterize the limitations of research results clearly, including measures of uncertainty of the data obtained and the associated estimated probabilities. One of the fundamental provisions of the modern theory of evaluating judicial evidence, alongside the admissibility and appropriateness of using probabilities, is the principle of comparing these probabilities conditioned on the competing propositions that arise from the adversarial nature of justice. Accordingly, the purpose of this article is to develop methodological approaches to the use of the likelihood ratio as the most appropriate form of expressing the evidential significance of the conclusions an expert submits to the court. The empirical basis of the article is a brief review of publications from 2000 to 2018 devoted to the application of the likelihood-ratio concept in forensic practice. According to many researchers, this concept makes it possible to assess the reliability of evidence realistically. In legal proceedings, evidence is generally understood as information about facts, obtained in accordance with the procedure provided for by law, on the basis of which the presence or absence of circumstances important for the proper consideration and resolution of a case is established. In this publication, the term "evidence" is viewed through an expert-technological prism and is treated as quantitative continuous measurements (properties and characteristics of objects of forensic examination) used to compare a known and a questioned sample in order to decide whether they originate from the same source or from different sources. The article discusses the normal distribution, the most common model for continuous data, and a general approach to calculating the likelihood ratio (LR) using probability density functions (pdf). It is shown that, to account for the variability of the compared samples, three databases are required to calculate the LR: a potential (background) database, a control database of the known sample, and a comparative database of the questioned sample. Examples of calculating the LR and the strength of evidence for various types of examinations are given. The procedures for calculating the LR are generally the same, but the authors propose different techniques for calculating and graphically representing the strength of evidence. The publications describe in more detail the so-called cost, or penalty, for an incorrect forecast (Cllr) and introduce its trueness and reproducibility as well as its confidence interval. The article also highlights a number of features of calculating the LR for multivariate continuous data. Of particular interest is the speaker model used in the forensic examination of sound recordings, represented as a weighted sum of M Gaussian density components (a Gaussian mixture model, GMM). Each density component in this sum is a D-dimensional Gaussian pdf with its own mean vector and covariance matrix. It can be assumed that the use of GMM pdfs in LR calculations is effective not only for forensic speaker recognition but also for other types of examinations.
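To make the three-database approach described above concrete, a minimal sketch in Python follows. It assumes a single normally distributed feature; the function name, the synthetic data, and all numerical values are illustrative assumptions and are not taken from the reviewed publications.

```python
import numpy as np
from scipy.stats import norm

def likelihood_ratio(questioned, known, background):
    """Simplified LR for one continuous feature.

    Numerator:   fit of the questioned measurements to the distribution
                 estimated from the known sample (control database).
    Denominator: fit of the same measurements to the background
                 population (potential database).
    """
    mu_k, sd_k = np.mean(known), np.std(known, ddof=1)
    mu_b, sd_b = np.mean(background), np.std(background, ddof=1)
    # Product of pointwise densities; work in log space for numerical stability.
    log_num = norm.logpdf(questioned, mu_k, sd_k).sum()
    log_den = norm.logpdf(questioned, mu_b, sd_b).sum()
    return np.exp(log_num - log_den)

# Purely synthetic data (e.g. refractive indices of glass fragments).
rng = np.random.default_rng(0)
background = rng.normal(1.5180, 0.0040, size=500)   # potential (background) database
known      = rng.normal(1.5192, 0.0004, size=10)    # control database (known sample)
questioned = rng.normal(1.5192, 0.0004, size=5)     # comparative database (questioned sample)

lr = likelihood_ratio(questioned, known, background)
print(f"LR = {lr:.2f}")   # LR > 1 supports same-source origin, LR < 1 different sources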
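The cost, or penalty, for an incorrect forecast (Cllr) mentioned above can be computed for a validation set with known ground truth; the sketch below uses the commonly cited definition, and the example LR values are invented for illustration only.

```python
import numpy as np

def cllr(lr_same_source, lr_different_source):
    """Cost of the log-likelihood ratio (Cllr): a penalty that grows when
    same-source comparisons receive small LRs and different-source
    comparisons receive large LRs."""
    lr_ss = np.asarray(lr_same_source, dtype=float)
    lr_ds = np.asarray(lr_different_source, dtype=float)
    return 0.5 * (np.mean(np.log2(1.0 + 1.0 / lr_ss)) +
                  np.mean(np.log2(1.0 + lr_ds)))

# Illustrative LRs from validation comparisons with known ground truth.
print(cllr([20.0, 8.0, 150.0], [0.05, 0.3, 0.01]))  # values well below 1 indicate an informative system
```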
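For the GMM-based speaker model, a simplified sketch using scikit-learn is given below. The data and feature dimension are synthetic assumptions, and the speaker model is fitted independently here for brevity; in practical systems it is usually derived from the background model by MAP adaptation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
D = 2  # feature dimension (e.g. a pair of acoustic features)

background = rng.normal(0.0, 1.0, size=(2000, D))          # potential (background) database
known      = rng.normal([0.8, -0.5], 0.6, size=(200, D))   # control database (known speaker)
questioned = rng.normal([0.8, -0.5], 0.6, size=(50, D))    # comparative database (questioned recording)

# Background GMM over the population and a speaker-specific GMM over the known sample.
ubm = GaussianMixture(n_components=8, covariance_type='diag', random_state=0).fit(background)
spk = GaussianMixture(n_components=8, covariance_type='diag', random_state=0).fit(known)

# score_samples returns the log density of each observation under the mixture;
# the difference of average log densities is a log-LR per observation.
log_lr = spk.score_samples(questioned).mean() - ubm.score_samples(questioned).mean()
print(f"average log-LR per observation: {log_lr:.3f}")  # > 0 favours the same-source proposition
```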
The universality of assessing the similarity or difference of objects of forensic research by means of the likelihood ratio indicates good prospects for applying the concept.