Dynamic signatures: a mathematical approach to analysis.

Abstract

This study evaluates mathematical tools (principal component analysis, dynamic time warping, and the Kolmogorov–Smirnov hypothesis test) for analysing global and local data from dynamic signatures, with the aim of reducing subjectivity and increasing the reproducibility of handwriting examination through a two-step approach. A dataset of 1800 genuine signature samples, 870 simulated signatures, and 60 disguises (30 formally similar, or "autosimulated", and 30 random but different from the usual signature), provided by 30 volunteers, was collected. The first step involved global data analysis: principal component analysis and a hypothesis test were performed for 62 global characteristics, and associations of these characteristics were analysed through multivariate distance calculations followed by a hypothesis test. The second step involved point-to-point analysis of local characteristics, including vertical and horizontal position, speed, pressure gradient, acceleration, and jerk, using dynamic time warping followed by a hypothesis test. The sensitivity and specificity of the hypothesis test were optimized by varying its stringency and observing accuracy rates for the simulated and genuine groups. A p-value threshold of 1 × 10⁻¹⁰ was found to be optimal, making the test more restrictive and yielding accuracy rates of 96.7% for genuine global data and 88.9% for simulated data. The same cut-off for local characteristics provided average accuracy rates of 95.4% for genuine samples and 94.7% for simulated samples, demonstrating high accuracy for both groups. However, the method did not offer reasonable accuracy rates for disguises, consistent with observations in traditional handwriting examination. Our approach provided satisfactory results for forensic examination use; visualization of the graphs and signatures and analysis of all identifying elements of handwriting by the examining expert remain essential. In future studies, we plan to perform blind tests to validate our approach and propose a rigorous methodology.
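As a rough illustration of the two ingredients of the second step, the sketch below computes a DTW distance between two hypothetical local-characteristic series (e.g. resampled pen pressure) and then applies a Kolmogorov–Smirnov test with the abstract's 1 × 10⁻¹⁰ cut-off. The signal shapes, lengths, and variable names are invented for illustration, not taken from the study.

```python
import numpy as np
from scipy.stats import ks_2samp

def dtw_distance(a, b):
    """Classic O(n*m) dynamic time warping distance between 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return float(D[n, m])

# Invented local characteristic (e.g. resampled pen pressure) from a
# reference and a questioned signature.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0 * np.pi, 200)
reference = np.sin(x) + 0.05 * rng.normal(size=200)
questioned = np.sin(x) + 0.05 * rng.normal(size=200)
print("DTW distance:", dtw_distance(reference, questioned))

# KS test with the abstract's restrictive cut-off: flag the questioned
# sample only when the p-value falls below 1e-10.
stat, p = ks_2samp(reference, questioned)
print("consistent with genuine" if p >= 1e-10 else "flagged as different")
```

With real data, each local characteristic (position, speed, pressure gradient, acceleration, jerk) would be compared in the same way against a reference collection.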

Similar Papers
  • Research Article
  • Cited by 2
  • 10.14710/geoplanning.4.1.19-26
COORDINATE TRANSFORMATION USING FEATHERSTONE AND VANÍČEK PROPOSED APPROACH - A CASE STUDY OF GHANA GEODETIC REFERENCE NETWORK
  • Sep 29, 2016
  • Geoplanning: Journal of Geomatics and Planning
  • Yao Yevenyo Ziggah + 3 more

Most developing countries, Ghana included, are yet to adopt a geocentric datum for their surveying and mapping purposes. It is well known and documented that non-geocentric datums, owing to how they were established, show larger distortions in height than satellite datums. Most authors have argued that combining such heights with horizontal positions (latitude and longitude) in the transformation process could introduce unwanted distortions to the network, because the local geodetic height is in most cases determined to a lower accuracy than the horizontal positions. In light of this, Featherstone and Vaníček (1999) proposed a transformation model that avoids the use of height in both the global and local datums. It was confirmed that adopting such a method reduces the effect of distortions caused by geodetic height on the estimated transformation parameters. This paper therefore applies the Featherstone and Vaníček (FV) model for the first time to a set of common-point coordinates in the Ghana geodetic reference network, transforming coordinates from the global datum (WGS84) to the local datum (Accra datum). The results, assessed with the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) in both Eastings and Northings, were satisfactory: RMSE values of 0.66 m and 0.96 m were obtained for the Eastings and Northings, with MAE values of 0.76 m and 0.73 m, and the FV model attained an overall transformation accuracy of 0.49 m. This study will therefore serve as a preliminary investigation into avoiding the use of height in coordinate transformation within Ghana's geodetic reference network.
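The RMSE and MAE figures quoted above are standard residual summaries over the transformed coordinates; a small sketch (with invented coordinate values, not the Ghanaian network data):

```python
import numpy as np

def rmse(pred, true):
    """Root mean square error of the residuals."""
    return float(np.sqrt(np.mean((pred - true) ** 2)))

def mae(pred, true):
    """Mean absolute error of the residuals."""
    return float(np.mean(np.abs(pred - true)))

# Hypothetical transformed vs. reference local-datum Eastings (metres).
east_pred = np.array([662310.4, 663120.9, 661987.2])
east_true = np.array([662309.9, 663121.6, 661986.4])

print("Eastings RMSE:", rmse(east_pred, east_true))
print("Eastings MAE:", mae(east_pred, east_true))
```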

  • Conference Article
  • Cited by 36
  • 10.1109/ccst.2007.4373463
On-Line Signature Verification by Dynamic Time Warping and Gaussian Mixture Models
  • Oct 1, 2007
  • Oscar Miguel-Hurtado + 3 more

The handwritten signature is the most widespread means of personal identification, and much work has been carried out to achieve reasonable error rates in online automatic signature verification. Most of the matching algorithms used work by feature extraction. This paper analyses the discriminative power of the features that can be extracted from an online signature and shows how that discriminative power can be increased by using dynamic time warping as a preprocessing step on the signal coming from the tablet. It also covers the influence of this new step on the performance of the Gaussian mixture model algorithm, which recent studies have shown to be successful for online automatic signature verification. A complete experimental evaluation of the algorithm based on dynamic time warping and Gaussian mixture models was conducted on 2500 genuine signature samples and 2500 skilled forgery samples from 100 users, all included in the publicly accessible MCyT-Signature-Corpus database.
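A hedged sketch of the GMM scoring stage only (the paper's DTW preprocessing is omitted here); the feature layout, component count, and all numbers are illustrative assumptions, using scikit-learn's GaussianMixture:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical per-sample feature vectors (e.g. x, y, pressure) pooled from
# a user's genuine enrolment signatures.
genuine_feats = rng.normal(loc=[0.0, 0.0, 0.5], scale=0.1, size=(500, 3))
gmm = GaussianMixture(n_components=4, random_state=0).fit(genuine_feats)

# Score a claimed genuine sample and a crude forgery against the user model.
claimed = rng.normal(loc=[0.0, 0.0, 0.5], scale=0.1, size=(120, 3))
forgery = rng.normal(loc=[0.3, -0.2, 0.8], scale=0.1, size=(120, 3))
print("claimed mean log-likelihood:", gmm.score(claimed))
print("forgery mean log-likelihood:", gmm.score(forgery))
# In verification, accept when the score clears a user-specific threshold,
# typically tuned to an equal error rate on a development set.
```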

  • Research Article
  • Cited by 18
  • 10.5194/gmd-8-295-2015
Multi-site evaluation of the JULES land surface model using global and local data
  • Feb 13, 2015
  • Geoscientific Model Development
  • D Slevin + 2 more

Abstract. This study evaluates the ability of the JULES land surface model (LSM) to simulate photosynthesis using local and global data sets at 12 FLUXNET sites. Model parameters include site-specific (local) values for each flux tower site and the default parameters used in the Hadley Centre Global Environmental Model (HadGEM) climate model. Firstly, gross primary productivity (GPP) estimates from driving JULES with data derived from local site measurements were compared to observations from the FLUXNET network. When using local data, the model is biased with total annual GPP underestimated by 16% across all sites compared to observations. Secondly, GPP estimates from driving JULES with data derived from global parameter and atmospheric reanalysis (on scales of 100 km or so) were compared to FLUXNET observations. It was found that model performance decreases further, with total annual GPP underestimated by 30% across all sites compared to observations. When JULES was driven using local parameters and global meteorological data, it was shown that global data could be used in place of FLUXNET data with a 7% reduction in total annual simulated GPP. Thirdly, the global meteorological data sets, WFDEI and PRINCETON, were compared to local data to find that the WFDEI data set more closely matches the local meteorological measurements (FLUXNET). Finally, the JULES phenology model was tested by comparing results from simulations using the default phenology model to those forced with the remote sensing product MODIS leaf area index (LAI). Forcing the model with daily satellite LAI results in only small improvements in predicted GPP at a small number of sites, compared to using the default phenology model.

  • Research Article
  • Cited by 21
  • 10.1063/5.0161471
Comparison and evaluation of dimensionality reduction techniques for the numerical simulations of unsteady cavitation
  • Jul 1, 2023
  • Physics of Fluids
  • Guiyong Zhang + 4 more

In the field of fluid mechanics, dimensionality reduction (DR) is widely used for feature extraction and information simplification of high-dimensional spatiotemporal data. It is well known that nonlinear DR techniques outperform linear methods, and this conclusion may have reached a consensus in the field of fluid mechanics. However, this conclusion is derived from an incomplete evaluation of the DR techniques. In this paper, we propose a more comprehensive evaluation system for DR methods and compare and evaluate the performance differences of three DR methods: principal component analysis (PCA), isometric mapping (isomap), and independent component analysis (ICA), when applied to cavitation flow fields. The numerical results of the cavitation flow are obtained by solving the compressible homogeneous mixture model. First, three different error metrics are used to comprehensively evaluate reconstruction errors. Isomap significantly improves the preservation of nonlinear information and retains the most information with the fewest modes. Second, Pearson correlation can be used to measure the overall structural characteristics of the data, while dynamic time warping cannot. PCA performs the best in preserving the overall data characteristics. In addition, based on the uniform sampling-based K-means clustering proposed in this paper, it becomes possible to evaluate the local structural characteristics of the data using clustering similarity. PCA still demonstrates better capability in preserving local data structures. Finally, flow patterns are used to evaluate the recognition performance of flow features. PCA focuses more on identifying the major information in the flow field, while isomap emphasizes identifying more nonlinear information. ICA can mathematically obtain more meaningful independent patterns. In conclusion, each DR algorithm has its own strengths and limitations; improving evaluation methods to help select the most suitable DR algorithm is therefore the more meaningful goal.
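One of the reconstruction-error metrics can be illustrated for the linear baseline, PCA, which (unlike Isomap in common implementations) has an exact inverse transform. The snapshot matrix below is synthetic low-rank data standing in for cavitation flow fields; sizes and ranks are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic snapshot matrix standing in for a flow field: 200 snapshots,
# 50 spatial points, rank-5 structure plus a little noise.
snapshots = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 50))
snapshots += 0.01 * rng.normal(size=snapshots.shape)

def reconstruction_rmse(data, n_modes):
    """RMSE between the data and its rank-n_modes PCA reconstruction."""
    pca = PCA(n_components=n_modes).fit(data)
    recon = pca.inverse_transform(pca.transform(data))
    return float(np.sqrt(np.mean((data - recon) ** 2)))

for k in (2, 5, 10):
    print(k, "modes -> reconstruction RMSE", reconstruction_rmse(snapshots, k))
```

Because PCA modes are nested, the reconstruction error is non-increasing in the number of modes; for this rank-5 synthetic data it collapses to the noise floor once five modes are kept.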

  • Research Article
  • Cited by 129
  • 10.1016/j.eswa.2012.05.012
Correlation based dynamic time warping of multivariate time series
  • May 16, 2012
  • Expert Systems with Applications
  • Zoltán Bankó + 1 more


  • Research Article
  • Cited by 55
  • 10.1021/jf402280y
Authentication of Monofloral Yemeni Sidr Honey Using Ultraviolet Spectroscopy and Chemometric Analysis
  • Aug 5, 2013
  • Journal of Agricultural and Food Chemistry
  • Abdul-Rahman A Roshan + 5 more

This work describes a simple model developed for the authentication of monofloral Yemeni Sidr honey using UV spectroscopy together with the chemometric techniques of hierarchical cluster analysis (HCA), principal component analysis (PCA), and soft independent modeling of class analogy (SIMCA). The model was constructed using 13 genuine Sidr honey samples and challenged with 25 honey samples of different botanical origins. HCA and PCA successfully presented a preliminary clustering pattern that segregated the genuine Sidr samples from the lower-priced local polyfloral and non-Sidr samples. The SIMCA model presented a clear demarcation of the samples and was used to identify genuine Sidr honey samples as well as to detect admixture with lower-priced polyfloral honey at levels above 10%. The constructed model presents a simple and efficient method of analysis and may serve as a basis for the authentication of other honey types worldwide.

  • Research Article
  • Cited by 6
  • 10.1093/chromsci/bmu161
The Application of Dynamic Time Warping to the Quality Evaluation of Radix Puerariae thomsonii: Correcting Retention Time Shift in the Chromatographic Fingerprints.
  • Nov 28, 2014
  • Journal of Chromatographic Science
  • L Jiao + 4 more

The application of dynamic time warping (DTW) to the correction of retention time shift in chromatographic fingerprints of Radix Puerariae thomsonii (RPT) was studied. The fingerprints of 27 RPT samples were established from their entire chromatograms. Because of retention time shift in the obtained fingerprints, the quality of these samples cannot be correctly evaluated by applying similarity estimation and principal component analysis (PCA) to the unaligned fingerprints. Hence, the fingerprints were aligned using the DTW method. After alignment, the retention time shift was corrected satisfactorily and the quality of the RPT samples was correctly evaluated. This demonstrates that DTW is a practical method for aligning the chromatographic fingerprints of RPT samples, and the combination of similarity estimation, PCA, and DTW is a promising approach for evaluating the quality of herbal medicines.
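A minimal sketch of the alignment idea, assuming two synthetic single-peak "chromatograms" whose retention times differ; warping the shifted signal onto the reference with the DTW path raises their correlation. The peak shapes and shift are invented for illustration.

```python
import numpy as np

def dtw_path(a, b):
    """DTW alignment path between two 1-D signals (list of index pairs)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:                      # backtrack to (0, 0)
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def warp_onto(reference, signal):
    """Resample `signal` onto the reference's time axis along the DTW path."""
    out = np.empty_like(reference)
    for i, j in dtw_path(reference, signal):
        out[i] = signal[j]
    return out

# Synthetic chromatograms: one peak, retention time shifted by 0.6 units.
t = np.linspace(0.0, 10.0, 300)
reference = np.exp(-((t - 4.0) ** 2) / 0.1)
shifted = np.exp(-((t - 4.6) ** 2) / 0.1)

corr_before = np.corrcoef(reference, shifted)[0, 1]
corr_after = np.corrcoef(reference, warp_onto(reference, shifted))[0, 1]
print("correlation before:", corr_before, "after:", corr_after)
```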

  • Conference Article
  • Cited by 2
  • 10.5555/1322109.1322127
Locality management using multiple SPMs on the Multi-Level Computing Architecture
  • Oct 26, 2006
  • Ahmed Abdelkhalek + 1 more

The multi-level computing architecture (MLCA) is a novel system-on-chip architecture for embedded systems designed to exploit task-level and instruction-level parallelism in multimedia applications. The MLCA provides a unique two-level programming model that simplifies the development of embedded applications. To cope with increasing intra-system communication delays, we introduce a distributed memory version of the MLCA where separate storage is used for global and local application data. Global data is stored on multiple on-chip scratch-pad memories (SPMs) with non-uniform-memory access (NUMA) latencies, while local data is stored on PU-private memories. In such designs, one of the key factors affecting application performance is the locality of access to global data. We introduce programming constructs and run-time support to dynamically manage data stored in the SPMs and to influence run-time task scheduling. Collectively, our techniques improve performance by 6%-40%, compared to simple static memory management and scheduling approaches.

  • Conference Article
  • Cited by 1
  • 10.1109/iciteed.2015.7408915
A rapid motion retrieval technique using simple and discrete representation of motion data
  • Oct 1, 2015
  • Natapon Pantuwong + 2 more

In this paper, we propose a rapid motion retrieval technique using dynamic time warping. The frames of the motions are represented by feature vectors whose elements are integer values. The dimensionality of the feature vectors is reduced using principal component analysis, and the values of the vector elements are quantized to two bits. A similarity matrix giving the distances between frames is generated for use by dynamic time warping. Preliminary experiments were conducted to find optimum parameter values by evaluating motion retrieval performance. One important feature of the proposed method is that, if the bit length of the frame representation is fixed, the distance between any two frames in any two motions can be found as an element of the similarity matrix without changing its size, which enables rapid motion retrieval via dynamic time warping. Experimental comparison with existing methods demonstrated that the proposed technique can complete retrieval tasks over six times faster than a traditional dynamic time warping method, while achieving almost the same accuracy and computation cost as the k-d tree method described in [1]. By using simple and discrete representations of frames, rapid retrieval is achieved while retaining high retrieval accuracy.
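The precomputed-similarity-matrix idea can be sketched as follows, with assumed values of three PCA components and 2-bit (four-level) quantization: with the bit length fixed, the distance table is built once over all 4³ codes, and comparing frames reduces to table lookups, whatever motions are involved.

```python
import numpy as np
from sklearn.decomposition import PCA

LEVELS = 4          # 2 bits per component
DIMS = 3            # reduced dimensionality (illustrative assumption)

def quantize(frames):
    """Map PCA-reduced frames to integer codes in [0, LEVELS**DIMS)."""
    lo, hi = frames.min(axis=0), frames.max(axis=0)
    q = np.clip(((frames - lo) / (hi - lo) * LEVELS).astype(int), 0, LEVELS - 1)
    return q @ (LEVELS ** np.arange(DIMS))    # pack DIMS base-4 digits

# Precompute distances between every pair of codes once; DTW then only
# performs lookups into this fixed-size table.
codes = np.arange(LEVELS ** DIMS)
digits = np.stack([(codes // LEVELS ** k) % LEVELS for k in range(DIMS)], axis=1)
SIM = np.sqrt(((digits[:, None, :] - digits[None, :, :]) ** 2).sum(-1))

rng = np.random.default_rng(0)
motion = rng.normal(size=(100, 30))           # 100 frames, 30 raw features
reduced = PCA(n_components=DIMS).fit_transform(motion)
c = quantize(reduced)
print("frame 0 vs frame 1 distance:", SIM[c[0], c[1]])
```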

  • Research Article
  • Cited by 22
  • 10.1007/s12161-020-01941-x
Detection of Plant-Derived Adulterants in Saffron (Crocus sativus L.) by HS-SPME/GC-MS Profiling of Volatiles and Chemometrics
  • Jan 2, 2021
  • Food Analytical Methods
  • Francesca Di Donato + 3 more

Gas chromatography with mass spectrometric detection (GC-MS) coupled with headspace solid-phase microextraction (HS-SPME) was used to analyse the aroma profile of genuine saffron (Crocus sativus L.) and samples of this spice artificially adulterated with Calendula officinalis L. petals (calendula), Carthamus tinctorius L. petals (safflower) and Curcuma longa L. powdered rhizomes (turmeric). Preliminary analyses of genuine saffron and pure contaminants were performed to select the SPME sorbent. Moreover, an experimental design combined with response surface methodology was applied to optimise the sample temperature and the fibre exposure time, with the aim of enhancing the detection of the above adulterants in counterfeited saffron samples. The GC-MS chromatograms collected under the optimised conditions were then handled by unsupervised and supervised multivariate statistical methods to differentiate genuine saffron samples produced in three different Italian regions from artificially adulterated samples at 2–5% w/w contamination levels. Thirty genuine and 30 counterfeited (10 for each kind of adulterant) saffron samples were analysed. Principal component analysis was applied to assist the choice of GC-MS data pre-treatment, and classification of genuine and adulterated saffron samples was attempted by partial least squares discriminant analysis (PLS-DA). The predictive performance of PLS-DA models calibrated with 42 samples was finally tested on 18 saffron samples (9 genuine and 9 adulterated). All the external saffron samples were correctly classified regardless of the kind of contaminant, while in calibration only one saffron sample contaminated with safflower was erroneously assigned to the group of genuine ones. Class modelling of genuine saffron performed by SIMCA (soft independent modelling of class analogy) exhibited good sensitivity and 100% specificity for external adulterated samples.

  • Research Article
  • 10.1088/1748-9326/adf127
A review of open data for studying global groundwater in social–ecological systems
  • Aug 5, 2025
  • Environmental Research Letters
  • Xander Huggins + 25 more

Global data have served an integral role in characterizing large-scale groundwater systems, identifying their sustainability challenges, and informing on socioeconomic and ecological dimensions of groundwater. These insights have revealed groundwater as a dynamic component of the water cycle and social–ecological systems, leading to an expansion in groundwater science that increasingly focuses on groundwater’s interactions with ecological, socioeconomic, and Earth systems. This shift presents many opportunities that are conditional on broader, more interdisciplinary system conceptualizations, models, and methods that require the integration of a greater diversity of data in contrast to conventional hydrogeological investigations. Here, we catalogue 144 global open access datasets and dataset collections relevant to groundwater science that span elements of the hydrosphere, biosphere, atmosphere, lithosphere, food systems, governance, management, and other socioeconomic system dimensions. The assembled catalogue offers a reference of available data for use in interdisciplinary assessments, and we summarize these data across their primary system, spatial resolution, temporal range, data type, generation method, level of groundwater representation, and institutional location of lead authorship. The catalogue includes 15 groundwater datasets, 23 datasets derived in relation to groundwater, and 106 datasets associated with groundwater. We find the majority of datasets are temporally static and that temporally dynamic data peak in availability during the 2000–2010 decade. Only a small fraction of temporally dynamic data is derived with any direct representation of groundwater, highlighting the need for greater incorporation of groundwater in Earth system models and data collection initiatives across socioeconomic, governance, and environmental science research communities. A small number of countries, led by the USA, Germany, the Netherlands, and Canada, generate most global groundwater data, reflecting a global North bias in the institutional leadership of these data generation activities. We raise three priority themes for future global groundwater data initiatives, which include: data improvements through prioritizing observed and temporally dynamic data; elevating regional and local scale data and perspectives to address challenges relating to equity and bias; and advancing data sharing initiatives founded on reciprocal benefits between global initiatives and data providers.

  • Research Article
  • Cited by 25
  • 10.1007/s00500-017-2782-5
Quantifying dynamic time warping distance using probabilistic model in verification of dynamic signatures
  • Aug 19, 2017
  • Soft Computing
  • Rami Al-Hmouz + 4 more

One of the multimodal biometric scenarios considers several features coming from a single biometric entity, and dynamic signature verification has been studied in such scenarios. We present a new approach, probabilistic dynamic time warping, to verify dynamic signatures, using dynamic time warping for distance determination in the verification process. Signatures are segmented into several segments, and the probability of each segment is quantified with the aid of a relative distance associated with two selected threshold levels. The final decision is reached by combining all segment probabilities using Bayes' rule. Experiments demonstrate an improvement in equal error rate for the proposed approach on random forgeries. The method has been tested on a synthetic dataset and two publicly available databases of dynamic signatures, SVC2004 and MCYT100.
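The segment-probability-plus-Bayes combination can be caricatured as below; the mapping from distance to probability between two thresholds is a simplified stand-in for the paper's relative-distance rule, and all thresholds and distances are invented.

```python
import numpy as np

def segment_probability(distance, t_low, t_high):
    """Map a segment's DTW distance to a genuineness probability via two
    threshold levels (a simplified stand-in for the paper's rule)."""
    if distance <= t_low:
        return 0.95
    if distance >= t_high:
        return 0.05
    # Linear interpolation between the two threshold levels.
    return 0.95 - 0.9 * (distance - t_low) / (t_high - t_low)

def combine(probs, prior=0.5):
    """Naive-Bayes combination of independent per-segment probabilities."""
    genuine = prior * np.prod(probs)
    forged = (1 - prior) * np.prod(1 - np.asarray(probs))
    return genuine / (genuine + forged)

seg_distances = [0.8, 1.1, 0.9, 1.0]    # hypothetical per-segment DTW distances
probs = [segment_probability(d, t_low=0.5, t_high=2.0) for d in seg_distances]
print("posterior P(genuine):", combine(probs))
```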

  • Research Article
  • Cited by 11
  • 10.1103/physrevd.99.123012
Reconstructing gravitational wave core-collapse supernova signals with dynamic time warping
  • Jun 17, 2019
  • Physical Review D
  • Sofia Suvorova + 2 more

Core-collapse supernovae (CCSNe) are a potential source for ground-based gravitational wave detectors, as their predicted emission peaks in the detectors' frequency band. Typical searches for gravitational wave bursts reconstruct signals using wavelets. However, as CCSN signals contain multiple complex features in the time-frequency domain, these techniques often struggle to reconstruct the entire signal. An alternative method developed in recent years involves applying principal component analysis (PCA) to a set of simulated CCSN models. This technique enables model selection between astrophysical CCSN models as well as waveform reconstruction. However, PCA faces its own difficulties, such as being unable to reconstruct signals longer than the simulations; many CCSN simulations are stopped before the emission peaks due to insufficient computational resources. In this study, we show how combining PCA with dynamic time warping (DTW) improves the reconstruction of CCSN gravitational wave signals in Gaussian noise characteristic of Advanced LIGO at design sensitivity. For the waveforms used in this analysis, we find that the number of PCs needed to represent 90% of the data is reduced from nine to four by applying DTW, and that the match between the original and reconstructed waveforms improves for signal-to-noise ratios in the range [0,50].

  • Conference Article
  • 10.1109/sips.2015.7344993
Fast dynamic time warping using low-rank matrix approximations
  • Oct 1, 2015
  • Mrugesh Gajjar + 1 more

Dynamic Time Warping (DTW) is a computationally intensive algorithm, and computation of a local (Euclidean) distance matrix between two signals constitutes the majority of its running time. In earlier work, a matrix-multiplication-based formulation for computing the Euclidean distance matrix was proposed, leading to faster DTW. In this work, we propose the use of (i) direct low-rank factorization of the template matrix and (ii) optimal low-rank factorization of the Euclidean distance matrix, leading to further computational speedup without significantly affecting accuracy. We derive a condition for achieving computational savings using low-rank factorizations, and extend the analysis to separate low-rank factors for each class of templates, further reducing the rank within each class. We show that, using per-class factors, significant average rank reduction and further computational savings can be achieved for applications with high inter-class variability in the feature space. We observe that our low-rank factorization methods result in smaller errors in the Euclidean distance matrix than principal component analysis (PCA). We analyze the error behavior using spoken digit and heart auscultation sound datasets and achieve speedups of up to 4.59x for a spoken digit recognition task on a dual-core mobile CPU.
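The matrix-multiplication formulation and a rank-r factorization of the template matrix can be sketched as follows; the sizes, ranks, and data are illustrative. The saving comes from evaluating the cross term against the low-rank factors (e.g. computing X @ W_r.T as two thin products) when r is much smaller than the feature dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 40))                              # query frames x features
W = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 40))    # near-low-rank templates

def sq_dist_matrix(X, Y):
    """||x_i - y_j||^2 via the matrix-multiplication formulation:
    |x|^2 + |y|^2 - 2 x.y, computed for all pairs at once."""
    return (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T

# Rank-r factorization of the template matrix: W ~= U_r S_r V_r^T.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 5
W_r = (U[:, :r] * s[:r]) @ Vt[:r]                          # low-rank reconstruction

exact = sq_dist_matrix(X, W)
approx = sq_dist_matrix(X, W_r)
print("max abs error of low-rank distance matrix:", np.abs(exact - approx).max())
```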
