Abstract

Graph searching is one of the simplest and most widely used tools in graph algorithms. Every graph search method is defined using some particular selection rule, and the analysis of the corresponding vertex orderings can aid greatly in devising algorithms, writing proofs of correctness, or recognising various graph families. We study graphs where the sets of vertex orderings produced by two different search methods coincide. We characterise such graph families for ten pairs from the best-known set of graph searches: Breadth First Search (BFS), Depth First Search (DFS), Lexicographic Breadth First Search (LexBFS), Lexicographic Depth First Search (LexDFS), and Maximal Neighborhood Search (MNS).
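The searches named in the abstract differ only in the rule used to select the next vertex. As an illustration only (not the paper's construction, and using a small hypothetical example graph), the following Python sketch shows how BFS and LexBFS each produce a vertex ordering, which is the kind of ordering whose coincidence the paper studies.

```python
from collections import deque

def bfs_order(adj, start):
    """One BFS ordering: visit vertices in first-in, first-out order."""
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

def lexbfs_order(adj, start):
    """One LexBFS ordering: repeatedly pick an unvisited vertex whose label
    (visit times of already-visited neighbours) is lexicographically largest;
    ties may be broken arbitrarily."""
    labels = {v: [] for v in adj}
    labels[start] = [len(adj) + 1]        # forces the start vertex to be picked first
    unvisited = set(adj)
    order = []
    for t in range(len(adj), 0, -1):
        v = max(unvisited, key=lambda u: labels[u])
        unvisited.remove(v)
        order.append(v)
        for w in adj[v]:
            if w in unvisited:
                labels[w].append(t)       # earlier visits contribute larger label entries
    return order

# Hypothetical example graph given as an adjacency dictionary.
adj = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2]}
print(bfs_order(adj, 1))      # e.g. [1, 2, 3, 4]
print(lexbfs_order(adj, 1))   # e.g. [1, 2, 3, 4]
```

Running both functions over all start vertices (and all tie-breaks) would enumerate the two sets of orderings whose equality the abstract refers to.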

Highlights

  • Artefacts hinder the ability of Convolution Kernel Compensation (CKC) to decompose high-density electromyograms (HDEMGs) into contributions of individual motor units.

  • We propose a novel approach to artefact detection and elimination in high-density electromyograms, using the previously introduced Activity Index and Independent Component Analysis (ICA); a rough sketch of the ICA step follows this list. Twenty-eight electromyographic recordings of the biceps brachii muscle were analysed for the presence of artefacts.

  • In the HDEMGs used in our study, each artefact was found in a separate ICA component.
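The highlights only name the ingredients (Activity Index, ICA); the exact detection and elimination procedure is not reproduced here. As a rough, hypothetical sketch of the ICA step, one could decompose the multichannel recording with FastICA and flag components with unusually heavy-tailed amplitude distributions. The channel count, component count, and kurtosis threshold below are assumptions, not the authors' parameters.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def flag_artifact_components(emg, n_components=16, kurtosis_threshold=10.0):
    """Decompose a multichannel SEMG epoch with FastICA and flag components
    whose amplitude distribution is unusually heavy-tailed.

    emg : array of shape (n_samples, n_channels), e.g. one HDEMG recording.
    The kurtosis criterion and the threshold are illustrative assumptions,
    not the criterion used by the authors.
    """
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(emg)             # (n_samples, n_components)
    k = kurtosis(sources, axis=0, fisher=True)   # excess kurtosis per component
    flagged = np.where(k > kurtosis_threshold)[0]
    return sources, flagged

# Usage with synthetic data standing in for a real 64-channel recording.
rng = np.random.default_rng(0)
emg = rng.standard_normal((5000, 64))
emg[2500:2510, :] += 50.0                        # inject a brief, large artefact
sources, flagged = flag_artifact_components(emg)
print("artefact-like components:", flagged)
```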



Introduction

In humans, movement and locomotion are regulated by muscles. It is possible to detect, on the surface of the skin, the subtle voltage changes that originate from the muscles, even through the skin and the subcutaneous fat layers. Such recordings are called surface electromyograms (SEMG).

