Abstract

In this paper, we present a revised method for computing string similarity based on the traditional string edit distance. Given two strings X and Y over a finite alphabet, the edit distance between X and Y is defined as the minimum total weight of a sequence of weighted edit operations that transforms X into Y. Because the classical measure lacks normalization, it introduces errors when the compared strings vary in length. We therefore introduce a new algorithm for computing the edit distance, which runs in O(m*n*log(n)) time and O(m*n) memory space for strings of lengths m and n. Content-based video retrieval is a challenging field, and most research focuses on low-level features such as color histograms and texture. In this paper, we address the retrieval problem with a high-level feature, the hand trajectory of sign language, and measure similarity with our revised string edit distance algorithm. Trajectory-based video retrieval has been widely explored in recent years. Finally, we present experiments on trajectory-based sign language video retrieval, showing that our revised edit distance algorithm consistently provides better results than the classical edit distance.
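For reference, the sketch below shows the classical weighted edit distance (Wagner-Fischer dynamic programming) that serves as the baseline here, together with a simple length normalization. The unit operation costs, the example strings, and the particular normalization shown are illustrative assumptions; the paper's own revised normalization and its O(m*n*log(n)) algorithm are not specified in the abstract.

```python
# Minimal sketch of the classical weighted edit distance baseline.
# Costs, example strings, and the length normalization are assumptions
# for illustration; they are not the paper's revised method.

def edit_distance(x, y, sub_cost=1.0, ins_cost=1.0, del_cost=1.0):
    """Minimum total weight of edit operations transforming x into y."""
    m, n = len(x), len(y)
    # d[i][j] = cost of transforming x[:i] into y[:j]
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + del_cost
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if x[i - 1] == y[j - 1] else sub_cost
            d[i][j] = min(d[i - 1][j - 1] + sub,     # substitute / match
                          d[i - 1][j] + del_cost,    # delete x[i-1]
                          d[i][j - 1] + ins_cost)    # insert y[j-1]
    return d[m][n]


def length_normalized_distance(x, y):
    """Illustrative normalization: raw distance divided by the longer
    string length, so strings of different sizes become comparable."""
    if not x and not y:
        return 0.0
    return edit_distance(x, y) / max(len(x), len(y))


if __name__ == "__main__":
    print(edit_distance("kitten", "sitting"))               # 3.0
    print(length_normalized_distance("kitten", "sitting"))  # ~0.43
```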
