Trajectory-Based Sign Language Video Retrieval Using a Revised String Edit Distance
In this paper, we present a revised method for computing similarity based on the traditional string edit distance. Given two strings X and Y over a finite alphabet, the edit distance between X and Y is defined as the minimum total weight of a sequence of weighted edit operations transforming X into Y. Because the classical method lacks normalization, it introduces computation errors when the compared strings vary in length. To compute the revised edit distance, a new algorithm is introduced; it runs in O(m*n*log(n)) time and O(m*n) memory for strings of lengths m and n. Content-based video retrieval is a challenging field, and most research focuses on low-level features such as color histograms and texture. In this paper, we instead address the retrieval problem with a high-level feature, the hand-sign trajectory, and compare similarity using our revised string edit distance algorithm. Trajectory-based video retrieval has been widely explored in recent years. Finally, we present experiments on trajectory-based sign language video retrieval showing that our revised edit distance algorithm consistently provides better results than the classical edit distance.
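The abstract does not fully specify the revised algorithm, so the following is only a minimal sketch of the baseline it builds on: the classical weighted edit distance dynamic program, plus a simple length normalization (an assumption on our part) that illustrates why normalization matters when the compared strings differ in size.

```python
# Hedged sketch: classical weighted edit distance (O(m*n) dynamic program)
# with an assumed length normalization. The paper's revised O(m*n*log(n))
# algorithm is not detailed in the abstract and is not reproduced here.

def edit_distance(x: str, y: str,
                  w_ins: float = 1.0, w_del: float = 1.0, w_sub: float = 1.0) -> float:
    """Minimum total weight of insertions, deletions, and substitutions
    transforming x into y."""
    m, n = len(x), len(y)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):          # deleting all of x's prefix
        d[i][0] = i * w_del
    for j in range(1, n + 1):          # inserting all of y's prefix
        d[0][j] = j * w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if x[i - 1] == y[j - 1] else w_sub
            d[i][j] = min(d[i - 1][j] + w_del,      # delete x[i-1]
                          d[i][j - 1] + w_ins,      # insert y[j-1]
                          d[i - 1][j - 1] + sub)    # substitute or match
    return d[m][n]

def normalized_edit_distance(x: str, y: str) -> float:
    """Assumed normalization: divide by the longer length, so scores for
    string pairs of very different sizes stay comparable in [0, 1]."""
    if not x and not y:
        return 0.0
    return edit_distance(x, y) / max(len(x), len(y))
```

For example, `edit_distance("kitten", "sitting")` yields 3.0, while the normalized variant scales this by the longer string's length, keeping short and long trajectory strings comparable.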
Keywords: Edit Distance; Sign Language; Content-Based Video Retrieval; Hand Language
Shilin Zhang, Bo Zhang
Faculty of Computer Science, Network and Information Management Center, North China University of Technology; High Technology & Innovation Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
International conference
Nanjing
English
17-22
2010-11-01 (date first posted on the Wanfang platform; not necessarily the paper's publication date)