Abstract

This work concerns intra-sentence segmentation performed before the syntactic analysis of long sentences (those of at least 20 words) in an English–Korean machine translation system. A long sentence is known to require enormous computational time and space when analyzed syntactically, and it can also yield poor translation results. To resolve this problem, we partition a long sentence into a few segments and analyze each segment separately. To partition a sentence, we first find candidate segment positions within it. We then generate input vectors representing the lexical contexts of these candidates and apply the support vector machine (SVM) algorithm to learn and recognize the appropriate segment positions. We use three kernel functions, the linear kernel, the polynomial kernel, and the Gaussian kernel, to find optimal hyperplanes classifying proper positions, and we compare the results obtained with each kernel function. In our experiments, we obtained f-measure values of 0.81, 0.83, and 0.79 with the linear, polynomial, and Gaussian kernels, respectively.
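The three kernel functions compared in the abstract can be sketched as follows. This is a minimal illustration of the kernel computations only; the actual feature vectors in the paper encode the lexical context around each candidate segment position, and the degree, offset, and width parameters below are assumed values, not those used in the experiments.

```python
import math

def linear_kernel(x, z):
    # K(x, z) = x . z
    return sum(a * b for a, b in zip(x, z))

def polynomial_kernel(x, z, c=1.0, d=2):
    # K(x, z) = (x . z + c)^d  -- offset c and degree d are assumed values
    return (linear_kernel(x, z) + c) ** d

def gaussian_kernel(x, z, sigma=1.0):
    # K(x, z) = exp(-||x - z||^2 / (2 * sigma^2))
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq_dist / (2 * sigma ** 2))

# Two hypothetical binary context vectors for candidate positions
x = [1.0, 0.0, 1.0]
z = [1.0, 1.0, 0.0]
print(linear_kernel(x, z))      # 1.0
print(polynomial_kernel(x, z))  # (1 + 1)^2 = 4.0
print(gaussian_kernel(x, z))    # exp(-1) ~ 0.368
```

In an SVM, each kernel implicitly maps the context vectors into a different feature space, so the resulting separating hyperplanes, and hence the recognized segment positions, differ across kernels, which is what the reported f-measure comparison evaluates.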
