Abstract

The computationally hard problem of finite language decomposition is investigated. A finite language L is decomposable if there are two languages L1 and L2 such that L = L1L2; otherwise, L is prime. The main contribution of the paper is an adaptive parallel algorithm for finding all decompositions L1L2 of L. The algorithm is based on an exhaustive search and incorporates several original methods for pruning the search space. Moreover, the algorithm is adaptive in that it changes its behavior based on performance data acquired at runtime. Comprehensive computational experiments on more than 4000 benchmark languages generated over alphabets of various sizes have been carried out. The experiments showed that, by using the power of parallel computing, decompositions of languages containing more than 200,000 words can be found. Decompositions of languages of that size have not been reported in the literature so far.
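To make the problem statement concrete, the following Python sketch shows a naive, purely sequential exhaustive search for decompositions of a small finite language: candidate left factors L1 are drawn from the prefixes of words in L, and the largest compatible right factor L2 is obtained via left quotients. The exclusion of the trivial factor {ε}, and all names in the snippet, are assumptions made for illustration only; this is not the paper's adaptive parallel algorithm and contains none of its pruning techniques.

```python
from itertools import combinations

def concat(L1, L2):
    """Concatenation of two finite languages (sets of strings)."""
    return {u + v for u in L1 for v in L2}

def left_quotient(u, L):
    """Left quotient u^{-1}L = { v : uv is in L }."""
    return {w[len(u):] for w in L if w.startswith(u)}

def decompositions(L):
    """Brute-force search for all decompositions L = L1 L2 with
    L1 != {''} and L2 != {''} (trivial factors excluded).
    Exponential in the number of prefixes; illustration only."""
    prefixes = sorted({w[:i] for w in L for i in range(len(w) + 1)})
    found = []
    # Enumerate every non-empty subset of prefixes as a candidate L1.
    for r in range(1, len(prefixes) + 1):
        for combo in combinations(prefixes, r):
            L1 = set(combo)
            if L1 == {''}:
                continue  # skip the trivial left factor
            # The largest L2 with L1 L2 included in L is the
            # intersection of the left quotients u^{-1}L over u in L1.
            L2 = set.intersection(*(left_quotient(u, L) for u in L1))
            if L2 and L2 != {''} and concat(L1, L2) == L:
                found.append((L1, L2))
    return found

if __name__ == "__main__":
    # {ab, abb, aab, aabb} = {a, aa} . {b, bb}, so this language is decomposable.
    L = {"ab", "abb", "aab", "aabb"}
    for L1, L2 in decompositions(L):
        print(sorted(L1), sorted(L2))
```

Because the search space grows exponentially with the number of prefixes, a sketch like this is only feasible for toy languages; handling the 200,000-word benchmarks mentioned above requires the pruning and parallelization developed in the paper.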
