Abstract

Approximate computation is a central concept in algorithms and computation theory. Our notion of approximation is that the algorithm answers correctly on most of the inputs. We propose several finite-automata models to study how well a finite automaton can approximately recognize a non-regular language. On the one hand, we show that there are natural problems for which a DFA can correctly solve almost all instances, but not all. An example of such a problem is a decision question about the number of digits in the square of a given integer. On the other hand, we show that some languages, such as L_majority = {x ∈ (0 + 1)* | x has more 1's than 0's}, cannot be approximated by any regular language in a strong sense. We also show that there are problems that are intermediate between these two extremes in terms of how well a regular language can approximate them. An example of such a problem is a decision question about the number of digits in the product of two integers. We also present results comparing the different models of approximation.
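
To make the first claim concrete, consider the question "does the square of a d-digit integer n have 2d digits (rather than 2d - 1)?". The answer depends only on whether n >= 10^(d - 1/2), i.e. on how the leading digits of n compare with the decimal expansion of sqrt(10) = 3.16227766... The Python sketch below illustrates this threshold argument only; it is not the paper's construction, and the function names and prefix length k are our own. It mimics a machine that remembers just a bounded-length prefix of the input: it answers correctly whenever that prefix differs from the corresponding prefix of sqrt(10), and is forced to guess only on the vanishing fraction of inputs whose prefix coincides with it.

    from typing import Optional

    # Leading digits of sqrt(10); n**2 has 2*d digits exactly when
    # n >= 10**(d - 1/2), i.e. when the digit string of n exceeds this one.
    SQRT10_DIGITS = "31622776601683793319988935444327"

    def predict_even_digit_count(n: int, k: int = 4) -> Optional[bool]:
        """Guess whether n**2 has 2*len(str(n)) digits, looking only at the
        first k decimal digits of n.  Returns None when those digits tie with
        sqrt(10)'s expansion and bounded memory cannot decide."""
        prefix = str(n)[:k]
        ref = SQRT10_DIGITS[:len(prefix)]
        if prefix > ref:      # n / 10**(d-1) > sqrt(10): n**2 has 2d digits
            return True
        if prefix < ref:      # n / 10**(d-1) < sqrt(10): n**2 has 2d-1 digits
            return False
        return None           # tie: a real DFA would have to guess here

    def exact_even_digit_count(n: int) -> bool:
        return len(str(n * n)) == 2 * len(str(n))

    # Empirically, the predictor is never wrong when it commits to an answer,
    # and the undecided (tie) inputs form a small fraction that shrinks as k grows.
    ns = range(1, 100_000)
    undecided = sum(predict_even_digit_count(n) is None for n in ns)
    wrong = sum(
        (p := predict_even_digit_count(n)) is not None
        and p != exact_even_digit_count(n)
        for n in ns
    )
    print(f"undecided: {undecided} of {len(ns)}, wrong when decided: {wrong}")

A DFA in the paper's setting must output an answer on every input, so on the tie inputs it would commit to a guess and err on some of them; since those inputs form a vanishing fraction of all d-digit integers, this matches the abstract's claim of correctness on almost all instances but not all.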
