Abstract

We use random context picture grammars to generate pictures through successive refinement. The productions of such a grammar are context-free, but their application is regulated ("permitted" or "forbidden") by context randomly distributed in the developing picture. Grammars using this relatively weak context often succeed where context-free grammars fail, e.g. in generating the Sierpiński carpets. On the other hand, it proved possible to develop iteration theorems for three subclasses of these grammars, namely a pumping–shrinking, a pumping and a shrinking lemma for context-free, random permitting and random forbidding context picture grammars, respectively. Finding necessary conditions is problematic for most models of context-free grammars with context-sensing ability, since they consider a variable and its context as a finite connected array. We have already shown that context-free picture grammars are strictly weaker than both random permitting and random forbidding context picture grammars, and that random permitting context is strictly weaker than random context. We now show that grammars which use forbidding context only are strictly weaker than random context picture grammars.
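The formal grammar definitions appear in the full paper; as a loose illustration of "successive refinement regulated by forbidding context", the sketch below grows a Sierpiński-carpet-like picture by repeatedly replacing every variable cell with a 3×3 block whose centre is a terminal hole, and it refuses to apply the production whenever a forbidding symbol occurs anywhere in the developing picture. The symbol names, the 3×3 rule and the global check are assumptions made for this toy example, not the paper's formalism.

```python
def refine(picture, forbidden=frozenset({'F'})):
    """One refinement round.

    The production itself is context-free (it rewrites a single variable 'S'),
    but, in the spirit of forbidding context, it may only be applied if none of
    the forbidding symbols occurs anywhere in the current picture.
    """
    if any(sym in forbidden for row in picture for sym in row):
        return picture  # forbidding context blocks the production
    n = len(picture)
    new = [[None] * (3 * n) for _ in range(3 * n)]
    for i in range(n):
        for j in range(n):
            for di in range(3):
                for dj in range(3):
                    if picture[i][j] == 'S':
                        # S -> 3x3 block: terminal hole '.' in the centre,
                        # variables 'S' on the border
                        new[3 * i + di][3 * j + dj] = (
                            '.' if (di, dj) == (1, 1) else 'S'
                        )
                    else:
                        # terminals are simply scaled up unchanged
                        new[3 * i + di][3 * j + dj] = picture[i][j]
    return new


picture = [['S']]          # axiom: a single variable
for _ in range(3):         # three refinement rounds -> 27x27 picture
    picture = refine(picture)
for row in picture:
    print(''.join('#' if sym == 'S' else '.' for sym in row))
```

Running the sketch prints the familiar third-stage carpet pattern; inserting an 'F' anywhere in the picture halts further refinement, which is the behaviour the forbidding context is meant to model here.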
