Abstract

Computing, in its usual sense, is centered on manipulation of numbers and symbols. In contrast, computing with words, or CW for short, is a methodology in which the objects of computation are words and propositions drawn from a natural language, e.g., small, large, far, heavy, not very likely, the price of gas is low and declining, Berkeley is near San Francisco, it is very unlikely that there will be a significant increase in the price of oil in the near future, etc. Computing with words is inspired by the remarkable human capability to perform a wide variety of physical and mental tasks without any measurements and any computations. Familiar examples of such tasks are parking a car, driving in heavy traffic, playing golf, riding a bicycle, understanding speech and summarizing a story. Underlying this remarkable capability is the brain’s crucial ability to manipulate perceptions—perceptions of distance, size, weight, color, speed, time, direction, force, number, truth, likelihood and other characteristics of physical and mental objects. Manipulation of perceptions plays a key role in human recognition, decision and execution processes. As a methodology, computing with words provides a foundation for a computational theory of perceptions—a theory which may have an important bearing on how humans make—and machines might make—perception-based rational decisions in an environment of imprecision, uncertainty and partial truth. A basic difference between perceptions and measurements is that, in general, measurements are crisp whereas perceptions are fuzzy. One of the fundamental aims of science has been and continues to be that of progressing from perceptions to measurements. Pursuit of this aim has led to brilliant successes. We have sent men to the moon; we can build computers that are capable of performing billions of computations per second; we have constructed telescopes that can explore the far reaches of the universe; and we can date the age of rocks that are millions of years old. But alongside the brilliant successes stand conspicuous underachievements and outright failures. We cannot build robots which can move with the agility of animals or humans; we cannot automate driving in heavy traffic; we cannot translate from one language to another at the level of a human interpreter; we cannot create programs which can summarize non-trivial stories; our ability to model the behavior of economic systems leaves much to be desired; and we cannot build machines that can compete with children in the performance of a wide variety of physical and cognitive tasks. It may be argued that underlying the underachievements and failures is the unavailability of a methodology for reasoning and computing with perceptions rather than measurements. An outline of such a methodology—referred to as a computational theory of perceptions—is presented in this paper. The computational theory of perceptions, or CTP for short, is based on the methodology of computing with words (CW). In CTP, words play the role of labels of perceptions and, more generally, perceptions are expressed as propositions in a natural language. CW-based techniques are employed to translate propositions expressed in a natural language into what is called the Generalized Constraint Language (GCL).
In this language, the meaning of a proposition is expressed as a generalized constraint, X isr R, where X is the constrained variable, R is the constraining relation and isr is a variable copula in which r is a variable whose value defines the way in which R constrains X. Among the basic types of constraints are: possibilistic, veristic, probabilistic, random set, Pawlak set, fuzzy graph and usuality.
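To make the form of a generalized constraint more concrete, below is a minimal sketch in Python of how "X isr R" might be represented for the possibilistic case, using the perception "Berkeley is near San Francisco" as the example. The class GeneralizedConstraint, the trapezoid membership function and the specific mileage figures are illustrative assumptions made for this sketch, not constructs taken from the paper.

```python
# Minimal, illustrative sketch of a generalized constraint "X isr R"
# for the possibilistic case (r left blank, as in the isr notation).
# GeneralizedConstraint, trapezoid() and the mileage numbers below are
# assumptions made for this example, not definitions from the paper.

from dataclasses import dataclass
from typing import Callable


def trapezoid(a: float, b: float, c: float, d: float) -> Callable[[float], float]:
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c]."""
    def mu(x: float) -> float:
        if x < a or x > d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if a <= x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)  # c < x <= d
    return mu


@dataclass
class GeneralizedConstraint:
    x: str                               # the constrained variable X (as a description)
    r: str                               # the copula variable r; "" denotes possibilistic
    relation: Callable[[float], float]   # the constraining relation R, as a membership function

    def possibility(self, value: float) -> float:
        """Possibilistic semantics: Poss(X = value) equals the membership of value in R."""
        assert self.r == "", "only the possibilistic case is sketched here"
        return self.relation(value)


# "Berkeley is near San Francisco" rendered as a possibilistic constraint:
# X = Distance(Berkeley, San Francisco), R = NEAR, r = "" (possibilistic).
near = trapezoid(0.0, 0.0, 10.0, 25.0)   # membership in NEAR, distances in miles (assumed scale)
berkeley_near_sf = GeneralizedConstraint("Distance(Berkeley, San Francisco)", "", near)

print(berkeley_near_sf.possibility(12.0))  # ~0.87: 12 miles is largely compatible with "near"
print(berkeley_near_sf.possibility(30.0))  # 0.0: 30 miles is incompatible with this reading of "near"
```

Other values of r (veristic, probabilistic, random set, and so on) would swap in different semantics for how R constrains X; only the possibilistic reading is shown here.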
