Abstract

One of the basic characteristics of human problem solving is the ability to conceptualize the world at different granularities and to translate easily from one abstraction level to another. So far, however, computers generally deal with only one abstraction level in problem solving. It therefore seems important to develop new techniques that enable computers to represent the world at different granularities; multi-granular representation is a key to machine intelligence.

In this talk, we first introduce the quotient-space-based problem-solving theory. In the theory, a problem is represented by a triplet (X, F, T), where X is the universe at the finest grain size, F the attributes of X, and T the structure of X. When we view the same problem at a coarser grain size, we have a coarse-grained universe denoted by [X], and hence a new representation ([X], [F], [T]) of the problem. The coarse universe [X] is defined by an equivalence relation R on X. The representation ([X], [F], [T]) is then called a quotient space of (X, F, T), where [X] is the quotient set of X, [F] the quotient attributes of F, and [T] the quotient structure of T. Obviously, the set of representations of a problem at different granularities forms a complete semi-order lattice. That is, the theory uses the algebraic concept of a quotient space as a mathematical model of the relationship between representations at different grain sizes.

Multi-granular representation methodology can be used in both problem solving and machine learning. In multi-granular problem solving, a problem is solved hierarchically, from the coarse grain size to the fine one; the aim of hierarchical problem solving is to reduce computational complexity. Multi-granular machine learning is intended to learn knowledge from representations at different grain sizes, i.e., so-called multi-information fusion. Generally speaking, a fine representation has more detail but less robustness; conversely, a coarse representation has more robustness but less expressiveness. The two are complementary, so multi-granular learning can benefit from both. We also present examples in hierarchical problem solving and machine learning that show the advantages of using multi-granular representation.
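To make the construction concrete, the following is a minimal sketch (not from the talk; all function names and the `min`-based attribute lifting are illustrative assumptions) of how an equivalence relation R on a universe X induces the quotient set [X], a quotient attribute [F], and a quotient structure [T]:

```python
def quotient_set(X, equiv):
    """Partition universe X into equivalence classes under relation equiv."""
    classes = []
    for x in X:
        for c in classes:
            if equiv(x, c[0]):  # same class as an existing representative
                c.append(x)
                break
        else:
            classes.append([x])  # start a new equivalence class
    return [frozenset(c) for c in classes]

def quotient_attribute(classes, f, combine=min):
    """Lift an attribute f on X to [X] by combining values within each class."""
    return {c: combine(f(x) for x in c) for c in classes}

def quotient_structure(classes, edges):
    """Lift a structure T (edges on X) to edges between classes of [X]."""
    lookup = {x: c for c in classes for x in c}
    return {(lookup[u], lookup[v]) for (u, v) in edges if lookup[u] != lookup[v]}

# Example: universe 0..7, equivalence relation "equal mod 2" -> two classes.
X = range(8)
classes = quotient_set(X, lambda a, b: a % 2 == b % 2)
print(sorted(len(c) for c in classes))  # [4, 4]
```

Coarsening the equivalence relation (e.g. from "equal mod 4" to "equal mod 2") merges classes, which is the sense in which the representations at different granularities order into a lattice.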

