Rough set theory is a relatively new mathematical tool for computer applications in circumstances characterized by vagueness and uncertainty. The technique known as rough analysis can be applied fruitfully in artificial intelligence and the cognitive sciences. Although the methodology has been shown to deal successfully with the vagueness of many real-life applications, several theoretical problems remain open, and practical issues must also be addressed before the theory can be applied. It is the latter set of issues we address here, in the context of handling and analysing large data sets during the knowledge representation process. Some of the associated problems (for example, the general problem of finding all “keys”) have been shown to be NP-hard, so it is important to seek efficient computational methods for the theory. In rough set theory, a table called an information system, or a database relation, serves as a special kind of formal language for representing knowledge syntactically; semantically, knowledge is defined as classifications of information systems. Rough analysis does not invoke the details of rough set theory directly, but it uses the same basic classification techniques. We discuss computational methods for the rough analysis of databases.
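The classification step mentioned above can be illustrated with a minimal sketch: objects of an information system are partitioned into equivalence classes of the indiscernibility relation over chosen attributes, and a target concept is then bracketed by lower and upper approximations. The example table, attribute names, and helper functions below are hypothetical, not taken from the paper.

```python
from collections import defaultdict

def indiscernibility_classes(table, attributes):
    """Partition objects into equivalence classes of the
    indiscernibility relation induced by the given attributes."""
    classes = defaultdict(set)
    for obj, row in table.items():
        key = tuple(row[a] for a in attributes)
        classes[key].add(obj)
    return list(classes.values())

def approximations(table, attributes, target):
    """Lower approximation: union of classes wholly inside the target set.
    Upper approximation: union of classes that intersect the target set."""
    lower, upper = set(), set()
    for cls in indiscernibility_classes(table, attributes):
        if cls <= target:
            lower |= cls
        if cls & target:
            upper |= cls
    return lower, upper

# Hypothetical information system: objects mapped to attribute values.
table = {
    "x1": {"colour": "red",  "size": "small"},
    "x2": {"colour": "red",  "size": "small"},
    "x3": {"colour": "blue", "size": "large"},
    "x4": {"colour": "blue", "size": "small"},
}
target = {"x1", "x3"}  # the concept to approximate

lower, upper = approximations(table, ["colour", "size"], target)
# x3 sits alone in its class, so it is certainly in the concept (lower);
# x1 is indiscernible from x2, so both land only in the upper approximation.
```

Objects in the upper but not the lower approximation form the boundary region, which is exactly where the vagueness the abstract refers to resides.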