Abstract

Rough set theory, developed by Z. Pawlak, is a powerful soft computing tool for extracting meaningful patterns from vague, imprecise, inconsistent and large volumes of data. It classifies a given knowledge base approximately into suitable decision classes by removing irrelevant and redundant data through attribute reduction algorithms. Conventional rough set information processing, such as discovering data dependencies, data reduction and approximate set classification, relies on software running on a general-purpose processor. Over the last decade, researchers have started exploring the feasibility of implementing these algorithms on FPGAs. An algorithm implemented on a conventional processor using standard software routines offers high flexibility, but its performance deteriorates when handling larger real-time databases. With the tremendous growth in FPGA technology, a new area of research has emerged: FPGAs offer a promising solution in terms of speed, power and cost, and researchers have demonstrated the benefits of mapping rough set algorithms onto FPGAs. In this paper, a survey of hardware implementations of rough set algorithms by various researchers is presented.

Highlights

  • Rough set theory (RST), introduced by Zdzisław Pawlak, is a powerful mathematical tool for discovering data dependencies by reducing the number of attributes contained in a data set using the data alone, without requiring any additional information such as degrees of membership or probabilities, as needed in fuzzy set theory or probability theory [1]

  • A reduct is any minimal subset of condition features that discerns all pairs of objects with different decision values; a reduct is complete if deleting any of its attributes makes at least one pair of objects with different decision attribute values indiscernible (see the sketch after this list)

  • In order to overcome the problems posed by the power and instruction-level parallelism (ILP) walls, the computer industry shifted from single-core processors to multiple parallel processing units
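
The reduct notion above can be made concrete with a short, self-contained sketch. The toy decision table, attribute names and helper functions below are illustrative assumptions, not material from the surveyed paper: the code builds the discernibility matrix of the table and checks whether a candidate attribute subset is a reduct, i.e. it intersects every non-empty matrix entry and no proper subset of it does.

```python
from itertools import combinations

attributes = ["a", "b", "c"]
# Toy decision table (assumed for illustration): object -> (condition values, decision)
table = {
    "x1": ((0, 0, 1), "yes"),
    "x2": ((0, 1, 1), "no"),
    "x3": ((1, 0, 0), "yes"),
    "x4": ((1, 1, 0), "no"),
}

def discernibility_matrix():
    """For every pair of objects with different decisions, record the attributes on which they differ."""
    entries = []
    for (_, (v1, d1)), (_, (v2, d2)) in combinations(table.items(), 2):
        if d1 != d2:
            entries.append({a for a, x, y in zip(attributes, v1, v2) if x != y})
    return entries

def discerns_all(subset, entries):
    """True if the subset shares at least one attribute with every matrix entry."""
    return all(subset & entry for entry in entries)

def is_reduct(subset, entries):
    """A reduct discerns every pair and is minimal: dropping any attribute breaks that."""
    subset = set(subset)
    if not discerns_all(subset, entries):
        return False
    return all(not discerns_all(subset - {a}, entries) for a in subset)

matrix = discernibility_matrix()
print(is_reduct({"b"}, matrix))        # True: attribute b alone separates the two decision classes
print(is_reduct({"a", "b"}, matrix))   # False: not minimal, because {b} already suffices
```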


Summary

INTRODUCTION

Rough set theory (RST), introduced by Zdzisław Pawlak, is a powerful mathematical tool for discovering data dependencies by reducing the number of attributes contained in a data set using the data alone, without requiring any additional information such as degrees of membership or probabilities, as needed in fuzzy set theory or probability theory [1]. The rough set approach is easy to understand, offers a straightforward interpretation of the obtained results, and most of its algorithms are suited for parallel processing. It is considered one of the first non-statistical approaches to data analysis [2]. There has been growing interest among researchers in developing dedicated hardware for RST using FPGAs. The advantage of dedicated hardware is a large acceleration in terms of speed, since it relieves the main processor of the computational overhead. Several such accelerators are already available commercially, such as Graphics Processing Units (GPUs), Digital Signal Processors (DSPs) and fuzzy processors.
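
As an illustration of the rough set operations that the surveyed hardware accelerates, the following sketch (illustrative Python over an assumed toy decision table, not code from any of the surveyed implementations) computes indiscernibility classes and the lower and upper approximations of a decision concept.

```python
from collections import defaultdict

# Toy decision table (assumed for illustration): object -> (condition values, decision)
table = {
    "x1": ({"headache": "yes", "temp": "high"},   "flu"),
    "x2": ({"headache": "yes", "temp": "high"},   "no_flu"),
    "x3": ({"headache": "no",  "temp": "normal"}, "no_flu"),
    "x4": ({"headache": "no",  "temp": "high"},   "flu"),
}

def indiscernibility_classes(attrs):
    """Group objects that are indistinguishable on the chosen condition attributes."""
    classes = defaultdict(set)
    for obj, (conds, _) in table.items():
        classes[tuple(conds[a] for a in attrs)].add(obj)
    return list(classes.values())

def approximations(attrs, target):
    """Lower and upper approximation of the concept 'decision == target'."""
    concept = {o for o, (_, d) in table.items() if d == target}
    lower, upper = set(), set()
    for block in indiscernibility_classes(attrs):
        if block <= concept:
            lower |= block        # the whole class certainly belongs to the concept
        if block & concept:
            upper |= block        # the class possibly belongs to the concept
    return lower, upper

low, up = approximations(["headache", "temp"], "flu")
print("lower:", low)            # {'x4'} -- x1 and x2 are indiscernible yet disagree on the decision
print("upper:", up)             # {'x1', 'x2', 'x4'}
print("boundary:", up - low)    # the rough (uncertain) region
```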

ROUGH SET PRELIMINARIES
Condition Attributes
Boundary Region
Indiscernibility relation
Discernibility Matrix
Reduct and Core
NEED OF HARDWARE ACCELERATORS
Accelerators
CURRENT STATE OF THE ART
Maciej Kopczyński et al.'s computation of reduct and core on FPGA
Findings
CONCLUSION