Abstract

The volume of Big data is a primary challenge for today's electronic world. Compressing this huge volume of data is an important way to improve the overall performance of Big data management systems and Big data analytics. There are quite a few compression methods that can reduce the cost of data management and data transfer and improve the efficiency of data analysis. The adaptive data compression approach determines the most suitable compression technique and the location at which compression should be applied. De-duplication removes duplicate data from the Big data store. The resemblance detection and elimination algorithm uses two techniques, Dup-Adj and an improved super-feature approach, to separate similar data chunks from non-similar ones; delta compression is then used to compress the similar chunks before storage. General-purpose compression algorithms are computationally complex and can degrade application response time. To address this, the application-specific ZIP-IO framework for FPGA-accelerated compression is studied, in which a simple instruction-trace entropy compression algorithm is implemented on an FPGA substrate. The Record-aware Compression (RaC) technique, implemented in Hadoop MapReduce, guarantees that splitting compressed data blocks does not leave partial records in any block.
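
To make the resemblance detection and delta compression steps concrete, the sketch below is a minimal, hypothetical illustration rather than the implementation studied here: features are taken as the maxima of randomly transformed rolling hashes over a chunk, grouped into super-features, and a chunk that shares at least one super-feature with an already stored chunk is delta-compressed against it. The names and parameter values (NUM_FEATURES, FEATURES_PER_SF, delta_encode, window size) are assumptions made for this example.

```python
# Minimal, illustrative sketch of super-feature resemblance detection
# followed by delta compression. Assumption-based example only: rolling
# Adler-32 hashes stand in for Rabin fingerprints, and zlib with a preset
# dictionary stands in for a real delta encoder.
import hashlib
import random
import zlib

NUM_FEATURES = 12        # features sampled per chunk (assumed value)
FEATURES_PER_SF = 4      # features grouped into one super-feature (assumed value)
MASK = (1 << 32) - 1

random.seed(42)          # fixed random linear transforms (a*h + b) mod 2^32
TRANSFORMS = [(random.getrandbits(32) | 1, random.getrandbits(32))
              for _ in range(NUM_FEATURES)]

def rolling_hashes(chunk, window=16):
    """Sliding-window hashes of the chunk (stand-in for Rabin fingerprints)."""
    return [zlib.adler32(chunk[i:i + window]) & MASK
            for i in range(max(1, len(chunk) - window + 1))]

def super_features(chunk):
    """Max of each transformed hash gives a feature; groups of features
    are hashed together into super-features."""
    hashes = rolling_hashes(chunk)
    feats = [max(((a * h + b) & MASK) for h in hashes) for a, b in TRANSFORMS]
    sfs = []
    for i in range(0, NUM_FEATURES, FEATURES_PER_SF):
        group = b"".join(f.to_bytes(4, "big") for f in feats[i:i + FEATURES_PER_SF])
        sfs.append(hashlib.sha1(group).hexdigest())
    return set(sfs)

def delta_encode(base, target):
    """Toy delta encoding: compress target with base as a zlib preset dictionary."""
    comp = zlib.compressobj(zdict=base)
    return comp.compress(target) + comp.flush()

# Usage: chunk_b differs from chunk_a by a small edit, so the two chunks
# should share at least one super-feature and be delta-compressed.
chunk_a = bytes(random.getrandbits(8) for _ in range(4096))
chunk_b = chunk_a[:2000] + b"X" * 16 + chunk_a[2016:]
if super_features(chunk_a) & super_features(chunk_b):
    delta = delta_encode(chunk_a, chunk_b)
    print(f"similar chunks: stored {len(delta)}-byte delta instead of {len(chunk_b)} bytes")
else:
    print("chunks not judged similar: store chunk_b in full")
```

In such schemes the features are typically derived from the rolling fingerprints already computed during content-defined chunking, so resemblance detection adds little cost on top of de-duplication itself.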
