Abstract

In the Big Data community, the Map/Reduce paradigm is one of the key approaches for meeting the continuously growing demands on computing resources imposed by massive data sets, and it is implemented today in many open source projects. Its popularity is due to its high scalability, fault tolerance, simplicity, and independence from both the programming language and the data storage system. At the same time, Map/Reduce faces a number of obstacles when dealing with Big Data. A possible way to overcome these obstacles is the Collect/Report Paradigm (CRP) combined with the Natural Language Addressing (NLA) approach, which is suitable for storing Big Data in large information bases hosted on diverse storage systems, from personal computers up to cloud servers. This paper presents an experimental model of the CRP and outlines an experimental implementation that processes and stores data. The input and output data are structured as RDF triples. The ease of implementing this model and the benefits of its use are discussed.
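To make the core idea concrete, the following is a minimal sketch of how Natural Language Addressing might be used to store RDF triples: the subject and predicate strings themselves serve as the storage address, so writing a triple requires no separate index. The names here (TripleStore, collect, report) are illustrative assumptions, not the paper's actual implementation.

```python
from collections import defaultdict

class TripleStore:
    """Hierarchical key-value store addressed by (subject, predicate) paths."""

    def __init__(self):
        # subject -> predicate -> set of objects
        self._store = defaultdict(lambda: defaultdict(set))

    def collect(self, subject: str, predicate: str, obj: str) -> None:
        """'Collect' step: the triple's own words address its storage cell."""
        self._store[subject][predicate].add(obj)

    def report(self, subject: str, predicate: str) -> set:
        """'Report' step: read back directly via the same natural-language address."""
        return self._store[subject][predicate]

store = TripleStore()
store.collect("Sofia", "isCapitalOf", "Bulgaria")
store.collect("Sofia", "population", "1236000")
print(store.report("Sofia", "isCapitalOf"))  # {'Bulgaria'}
```

Because the address is derived from the data itself, lookups avoid the global shuffle and intermediate grouping that Map/Reduce relies on, which is the kind of overhead the CRP aims to sidestep.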
