Abstract

Big data analytics faces the complexity of dealing with large, unstructured, and rapidly changing data that traditional management methods struggle to handle. The exponential growth of the internet and the digital economy has led to a sharp increase in demand for data storage and data analysis. This new type of information, often referred to as unstructured data or big data, includes documents, images, audio, video, and social media content rather than the traditional structured data found in databases. Big Data Analytics aims to extract valuable insights from these vast amounts of information. The article explores diverse technologies, such as Apache Flume, Pig, Hive, and ZooKeeper, along with the open-source MongoDB NoSQL database, which collectively make up the big data analytics system. This system is used to predict volumes, gain insights, take proactive actions, and make effective strategic decisions. The article delves into the implementation, use, and impact of Big Data Analytics on the value of an enterprise, taking advantage of frameworks designed for large data sets, such as Hadoop and MapReduce. In addition, it provides illustrative examples of big data processing, specifically leveraging the integration of Hadoop and Matlab to solve handwritten digit recognition tasks with a neural network trained on the MNIST dataset. Moreover, the article briefly touches on software import substitution in the context of the Russian Railways company, noting the use of the Postgres Pro and Greenplum DBMSs and the Loginom analytical platform. The article also provides a brief description of approaches to the predictive assessment of the operation, reliability, and availability of big data processing systems.
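
To make the MapReduce programming model mentioned above concrete, the following is a minimal sketch of a word-count job written for Hadoop Streaming in Python. It is not taken from the article (which combines Hadoop with Matlab for MNIST digit recognition); the file names mapper.py and reducer.py, and the HDFS paths in the usage line, are illustrative assumptions.

```python
#!/usr/bin/env python3
# mapper.py -- illustrative Hadoop Streaming mapper (assumed example, not from the article).
# Reads raw text lines from standard input and emits one "word<TAB>1" pair per word.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- illustrative Hadoop Streaming reducer (assumed example, not from the article).
# Hadoop sorts mapper output by key, so identical words arrive on consecutive lines;
# the reducer sums the counts for each run of identical keys.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)
if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

Such a pair of scripts would typically be submitted with the Hadoop Streaming jar, e.g. "hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input <input path> -output <output path>", where the input and output paths are placeholders.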
