Abstract

Big Data has emerged as an important platform for data-intensive computing. UniNet, one of the largest research and education networks in Thailand, has a complex network infrastructure, and network analysis is crucial to operating and maintaining it. NetFlow is a valuable tool that collects data from routers into log files for analyzing network problems; unfortunately, NetFlow generates a large amount of data. We compare the Hadoop platform and Apache Spark on a testbed to choose the appropriate system for storing and analyzing these data. Both Hadoop and Apache Spark can store the data and process the analysis jobs. A typical Hadoop system operates in a simple network topology and is difficult to deploy in a complex topology such as the UniNet network, which has two layers: a backbone layer and a distribution layer. In this case, more than one Hadoop system must work together in the network. In principle, OpenFlow enables an application to adjust the topology as required by the computation, providing additional network bandwidth to the resources that need it, and also supports fault tolerance when a fail-over occurs.
