Abstract

Modern communication networks play an important role in electric power systems, mobile cloud computing, smart city evolution, and personal healthcare. The novel telecommunication technologies they employ make data collection much easier for system operation and control, enable more efficient data transmission for mobile applications, and promise more intelligent sensing and monitoring for metropolitan regions. Meanwhile, we are witnessing an unprecedented rise in the volume, variety, and velocity of information in modern communication networks. A large volume of data is generated by digital equipment such as mobile devices and computers, smart meters and household appliances, as well as surveillance cameras and sensor-equipped mass rapid transit around the city. This information explosion of big data in modern communication networks makes statistical and computational methods critically important for data analysis, processing, and optimization. Network operators or service providers who can develop and exploit efficient methods to tackle big data challenges will ensure network security and resiliency, gain market share, increase revenue with distinctive quality of service, and achieve intelligent network operation and management.

The unprecedented "big data," reinforced by communication and information technologies, present us with both opportunities and challenges. On the one hand, the inferential power of algorithms that have proven successful on modest-sized data sets may be amplified by massive data sets. Data analytic methods for such unprecedented volumes of data promise to improve personalized business model design, intelligent social network analysis, smart city development, efficient healthcare and medical data management, and the smart grid evolution. On the other hand, the sheer volume of data makes it impractical to collect, store, and process the data in a centralized fashion. Moreover, massive data sets are noisy, incomplete, heterogeneous, structured, prone to outliers, and vulnerable to cyber attacks. The error rates that are part and parcel of any inferential algorithm may also be amplified by the massive data. Finally, "big data" problems often come with time constraints, where a medium-quality answer obtained quickly can be more useful than a high-quality answer obtained slowly. Overall, we face a problem in which the classic resources of computation, such as time, space, and energy, are intertwined in complex ways with the massive data resources.
