Abstract

Big Data is being touted as the next big thing, posing technical challenges that confront both academic research communities and commercial IT deployments. The root sources of Big Data are unbounded data streams and the curse of dimensionality. Data sourced from streams accumulate continuously, making traditional batch-based model induction algorithms infeasible for real-time data mining. In the past, many methods have been proposed for incremental data mining by modifying classical machine learning algorithms such as artificial neural networks. In this paper we propose an incremental learning process for supervised learning with parameter optimization by a neural network over a data stream. The process is coupled with a parameter optimization module that searches for the best combination of input parameter values based on a given segment of the data stream. The drawback of this optimization is its heavy time consumption. To relieve this limitation, a loss function is proposed to look ahead for the occurrence of concept drift, which is one of the main causes of performance deterioration in data mining models. Optimization is skipped intermittently along the way so as to save computation cost. Computer simulations are conducted to confirm the merits of this incremental optimization process for neural networks.
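The workflow sketched in the abstract can be illustrated with a short example. The Python sketch below is not the authors' implementation; it assumes scikit-learn's MLPClassifier as the incrementally trained network, and the names segments, candidate_params, drift_threshold, and _segment_loss are hypothetical placeholders for the stream segmentation, the parameter search space, the drift sensitivity, and the per-candidate scoring used in the paper.

```python
# Illustrative sketch (not the paper's code): train a neural network
# incrementally over stream segments, and only re-run the parameter search
# when a jump in loss suggests concept drift has occurred.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import log_loss


def stream_learn(segments, candidate_params, drift_threshold=1.5):
    """segments: iterable of (X, y) chunks arriving from the data stream.
    candidate_params: list of MLPClassifier kwarg dicts to search over.
    drift_threshold: assumed relative loss jump that signals drift."""
    best = {"hidden_layer_sizes": (20,), "learning_rate_init": 0.01}
    model, classes, prev_loss = None, None, None
    for X, y in segments:
        if model is None:
            classes = np.unique(y)
            model = MLPClassifier(**best)
        else:
            # Prequential evaluation: score the incoming segment before
            # training on it, so the loss acts as a look-ahead drift signal.
            loss = log_loss(y, model.predict_proba(X), labels=classes)
            if prev_loss is not None and loss > drift_threshold * prev_loss:
                # Loss jump hints at concept drift: re-run the parameter
                # search on this segment; otherwise the costly search is
                # skipped and incremental training simply continues.
                best = min(candidate_params,
                           key=lambda p: _segment_loss(p, X, y, classes))
                model = MLPClassifier(**best)
            prev_loss = loss
        model.partial_fit(X, y, classes=classes)
    return model


def _segment_loss(params, X, y, classes):
    # Simplistic proxy for candidate quality: in-sample loss after fitting
    # the candidate configuration on the current segment alone.
    m = MLPClassifier(**params)
    m.fit(X, y)
    return log_loss(y, m.predict_proba(X), labels=classes)
```

The key design point, as described in the abstract, is that the expensive parameter optimization does not run on every segment: the loss measured on each fresh segment, before that segment is learned, decides whether optimization is triggered or skipped.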
