Abstract

This thesis addresses the distributed and scalable pre-processing of Big Data sets in order to obtain good-quality data, known as Smart Data. In particular, it focuses on classification problems and on addressing the following characteristics: (a) imbalanced data; (b) redundancy; (c) high dimensionality; and (d) class overlapping. The following specific objectives are established for this purpose:
 
 To enable a state-of-the-art algorithm, widely used for the treatment of class imbalance in traditional data scenarios (Small Data), to obtain adequate results from large datasets in a distributed manner and in reasonable execution times.
 To design and to implement a fast and scalable methodology for the reduction of both instances and attributes in Big Data sets with high redundancy and dimensionality, while maintaining the predictive capacity of the original dataset.
 To design and to implement a strategy for scalable data characterisation in the context of Big Data classification, focusing on the ambiguous areas of the problem.
 To apply the knowledge acquired during the development phase to solve problems of interest related to humanitarian emergencies.
