Abstract

Noise can be defined as an example that appears inconsistent with the remaining examples in a data set. The presence of noise in data sets can degrade the performance of Machine Learning (ML) techniques and increase both the time required to build the induced hypothesis and its complexity. Data are collected from measurements that represent a given domain of interest and, in this sense, no data set is perfect. Measurement errors; incomplete, corrupted, wrong, or distorted examples; equipment problems; and human failures, among many other related factors, contribute to contaminating the data, and this is particularly true for data sets with high dimensionality. For this reason, noise detection is a critical task, especially in domains that demand security and trustworthiness, since noise can degrade system performance or compromise the security and trustworthiness of the information involved. Algorithms that detect and remove noise can increase the trustworthiness of noisy data sets. Accordingly, this work evaluates distance-based noise detection techniques, in which noise removal is performed in a pre-processing phase, on gene expression classification problems, which are characterized by the presence of noise, high dimensionality, and complexity. The objective is to improve the performance of the ML techniques used to solve these problems. Next, ensembles of noise detection techniques are developed in order to analyze whether the performance obtained can be further improved.
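To illustrate the family of techniques the abstract refers to, the following is a minimal sketch of a distance-based noise filter applied as a pre-processing step, in the style of an Edited Nearest Neighbors rule: an example is flagged as noise when the majority label among its k nearest neighbors disagrees with its own label. The choice of k, the Euclidean distance, and the majority-vote criterion are illustrative assumptions, not the exact method evaluated in this work.

# A minimal, hypothetical sketch of a distance-based noise filter
# (Edited Nearest Neighbors style); k, the distance metric, and the
# majority-vote rule are illustrative assumptions.
import numpy as np

def distance_based_noise_filter(X, y, k=3):
    """Remove examples whose label disagrees with the majority label
    of their k nearest neighbors under Euclidean distance."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n = len(y)
    # Pairwise Euclidean distances; the diagonal is set to infinity so
    # an example is never counted as its own neighbor.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        neighbors = np.argsort(d[i])[:k]
        labels, counts = np.unique(y[neighbors], return_counts=True)
        majority = labels[np.argmax(counts)]
        if majority != y[i]:
            keep[i] = False  # inconsistent with its neighborhood: flag as noise
    return X[keep], y[keep]

# Toy usage: a point labeled 1 but surrounded by class 0 is removed.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1, 1])  # index 3 looks like noise
X_clean, y_clean = distance_based_noise_filter(X, y, k=3)
print(len(y), "->", len(y_clean))    # 7 -> 6

An ensemble version, as studied in this work, would combine the verdicts of several such filters (for example, by voting) before deciding which examples to discard.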
