Abstract

The purpose of this study is to examine existing deep learning techniques for addressing class imbalanced data. Effective classification with imbalanced data is an important area of research, as high class imbalance is naturally inherent in many real-world applications, e.g., fraud detection and cancer detection. Moreover, highly imbalanced data poses added difficulty, as most learners will exhibit bias towards the majority class, and in extreme cases, may ignore the minority class altogether. Class imbalance has been studied thoroughly over the last two decades using traditional machine learning models, i.e., non-deep learning methods. Despite recent advances in deep learning, along with its increasing popularity, very little empirical work in the area of deep learning with class imbalance exists. Because deep neural networks have achieved record-breaking performance in several complex domains, investigating their use for problems containing high levels of class imbalance is of great interest. Available studies regarding class imbalance and deep learning are surveyed in order to better understand the efficacy of deep learning when applied to class imbalanced data. This survey discusses the implementation details and experimental results for each study, and offers additional insight into their strengths and weaknesses. Several areas of focus include: data complexity, architectures tested, performance interpretation, ease of use, big data application, and generalization to other domains. We have found that research in this area is very limited, that most existing work focuses on computer vision tasks with convolutional neural networks, and that the effects of big data are rarely considered. Several traditional methods for class imbalance, e.g., data sampling and cost-sensitive learning, prove to be applicable in deep learning, while more advanced methods that exploit neural network feature learning abilities show promising results. The survey concludes with a discussion that highlights various gaps in deep learning from class imbalanced data for the purpose of guiding future research.
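As a concrete illustration of the data sampling approach mentioned above, the sketch below shows plain random oversampling of the minority class before training. The function name and NumPy-based implementation are illustrative assumptions, not code from any surveyed study.

```python
import numpy as np

def random_oversample(X, y, minority_label=1, seed=0):
    """Randomly duplicate minority-class samples until the classes are balanced.

    Illustrative sketch only: real pipelines often rely on libraries such as
    imbalanced-learn, or on more advanced techniques like SMOTE.
    """
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X), np.asarray(y)

    minority_idx = np.where(y == minority_label)[0]
    majority_idx = np.where(y != minority_label)[0]

    # Draw additional minority indices with replacement until the minority
    # count matches the majority count.
    extra = rng.choice(minority_idx,
                       size=len(majority_idx) - len(minority_idx),
                       replace=True)
    keep = np.concatenate([majority_idx, minority_idx, extra])
    rng.shuffle(keep)
    return X[keep], y[keep]
```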

Highlights

  • Supervised learning methods require labeled training data, and in classification problems each data sample belongs to a known class, or category [1, 2]

  • Results show that the mean false error (MFE) and mean squared false error (MSFE) loss functions outperform mean squared error (MSE) loss in almost all cases (a sketch of these losses follows this list)

  • The loss functions should generalize to other domains with ease, but as seen when comparing the image and text performance results, performance gains will vary from problem to problem
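
The MFE and MSFE losses noted in the second highlight average the per-sample error within each class before combining the two class-level errors, so errors on the minority class are not drowned out by the majority. Below is a minimal NumPy sketch of this idea, assuming binary labels with 1 as the minority (positive) class and squared error per sample; the function name and input format are illustrative assumptions.

```python
import numpy as np

def mfe_msfe(y_true, y_pred):
    """Mean false error (MFE) and mean squared false error (MSFE).

    MFE  = FPE + FNE
    MSFE = FPE**2 + FNE**2
    where FPE and FNE are the mean squared errors computed separately over the
    negative (majority) and positive (minority) samples, so each class
    contributes equally regardless of its size.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    neg = y_true == 0   # majority class samples
    pos = y_true == 1   # minority class samples

    fpe = np.mean((y_true[neg] - y_pred[neg]) ** 2)  # false positive error
    fne = np.mean((y_true[pos] - y_pred[pos]) ** 2)  # false negative error
    return fpe + fne, fpe ** 2 + fne ** 2

# Example: 9 majority samples, 1 minority sample. The single minority error
# is not diluted by the many majority samples, unlike plain MSE.
y_true = np.array([0] * 9 + [1])
y_pred = np.array([0.1] * 9 + [0.4])
print(mfe_msfe(y_true, y_pred))
```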


Summary

Introduction

Supervised learning methods require labeled training data, and in classification problems each data sample belongs to a known class, or category [1, 2]. A well-known class imbalanced machine learning scenario is the medical diagnosis task of detecting disease, where the majority of the patients are healthy and detecting disease is of greater interest. In this example, the majority group of healthy patients is referred to as the negative class. Learning from these imbalanced data sets can be very difficult, especially when working with big data [8, 9], and non-standard machine learning methods are often required to achieve desirable results. The deep MLP is the simplest deep learning model in terms of implementation, but it quickly becomes very resource intensive as the number of weighted connections quickly increases.
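
To make the last point concrete, the illustrative helper below counts the weighted connections (and biases) in a fully connected MLP for a few hypothetical layer configurations; the function and the example layer sizes are assumptions for illustration, not figures from the survey.

```python
def mlp_weight_count(layer_sizes):
    """Count weighted connections plus biases in a fully connected MLP.

    layer_sizes lists the width of each layer, e.g. [784, 512, 512, 2] for a
    784-feature input, two hidden layers of 512 units, and a binary output.
    """
    weights = sum(a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))
    biases = sum(layer_sizes[1:])
    return weights + biases

# Doubling the hidden width roughly quadruples the hidden-to-hidden connections.
print(mlp_weight_count([784, 512, 512, 2]))    # ~0.67 million parameters
print(mlp_weight_count([784, 1024, 1024, 2]))  # ~1.9 million parameters
```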

