Aspect-Based Sentiment Analysis (ABSA) has received considerable attention in recent studies. Powerful pre-trained models have been proposed that can be fine-tuned for many Natural Language Processing (NLP) tasks, including ABSA. However, fine-tuning these models requires a relatively large amount of labeled data. In this research, we propose EASE, an active learning framework that minimizes manual labeling effort. We extend the active learning technique by incorporating the concept of sample diversity, so that similar samples are not selected for labeling. Furthermore, we maximize the utility of the selected samples by incorporating data augmentation. EASE was evaluated on three benchmark ABSA datasets from three different domains. The results show that EASE reduces the number of labeled samples needed by 88% to 94% across the three datasets while maintaining accuracy. Our results show that active learning is an effective approach to reducing manual labeling effort while maintaining comparable performance. Moreover, incorporating sample diversity and data augmentation makes it possible to reduce the number of labeled samples even further without sacrificing performance.
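The core selection step described above — picking uncertain samples while skipping near-duplicates — can be sketched as follows. This is a minimal, hypothetical illustration of diversity-aware uncertainty sampling, not the exact EASE criterion: it scores samples by predictive entropy and rejects any candidate whose cosine similarity to an already-selected sample exceeds a threshold. All function and parameter names here are illustrative.

```python
import numpy as np

def select_diverse_uncertain(embeddings, probs, k, sim_threshold=0.9):
    """Pick up to k samples with the highest predictive entropy,
    skipping any candidate whose cosine similarity to an
    already-selected sample exceeds sim_threshold.

    Hypothetical selection rule illustrating diversity-aware
    active learning; the paper's exact criterion may differ.
    """
    # Predictive entropy as the uncertainty score (higher = less certain).
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    order = np.argsort(-entropy)  # most uncertain first

    # Unit-normalize embeddings so a dot product is cosine similarity.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

    selected = []
    for i in order:
        if len(selected) == k:
            break
        # Diversity filter: reject near-duplicates of selected samples.
        if all(unit[i] @ unit[j] < sim_threshold for j in selected):
            selected.append(int(i))
    return selected

# Toy pool: samples 0 and 1 are identical and maximally uncertain.
embeddings = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
probs = np.array([[0.5, 0.5], [0.5, 0.5], [0.9, 0.1], [0.6, 0.4]])
chosen = select_diverse_uncertain(embeddings, probs, k=2)
# Only one of the two duplicates is chosen; the budget goes to a
# diverse sample instead.
```

In a full loop, the chosen samples would then be sent for manual labeling, expanded via data augmentation, and added to the fine-tuning set before the next query round.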