Abstract
This paper presents a data-efficient object detection framework that integrates YOLO with few-shot learning techniques to address two challenges: dependence on large-scale annotated data and weak small-object detection. By incorporating Feature Pyramid Networks (FPN) and spatial attention mechanisms, the framework improves detection accuracy for small objects. In addition, few-shot learning approaches (meta-learning, data augmentation, and transfer learning) enable the model to generalize effectively from limited data while preserving real-time inference speed. Experimental results show that the proposed framework performs well in data-scarce scenarios, making it suitable for applications such as autonomous driving, aerial surveillance, medical imaging, and wildlife monitoring. Future research will focus on optimizing computational efficiency, improving cross-domain adaptability, and exploring advanced few-shot learning strategies. This work provides a scalable and effective solution for object detection in resource-limited environments.
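The abstract names FPN and spatial attention as the components that boost small-object accuracy. As a rough illustration of the kind of module this implies, the sketch below shows a common CBAM-style spatial attention block that could be applied to an FPN level before the YOLO detection head. This is an assumption for illustration only, not the paper's implementation; the class name `SpatialAttention` and the feature-map dimensions are hypothetical.

```python
# Minimal sketch (not the paper's code): a CBAM-style spatial attention
# block that could refine an FPN feature map before the YOLO head.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Re-weights each spatial location using channel-pooled statistics."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pool across the channel dimension to summarize each location.
        avg_pool = x.mean(dim=1, keepdim=True)      # (B, 1, H, W)
        max_pool, _ = x.max(dim=1, keepdim=True)    # (B, 1, H, W)
        # Learn a per-location attention map and scale the input features.
        attn = self.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn


if __name__ == "__main__":
    # Example: refine a hypothetical P3 FPN feature map (stride 8),
    # the level where small objects are typically detected.
    p3 = torch.randn(1, 256, 80, 80)
    refined = SpatialAttention()(p3)
    print(refined.shape)  # torch.Size([1, 256, 80, 80])
```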