The increasing number of individuals with disabilities—over 61 million adults in the United States alone—underscores the urgent need for technologies that enhance autonomy and independence. Among these individuals, millions rely on wheelchairs and often require assistance from another person to perform activities of daily living (ADLs) such as eating, grooming, and dressing. Wheelchair-mounted assistive robotic arms offer a promising way to enhance independence, but their complex control interfaces can be challenging for users. Automating control through deep learning-based object detection models presents a viable pathway to simplify operation, yet progress is impeded by the absence of specialized datasets of ADL objects suited to robotic manipulation in home environments. To bridge this gap, we present a novel ADL object dataset explicitly designed for training deep learning models in assistive robotic applications. We curated over 112,000 high-quality images from four major open-source datasets—COCO, Open Images, LVIS, and Roboflow Universe—focusing on objects pertinent to daily living tasks. Annotations were standardized to the YOLO Darknet format, and data quality was enhanced through a rigorous filtering process involving a pre-trained YOLOv5x model and manual validation. Our dataset provides a valuable resource that facilitates the development of more effective and user-friendly semi-autonomous control systems for assistive robots. By offering a focused collection of ADL-related objects, we aim to advance assistive technologies that empower individuals with mobility impairments, addressing a pressing societal need and laying the foundation for future innovations in human–robot interaction within home settings.
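The abstract does not include the conversion code used to standardize annotations; as an illustration only, the sketch below shows how a COCO-style bounding box (pixel-space [x_min, y_min, width, height]) maps to a YOLO Darknet annotation line (class index followed by normalized center coordinates and box size). The function name, class index, and example values are hypothetical, not taken from the paper.

```python
def coco_bbox_to_yolo(bbox, img_w, img_h, class_id):
    """Convert a COCO-style box [x_min, y_min, width, height] in pixels
    into a YOLO Darknet line: "class_id x_center y_center width height",
    with all coordinates normalized to [0, 1]."""
    x_min, y_min, w, h = bbox
    x_center = (x_min + w / 2.0) / img_w
    y_center = (y_min + h / 2.0) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {w / img_w:.6f} {h / img_h:.6f}"

# Hypothetical example: a 100x50 px box at (200, 150) in a 640x480 image
print(coco_bbox_to_yolo([200, 150, 100, 50], 640, 480, class_id=0))
# -> "0 0.390625 0.364583 0.156250 0.104167"
```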