Abstract

Rapid advances in robotics and automation have highlighted the need for robotic systems that can adapt to the diverse physical properties of objects. Traditional grippers often lack the versatility to handle both hard and soft objects without extensive reprogramming or hardware adjustments. Previous studies have approached this challenge with tactile sensors and force-feedback mechanisms that distinguish object properties. For instance, Calandra et al. (2018) used deep learning on tactile data to enable robotic hands to identify objects and adjust their grip accordingly, and Li et al. (2020) designed and characterized a soft robotic hand with fingertip haptic feedback for teleoperation, emphasizing real-time tactile sensing and feedback. These studies, however, rely primarily on tactile feedback or specialized hardware, which limits their applicability where such systems are unavailable or impractical.

This study introduces a machine learning approach that uses visual data alone to classify objects as hard or soft and to adapt handling accordingly. We reorganized the 100 object categories of the CIFAR-100 dataset into two classes, hard (39 categories) and soft (61 categories), and fine-tuned a ResNet50 model pre-trained on ImageNet for this binary task, making the last 210 layers trainable to improve its adaptability. Data augmentation techniques, including rotations, translations, shearing, zooming, and horizontal flipping, were applied to simulate real-world variation and encourage robust learning, and the network was extended with additional fully connected layers, dropout, and batch normalization to prevent overfitting. Optimized with the AdamW optimizer, the model achieved a training accuracy of 83.31%, a validation accuracy of 80.25%, and a test accuracy of 80%. Precision, recall, and F1-score were 0.82, 0.86, and 0.84 for the soft object class and 0.76, 0.71, and 0.73 for the hard object class, demonstrating that hard and soft objects can be distinguished without specialized sensors.

We also observe that accuracy on hard objects is noticeably lower than on soft objects. We suspect that the CIFAR-100 dataset, with only 100 classes of low-resolution images, is inadequate for training this model, and we plan to explore the ILSVRC dataset (a subset of ImageNet) in future work. The research is useful in a variety of fields where the focus lies on object handling: in everyday life we encounter a wide array of objects with varying degrees of hardness, requiring different levels of care and precision. By relying on visual data and machine learning algorithms, as demonstrated in this research, robotic systems can become more autonomous and versatile, reducing the burden on human operators and improving overall efficiency. This approach can also reduce costs by avoiding the need for specialized hardware, such as tactile sensors, which are often expensive and difficult to integrate.
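As a concrete illustration of the dataset reorganization described above, the following is a minimal sketch in Python with TensorFlow/Keras (the abstract does not name the framework used). The `HARD_CLASS_IDS` set is a hypothetical placeholder, not the paper's actual 39-category assignment.

```python
# Minimal sketch: collapsing CIFAR-100's 100 fine labels into a binary
# hard/soft task. Assumes TensorFlow/Keras; the hard-class index set is
# a placeholder, not the paper's actual 39-category assignment.
import numpy as np
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar100.load_data(
    label_mode="fine")

# Hypothetical subset of fine-label indices treated as "hard"; the paper
# assigns 39 of the 100 categories to this class.
HARD_CLASS_IDS = {8, 13, 48, 58, 90}  # illustrative only

# 1 = hard, 0 = soft
y_train_bin = np.isin(y_train.ravel(), list(HARD_CLASS_IDS)).astype("int32")
y_test_bin = np.isin(y_test.ravel(), list(HARD_CLASS_IDS)).astype("int32")
```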
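The fine-tuning setup could then look like the sketch below, again assuming TensorFlow/Keras. Only the details quoted in the abstract (a ResNet50 pre-trained on ImageNet, the last 210 layers made trainable, the listed augmentations, a head with fully connected layers, dropout, and batch normalization, and the AdamW optimizer) come from the source; the input size, layer widths, dropout rate, learning rate, and weight decay are assumptions.

```python
# Minimal sketch of the fine-tuning setup, assuming TensorFlow/Keras.
# Values not quoted in the abstract (input size, layer widths, dropout
# rate, learning rate, weight decay) are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentations named in the abstract: rotation, translation, shearing,
# zooming, horizontal flipping. The ranges here are assumed values.
augmenter = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
    rescale=1.0 / 255,
)

# ImageNet-pre-trained ResNet50 backbone without its classifier head;
# 32x32 CIFAR images would be resized to the assumed 224x224 input.
base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3), pooling="avg")

# The abstract reports making the last 210 layers trainable. Keras's
# headless ResNet50 has about 175 layers, so under this counting the
# slice below is empty and the whole backbone trains; the paper likely
# counts layers under a different convention (e.g. including the head).
for layer in base.layers[:-210]:
    layer.trainable = False

# Classification head: extra fully connected layers with batch
# normalization and dropout, ending in one sigmoid unit (hard vs. soft).
model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),  # width is an assumption
    layers.BatchNormalization(),
    layers.Dropout(0.5),                   # rate is an assumption
    layers.Dense(1, activation="sigmoid"),
])

# AdamW optimizer as in the abstract (tf.keras.optimizers.AdamW needs
# TF >= 2.11; older setups used tensorflow_addons.optimizers.AdamW).
model.compile(
    optimizer=tf.keras.optimizers.AdamW(learning_rate=1e-4,
                                        weight_decay=1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```

Training would then iterate over batches from `augmenter.flow(...)` on the resized, relabeled images.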
