Abstract

Human–robot collaboration (HRC) has been recognized as a potent pathway toward mass personalization in the manufacturing sector, leveraging the synergy of human creativity and robotic precision. Previous approaches rely heavily on visual perception to autonomously comprehend the HRC environment. However, the inherent ambiguity in human–robot communication cannot be consistently resolved by visual cues alone. With the recent surge in popularity of large language models (LLMs), language data has drawn increasing research attention as a complementary information source, yet the application of such large models, particularly in the context of HRC in manufacturing, remains largely under-explored. To address this gap, a vision-language reasoning approach is proposed to mitigate the communication ambiguity prevalent in human–robot collaborative manufacturing scenarios. A referred object retrieval model is first designed to alleviate the object–reference ambiguity in human language commands. This model is then integrated into an LLM-based robotic action planner to improve HRC performance. The effectiveness of the proposed approach is demonstrated empirically through a series of experiments on the object retrieval model and its application in a human–robot collaborative assembly case.
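To make the described pipeline concrete, the following is a minimal sketch of how a referred object retrieval step could feed an LLM-based action planner. It is illustrative only: the class and function names (ReferredObjectRetriever, retrieve, plan_actions, call_llm) and the prompt format are assumptions for exposition, not the paper's actual interfaces.

```python
# Illustrative sketch: all names and interfaces here are hypothetical stand-ins,
# not the paper's implementation.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class DetectedObject:
    label: str                        # e.g. "hex bolt"
    bbox: Tuple[int, int, int, int]   # (x1, y1, x2, y2) in image coordinates
    score: float                      # confidence that this object matches the reference


class ReferredObjectRetriever:
    """Grounds a referring expression (e.g. 'the small bracket on the left')
    to objects detected in the workspace image."""

    def retrieve(self, image, command: str) -> List[DetectedObject]:
        # In practice this would be backed by a vision-language model.
        raise NotImplementedError


def plan_actions(image,
                 command: str,
                 retriever: ReferredObjectRetriever,
                 call_llm: Callable[[str], str]) -> str:
    """Resolve ambiguous object references first, then ask an LLM to plan robot actions."""
    grounded = retriever.retrieve(image, command)
    scene = "\n".join(
        f"- {o.label} at {o.bbox} (confidence {o.score:.2f})" for o in grounded
    )
    prompt = (
        "You are a robotic action planner for a collaborative assembly task.\n"
        f"Operator command: {command}\n"
        f"Objects resolved from the command:\n{scene}\n"
        "Produce an ordered list of robot actions (pick, place, hand over), "
        "referencing objects by their labels and coordinates."
    )
    return call_llm(prompt)  # call_llm wraps whichever LLM endpoint is available
```

The design point this sketch reflects is the one stated in the abstract: object references in the human command are disambiguated by the retrieval model before planning, so the LLM reasons over grounded objects rather than raw, potentially ambiguous language.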
