Abstract

Venipuncture robots offer superior perception and stability compared with humans and are expected to replace manual venipuncture. However, their use is greatly restricted because they cannot decide on puncture sites autonomously. This study therefore presents a multi-information fusion method for determining puncture sites for venipuncture robots, improving their autonomy under limited resources. Numerous images were gathered and processed to establish a human-forearm image dataset for training a U-Net with a soft attention mechanism (SAU-Net) for vein segmentation. The veins are then segmented from the images, feature information is extracted based on near-infrared vision, and a multiobjective optimization model for the puncture site decision, which considers the depth, diameter, curvature, and length of the vein, determines the optimal puncture site. Experiments demonstrate that the method achieves a segmentation accuracy of 91.2% and a vein extraction rate of 86.7%, while obtaining the Pareto solution set (average time: 1.458 s) and optimal results for each vessel. Finally, a near-infrared camera mounted on the venipuncture robot segments veins and determines puncture sites in real time, with the results transmitted back to the robot for attitude adjustment. Consequently, this method can dramatically enhance the autonomy of venipuncture robots.
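To illustrate the decision step described above, the sketch below shows one plausible way to Pareto-rank candidate vein segments on the four criteria named in the abstract (depth, diameter, curvature, and length). This is not the authors' implementation; all class names, units, and example values are assumptions for demonstration only, and the direction of preference for each objective (shallower, wider, straighter, longer) is inferred from common venipuncture practice.

```python
"""Minimal sketch: Pareto-optimal puncture site candidates (assumed criteria)."""
from dataclasses import dataclass
from typing import List


@dataclass
class VeinCandidate:
    """Feature vector for one candidate vein segment (units assumed)."""
    depth_mm: float      # shallower is assumed easier to puncture
    diameter_mm: float   # wider is assumed easier to puncture
    curvature: float     # lower curvature (straighter) is assumed preferred
    length_mm: float     # longer straight segments are assumed preferred


def dominates(a: VeinCandidate, b: VeinCandidate) -> bool:
    """True if `a` is at least as good as `b` on every objective and strictly
    better on at least one (minimize depth and curvature, maximize diameter
    and length)."""
    no_worse = (a.depth_mm <= b.depth_mm and a.curvature <= b.curvature
                and a.diameter_mm >= b.diameter_mm and a.length_mm >= b.length_mm)
    better = (a.depth_mm < b.depth_mm or a.curvature < b.curvature
              or a.diameter_mm > b.diameter_mm or a.length_mm > b.length_mm)
    return no_worse and better


def pareto_front(candidates: List[VeinCandidate]) -> List[VeinCandidate]:
    """Return the non-dominated (Pareto-optimal) candidates."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]


if __name__ == "__main__":
    # Hypothetical segments extracted from a segmented near-infrared image.
    segments = [
        VeinCandidate(depth_mm=2.1, diameter_mm=2.8, curvature=0.05, length_mm=32.0),
        VeinCandidate(depth_mm=3.5, diameter_mm=3.1, curvature=0.12, length_mm=25.0),
        VeinCandidate(depth_mm=2.4, diameter_mm=2.2, curvature=0.20, length_mm=18.0),
    ]
    for candidate in pareto_front(segments):
        print(candidate)
```

In the paper's pipeline, a final puncture site would still have to be chosen from the Pareto set (e.g., by a weighted preference over the four criteria); the sketch only reproduces the non-dominated filtering stage.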

