Abstract
Our previous study proposed an automatic fall risk assessment and related risk reduction measures. A nursing system to reduce patient accidents was also developed, thereby reducing the caregiving load on medical staff in hospitals. However, there are risks associated with artificial intelligence (AI) in applications such as assistant mobile robots that use deep reinforcement learning. In this paper, we discuss safety applications related to AI in fields where humans and robots coexist, especially when applying deep reinforcement learning to the control of autonomous mobile robots. First, we summarize recent related work on robot safety with AI. Second, we extract the risks linked to the use of autonomous mobile assistant robots based on deep reinforcement learning for patients in a hospital. Third, we systematize the risks of AI and propose sample risk reduction measures. The results suggest that these measures are useful in the fields of clinical and industrial safety.
Highlights
OpenAI, Stanford University, and the University of California, Berkeley have reported five main challenges associated with the safe use of artificial intelligence (AI) [8]: (1) avoiding negative side effects, including adverse effects on the surroundings, on interactions with humans and the environment, and vandalism; (2) avoiding reward hacking, which covers both the measures the agent takes to achieve its desired reward and malicious hacking from the outside; (3) scalable oversight, for proper and efficient feedback; (4) safe exploration, to secure safety, for example during learning by simulation; and (5) robustness to distributional shift, to manage cases that differ significantly from the learning environment
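Of these challenges, safe exploration (4) is the one most directly relevant to an assistant robot moving near patients. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's method: a safety filter that restricts epsilon-greedy exploration to actions keeping a minimum clearance from a patient. The action set, the MIN_CLEARANCE threshold, and the toy Q-values are all assumptions introduced for illustration.

```python
# Illustrative sketch only (not from the paper): a safety filter that vetoes
# exploratory actions bringing the robot too close to a patient, one common
# way to address safe exploration. Names and thresholds are assumptions.
import numpy as np

ACTIONS = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}  # right, left, down, up
MIN_CLEARANCE = 1.5  # assumed minimum allowed distance (grid cells) to a patient


def is_safe(position, action, patient_position):
    """Return True if taking `action` from `position` keeps the robot
    at least MIN_CLEARANCE cells away from the patient."""
    next_pos = np.asarray(position) + np.asarray(ACTIONS[action])
    return np.linalg.norm(next_pos - np.asarray(patient_position)) >= MIN_CLEARANCE


def select_action(q_values, position, patient_position, epsilon=0.1, rng=None):
    """Epsilon-greedy selection restricted to the subset of safe actions.
    If no action is safe, the robot stays put (returns None)."""
    rng = rng or np.random.default_rng()
    safe = [a for a in ACTIONS if is_safe(position, a, patient_position)]
    if not safe:
        return None  # conservative default: stop
    if rng.random() < epsilon:
        return int(rng.choice(safe))             # explore, but only within the safe set
    return max(safe, key=lambda a: q_values[a])  # exploit the best safe action


# Example: robot at (0, 0), patient at (0, 2); moving right is filtered out
# even though its Q-value is highest.
q = np.array([5.0, 1.0, 2.0, 0.5])
print(select_action(q, position=(0, 0), patient_position=(0, 2)))
```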
We extract the risks and propose risk reduction measures that arise when applying deep reinforcement learning to the control of an autonomous mobile robot
The risks of autonomous mobile assistant robot control based on deep reinforcement learning are considered to be high. We mainly considered as important the design, tolerance of AI autonomy, and mind. However, even if the risk is high, there is a possibility of applying AI, and the advantages and disadvantages should be compared. If the advantages outweigh the disadvantages, it is necessary to take measures against the disadvantages. As future work, in order to confirm the effectiveness of this proposal, we will compare not only the simulation results but also the results when nurses with and without robots carry out the risk assessment and the risk reduction measures separately in an actual hospital
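One common way to make such an advantage/disadvantage comparison concrete is a severity-by-probability risk score computed before and after a candidate reduction measure. The sketch below is only an assumed illustration of that general idea; the 4-point scales, the example hazard, and the mitigation are hypothetical and not taken from the paper's actual assessment.

```python
# Illustrative sketch only (assumed scales, not the paper's actual assessment):
# scoring an extracted risk before and after a candidate risk reduction measure.
from dataclasses import dataclass


@dataclass
class Risk:
    description: str
    severity: int     # 1 (negligible) .. 4 (severe injury), assumed scale
    probability: int  # 1 (rare) .. 4 (frequent), assumed scale

    def score(self) -> int:
        return self.severity * self.probability


baseline = Risk("robot approaches a patient too fast during exploration",
                severity=4, probability=3)
mitigated = Risk("same hazard with a speed limit and safety-rated stop near patients",
                 severity=2, probability=2)

# e.g. 12 -> 4: apply AI only if the residual risk is judged tolerable
print(baseline.score(), "->", mitigated.score())
```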
Summary
There have been significant improvements in accuracy (with regard to deep learning) in fields such as image recognition, behavior recognition, object detection, and scene recognition. These techniques have begun to surpass human abilities in some fields. In March 2016, Microsoft's conversational AI "Tay" started to talk about racial discrimination, sexism, and conspiracy theories, which it had learned through Twitter during technical experiments. This prompted Microsoft to immediately shut Tay down. Such incidents illustrate why it can be difficult to use AI as an element of safety-related parts