Abstract
In a recent French robotic contest, the objective was to develop a multi-robot system able to autonomously map and explore an unknown area while also detecting and localizing objects. As a participant in this challenge, we proposed a new decentralized Markov decision process (Dec-MDP) resolution technique based on distributed value functions (DVF) to compute multi-robot exploration strategies. The idea is to take advantage of sparse interactions by allowing each robot to locally compute a strategy that maximizes the explored space while minimizing interactions between robots. In this paper, we propose an adaptation of this method to also improve object recognition, by integrating into the DVF an incentive to cover explored areas with photos. The robots then act to maximize both the explored space and the photo coverage, ensuring better perception and object recognition.
Highlights
Some key challenges of robotics reported in the recent roadmap for U.S. robotics [1], e.g., planetary missions and service robotics, require mobile robots to travel autonomously around unknown environments and to augment metric maps with higher-order semantic information, such as the location and identity of objects in the environment
We proposed a new decentralized Markov decision process (Dec-MDP) resolution technique based on the distributed value function (DVF) to consider sparse interactions
To reduce the complexity of solving Dec-MDPs, we proposed an interaction-oriented resolution based on distributed value functions (DVF)
Summary
Some key challenges of robotics reported in the recent roadmap for U.S. robotics [1], e.g., planetary missions and service robotics, require mobile robots to travel autonomously around unknown environments and to augment metric maps with higher-order semantic information, such as the location and identity of objects in the environment. The ability of mobile robots to gather the information necessary to build a map useful for navigation is called autonomous exploration. This was the central topic of a DGA/NRA robotic challenge, in which multiple robots have to explore and map an unknown indoor area while recognizing and localizing objects in this area. We proposed a new Dec-MDP (decentralized Markov decision process) resolution technique based on the distributed value function (DVF) to take advantage of sparse interactions. Formally, T : S × A × S → [0, 1] is a transition function, where T(s, a, s′) is the probability of the I robots transitioning from joint state s to s′ after performing joint action a. If the global state of the system is collectively totally observable, the Dec-POMDP reduces to a Dec-MDP
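To make the DVF idea concrete, the following is a minimal sketch (not the authors' code) of a tabular DVF-style backup for one robot: it performs a standard value-iteration sweep over the transition function T defined above, but discounts the future value of states that other robots are likely to visit, steering each robot away from interactions. The arrays `T`, `R`, the interaction weight `f_ij`, and the other robots' visit probabilities `P_others` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dvf_update(T, R, V_i, V_others, P_others, f_ij=1.0, gamma=0.9):
    """One DVF-style backup sweep for robot i (illustrative sketch).

    T:        (S, A, S) array, T[s, a, s'] = Pr(s' | s, a)
    R:        (S, A) array of immediate rewards
    V_i:      (S,) current value estimates of robot i
    V_others: list of (S,) value tables received from the other robots
    P_others: list of (S,) arrays, P_others[j][s'] = Pr(robot j visits s')
    f_ij:     weight of the interaction penalty between robots i and j
    """
    S, A, _ = T.shape
    # Penalty term: devalue states the other robots are likely to reach,
    # so robot i prefers exploring elsewhere (sparse interactions).
    penalty = np.zeros(S)
    for V_j, P_j in zip(V_others, P_others):
        penalty += f_ij * P_j * V_j
    target = V_i - penalty                       # adjusted future value
    # Q[s, a] = R[s, a] + gamma * sum_{s'} T[s, a, s'] * target[s']
    Q = R + gamma * np.einsum('kas,s->ka', T, target)
    return Q.max(axis=1)                         # greedy backup per state
```

With no other robots the update degenerates to a plain value-iteration sweep; adding a neighbor's visit distribution lowers the value of the states that neighbor is expected to cover.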