Abstract

For enhanced safety, Autonomous Guided Vehicles (AGVs) in warehouses complement their local sensors with data from environmental sensors to improve the detection of humans not in view of the AGV. The problem with many existing approaches is that they broadcast all environmental data in the clear, leading to both privacy and bandwidth issues. In our privacy-preserving amalgamated machine learning approach, the environment sensors perform data pre-processing and send only abstracted features to the AGVs, addressing both the privacy and the bandwidth problem. The AGVs use these abstracted features in an internal machine learning model to quantify the safety level of the environment. Following this principle, we have implemented a physical demonstrator using minimal hardware. Our demo AGV ran positioning, navigation, and the safety inference in real time on a simple Raspberry Pi, relying solely on the amalgamated features to avoid both static and dynamic objects. By broadcasting only abstract features, the safety level is maintained, privacy is preserved, and the bandwidth used is reduced by more than two orders of magnitude compared to sharing the full video streams.
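The following is a minimal, hypothetical sketch of the sensor-side abstraction step described above: rather than broadcasting raw frames, an environment sensor condenses each frame into a small feature vector and transmits only that. The grid layout, thresholding stand-in for a real detector, and message format are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: an environment sensor reduces each camera frame to a
# small feature vector (coarse per-cell occupancy scores) and broadcasts only
# those features to nearby AGVs. The feature layout and names are assumptions
# for illustration, not the system described in the paper.
import json
import numpy as np

GRID = (4, 4)  # coarse spatial grid over the camera's field of view


def extract_features(frame: np.ndarray, threshold: float = 30.0) -> list[float]:
    """Reduce a grayscale frame to one occupancy score per grid cell."""
    h, w = frame.shape
    ch, cw = h // GRID[0], w // GRID[1]
    features = []
    for r in range(GRID[0]):
        for c in range(GRID[1]):
            cell = frame[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            # Fraction of pixels above a simple intensity threshold; a real
            # sensor would run an on-device person detector here instead.
            features.append(float((cell > threshold).mean()))
    return features


def encode_message(sensor_id: str, features: list[float]) -> bytes:
    """Serialize the abstracted features for broadcast to the AGVs."""
    return json.dumps({"sensor": sensor_id, "features": features}).encode()


if __name__ == "__main__":
    frame = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in frame
    msg = encode_message("cam-01", extract_features(frame))
    # A raw 480x640 8-bit frame is ~307 kB, while the feature message is a few
    # hundred bytes, illustrating the kind of bandwidth reduction claimed above.
    print(len(msg), "bytes instead of", frame.nbytes, "bytes per frame")
```

In this sketch the AGV would feed the received feature vectors into its own lightweight model to compute a safety score; the raw imagery never leaves the sensor, which is what preserves privacy in the described approach.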
