Abstract

Deep Learning (DL) has provided powerful tools for a wide range of application areas, including visual perception and cognition in robotics, media monitoring, image analysis, and others. However, deploying DL tools in real applications comes with several challenges, since DL requires different pipelines than traditional computer vision tools. For example, specialized software frameworks, along with appropriate hardware, must be used for training and inference, since DL models are often too resource-intensive to be deployed directly. This issue is even more pronounced in embedded applications, such as robotics and drone vision, where strict computational and energy constraints exist. Furthermore, the fragmentation of the DL landscape can further slow down the integration of DL tools, since models produced with different frameworks are often not interoperable. At the same time, DL tools are typically designed to follow a static inference paradigm, while many systems can provide active perception capabilities that are usually not exploited by the current generation of DL tools. In this talk, we will discuss the aforementioned challenges and present the Open Deep Learning Toolkit for Robotics (OpenDR), which aims to overcome many of these limitations in robotics and other application areas, focusing on visual perception. As part of the talk, we will also briefly showcase practical examples of using the OpenDR toolkit, discussing how its tools can be optimized and adapted to commonly used inference platforms.
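
As a brief illustration of the workflow the talk refers to, the sketch below shows how a pretrained perception model might be loaded, optimized for a target inference platform, and used for prediction through a unified learner interface. This is a minimal sketch only: the class name LightweightOpenPoseLearner, the download/load/optimize/infer methods, and all paths are assumptions made for illustration and may not match the actual OpenDR API.

    # Minimal sketch of a learner-based perception workflow (assumed API, not
    # guaranteed to match the released OpenDR toolkit).
    from opendr.engine.data import Image
    from opendr.perception.pose_estimation import LightweightOpenPoseLearner

    # Instantiate a pose-estimation learner on the desired device.
    learner = LightweightOpenPoseLearner(device="cuda")
    learner.download(path="./models")          # fetch pretrained weights (assumed helper)
    learner.load("./models/openpose_default")  # load the weights into the learner (assumed path)

    # Optionally optimize the model (e.g., export to a lighter runtime format)
    # before deploying it on an embedded platform.
    learner.optimize()

    # Run inference on a single image; the result is a list of detected poses.
    img = Image.open("frame.jpg")
    poses = learner.infer(img)
    for pose in poses:
        print(pose)

The point of such an interface is that training, optimization, and inference are reached through the same learner object, so adapting a model to a different inference platform does not require changes to the surrounding application code.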
