Abstract
IoT and edge devices dedicated to running machine vision algorithms usually lag a few years behind the state-of-the-art hardware accelerator technologies currently available. This is mainly due to the non-negligible time required to implement and assess the related algorithms. Among the hardware platforms being explored to handle real-time machine vision tasks, multi-core CPU and Graphics Processing Unit (GPU) platforms remain more widely used than Field Programmable Gate Array (FPGA)- and Application Specific Integrated Circuit (ASIC)-based platforms. This is mainly due to the availability of powerful and user-friendly software development tools, their lower cost, and their high computational power at a reasonable form factor and power consumption. Nevertheless, the trend is now towards System-on-Chip (SoC) processors that combine ASIC/FPGA accelerators with GPUs and multi-core CPUs. This paper presents different state-of-the-art IoT and edge machine vision technologies along with their performance and limitations. It can serve as a reference for researchers designing state-of-the-art embedded IoT systems for machine vision applications.