Abstract

IoT and edge devices dedicated to running machine vision algorithms usually lag a few years behind state-of-the-art hardware accelerator technologies, mainly because of the non-negligible time required to implement and evaluate the related algorithms on new hardware. Among the hardware platforms explored for real-time machine vision tasks, multi-core CPU and Graphics Processing Unit (GPU) platforms remain more widely used than Field Programmable Gate Array (FPGA)- and Application Specific Integrated Circuit (ASIC)-based platforms. This preference is mainly due to the availability of powerful, user-friendly software development tools, their lower cost, and their high computational power at a reasonable form factor and power consumption. Nevertheless, the current trend is towards System-on-Chip (SoC) processors that combine ASIC/FPGA accelerators with GPUs and multi-core CPUs. This paper presents state-of-the-art IoT and edge machine vision technologies along with their performance and limitations, and can serve as a reference for researchers designing state-of-the-art IoT embedded systems for machine vision applications.
