Abstract

Convolutional neural networks have reached a stage of development where they can be applied to tasks such as posture, face, voice, and situation recognition with high accuracy, but inference with classic high-precision models requires a powerful server, which places high demands on the reliability and bandwidth of the network connecting the server to the edge device.

The traditional Internet of Things architecture can be described as follows: information from the outside world is gathered through sensors, cameras, or user input; the collected data is aggregated and pre-processed by the edge device (an edge unit is typically characterized by small dimensions, low cost, and low power consumption); the processed and compressed data is transmitted over the Internet to remote servers, where it is processed further with more resource-intensive and complex algorithms; the cloud computes results and updates information in the database; the new, more up-to-date information is sent back down to the edge device, where it is used to change the state of indicators, motors, lighting, and so on.

At the same time, there are tasks for which network delays are unacceptable, network reliability cannot be guaranteed, raw data from a microphone, camera, or other sensors may contain sensitive information that should not be sent to the server, or the expected number of devices would create a network and data-center load that is too expensive to handle.

For these cases, technological solutions have been developed that allow neural computations to be performed on the edge device in real time, that is, recognizing visual images at 25 frames per second. These techniques include: developing new neural network models capable of solving the same problems with fewer coefficients and simpler activation functions at only a modest drop in accuracy; applying specialized hardware capable of performing calculations with higher efficiency (efficiency here is the ratio of the number of operations per unit of time to the power consumed); and optimizing existing neural network models by reducing the precision of the coefficient representation, ignoring coefficients close to zero, and so on.

The first class includes Google's MobileNet V1 and V2. They are based on earlier models but have fewer layers and neurons, which reduces the total number of parameters to 0.5-4.2 million. The second class includes ASICs (application-specific integrated circuits), FPGAs (field-programmable gate arrays), and GPUs (graphics processing units). ASICs are designed for a specific neural network model and offer the highest performance and energy efficiency available today. The third class includes quantization algorithms and a thorough analysis of the structure of the neural network to identify which parts of it contribute most to the correct result. Such a conversion, although lossy, allows accuracy close to the original. Combining all of these techniques makes it possible to achieve the stated goal.
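The third class of optimizations (reducing the precision of the coefficient representation and ignoring coefficients close to zero) can be illustrated with a short sketch. The code below is not taken from the paper; it assumes a simple symmetric per-tensor 8-bit scheme and a hypothetical prune_threshold parameter, and only shows how a float32 weight tensor could be mapped to int8 with a small reconstruction error.

    import numpy as np

    def quantize_weights(w, num_bits=8, prune_threshold=1e-3):
        # Illustrative post-training quantization of one weight tensor:
        # near-zero weights are dropped, the rest are mapped to signed
        # integers with a single per-tensor scale.
        w = np.where(np.abs(w) < prune_threshold, 0.0, w)   # ignore coefficients close to zero
        qmax = 2 ** (num_bits - 1) - 1                       # 127 for int8
        scale = max(float(np.max(np.abs(w))) / qmax, 1e-8)   # per-tensor scale factor
        q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        # Recover approximate float weights to compare with the original.
        return q.astype(np.float32) * scale

    # Example: quantize a random 64x128 layer and measure the error introduced.
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.05, size=(64, 128)).astype(np.float32)
    q, scale = quantize_weights(w)
    print("max abs error:", np.max(np.abs(w - dequantize(q, scale))))

Such a conversion roughly quarters the memory needed for the weights (int8 instead of float32) and lets the integer arithmetic units of the edge device do most of the work, which is the kind of efficiency gain the abstract refers to.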


Introduction

A comparison is made between the classical Internet of Things architecture, with computation in the cloud, and a more modern one in which part of the logic is moved to the edge device. The technologies that can be applied to build such a system are reviewed, and a methodology is described that allows the stated goal to be achieved, that is, performing useful computations on the edge device in real time. Keywords: embedded systems; edge device; neural network; neural network quantization; convolutional neural network; Internet of Things.

