Abstract
This paper presents an approach to building an edge AI system based on the modern MLOps concept. On the hardware side we use an Nvidia Jetson Nano single-board computer, which acts as the server handling network requests, data storage, and self-deployment of machine learning models. We propose a working MLOps pipeline built entirely from industrial software tools such as TensorFlow 2, MLflow, and Apache Airflow, and integrated into the developed application. The pipeline consists of three operational stages: a) data storage and processing, i.e. fetching data from the database, cleansing, and transformation; b) machine learning modeling with synchronous hyper-parameter optimization and model registration; c) model deployment and serving. The whole pipeline is wrapped in a REST API created with the FastAPI micro-framework and orchestrated by the Apache Airflow service. To demonstrate the pipeline, we chose time-dependent temperature data to be learned and short-term forecast by a GRU-based recurrent neural network, whose hyper-parameter configuration is tuned by a genetic algorithm embedded in the second stage of the pipeline. We also discuss a design that connects the Nvidia Jetson Nano server to an inference edge device, such as an STM32H745 microcontroller, via sockets. Key words: edge computing, MLOps, machine learning, MLflow, genetic algorithm.
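The genetic-algorithm hyper-parameter search mentioned above can be sketched as follows. This is an illustrative, self-contained toy, not the authors' implementation: the search space and the fitness function are hypothetical placeholders (in the actual pipeline, fitness would be the validation loss of the GRU network trained with the candidate configuration, logged to MLflow).

```python
import random

# Hypothetical search space over GRU hyper-parameters (values are examples).
SEARCH_SPACE = {
    "units": [16, 32, 64, 128],   # GRU hidden units
    "lr": [1e-2, 1e-3, 1e-4],     # learning rate
    "window": [12, 24, 48],       # input sequence length
}

def random_individual():
    """Sample one hyper-parameter configuration uniformly at random."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(ind):
    # Placeholder objective: stands in for "train the GRU with this
    # configuration and return the negative validation loss".
    return -abs(ind["units"] - 64) - abs(ind["window"] - 24) \
        + 100 * (ind["lr"] == 1e-3)

def crossover(a, b):
    """Uniform crossover: each gene is inherited from either parent."""
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, p=0.2):
    """With probability p, resample each gene from the search space."""
    return {k: (random.choice(v) if random.random() < p else ind[k])
            for k, v in SEARCH_SPACE.items()}

def evolve(generations=20, pop_size=12, seed=0):
    random.seed(seed)
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection (elitist)
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the top half of each generation is carried over unchanged, the best fitness found never decreases; in the real pipeline each `fitness` call is a full model-training run, so the population size and generation count trade tuning quality against compute time on the Jetson Nano.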