Abstract
The steady growth of latency-sensitive, real-time applications for embedded systems encourages users to share sensor data simultaneously, yet streamed sensor data suffer from poor performance. In this paper, we propose a new edge-based, high-bandwidth scheduling method that decreases driver-profiling latency. The proposed multi-level memory scheduling method places data in key-value storage, flushes sensor data when edge memory is full, and reduces the number of I/O operations, the network latency, and the number of REST API calls in the edge cloud. As a result, the proposed method provides a significant read/write performance enhancement for real-time embedded systems. Specifically, the proposed application improves the number of requests per second by 3.5, 5, and 4 times compared with the existing light-weight FCN-LSTM, FCN-LSTM, and DeepConvRNN Attention solutions, respectively. It also improves the bandwidth by 5.89, 5.58, and 4.16 times over the same three solutions, respectively.
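The core mechanism described above can be sketched as a small in-memory key-value buffer on the edge server that accumulates sensor readings and writes them out in a single batch when edge memory fills up, so per-reading I/O operations and REST calls are avoided. This is a minimal illustration only; the class and parameter names (`EdgeSensorBuffer`, `capacity`, `flush_fn`) are hypothetical and not taken from the paper.

```python
from collections import OrderedDict


class EdgeSensorBuffer:
    """Hypothetical sketch of multi-level in-memory scheduling:
    sensor data live in key-value storage and are flushed to a
    slower backing store in one batch when edge memory is full."""

    def __init__(self, capacity, flush_fn):
        self.capacity = capacity      # max readings held in edge memory
        self.flush_fn = flush_fn      # batch write (e.g. to the edge cloud)
        self.store = OrderedDict()    # in-memory key-value storage
        self.flush_count = 0          # number of batched I/O operations

    def put(self, key, value):
        # Store the reading in memory; flush only when memory is full.
        self.store[key] = value
        if len(self.store) >= self.capacity:
            self.flush()

    def flush(self):
        # One I/O call for the whole batch instead of one per reading.
        if self.store:
            self.flush_fn(dict(self.store))
            self.flush_count += 1
            self.store.clear()
```

Under this sketch, buffering 1,000 readings with a capacity of 100 costs 10 batched writes rather than 1,000 individual ones, which is the kind of I/O reduction the abstract attributes to the scheduler.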
Highlights
Over the years, deep learning algorithms have revolutionized the autonomous car industry by achieving higher accuracy and performance, improving passenger comfort
We divided the dataset into two non-overlapping sets, including 75% for a training set and 25% for a test set
We experimented with a convolutional neural network (CNN) configuration to achieve a model with low computational costs and high efficiency, which is appropriate for embedded applications
Summary
Deep learning algorithms have revolutionized the autonomous car industry by achieving higher accuracy and performance, improving passenger comfort. However, an end-to-end latency issue arises from the higher level of computational resources required when autonomous cars simultaneously request driver profiling. Edge computing reduces this end-to-end latency by providing driver-profiling services to users closer to their vicinity. If edge resources are exhausted, service migration must be performed seamlessly to fulfill user requirements. We propose a new in-memory data scheduling technique that provides locality awareness for real-time execution and fulfills the requirements of users/clients (embedded systems). We also propose a novel architecture that deploys a deep-learning-based driver-profiling framework inside the edge server with lower latency, despite a higher number of responses to requests, for cars with embedded systems.