AI-driven systems such as autonomous vehicles and smart homes have recently become integral to daily life. Intelligent multi-sensors, once limited to a single data type, now process complex text and image data, demanding faster and more accurate processing. While integrating NPUs with sensors has improved processing speed and accuracy, challenges such as low resource utilization and long memory latency remain. This study proposes a method that reduces processing time and improves resource utilization by virtualizing NPUs so that multiple deep-learning models run simultaneously, leveraging a hardware scheduler and data-prefetching techniques. In experiments with 30,000 systolic array (SA) resources, the hardware scheduler reduced memory cycles by over 10% across all models, with reductions of 30% for NCF and 70% for DLRM. The hardware scheduler effectively reduced memory latency and NPU idle time in resource-constrained environments with frequent context switching. This approach is particularly valuable for real-time applications such as autonomous driving, enabling smooth transitions between tasks such as object detection and route planning, and it enhances multitasking in smart homes by reducing latency when managing diverse data streams. The proposed system is therefore well suited to resource-constrained environments that demand efficient multitasking and low-latency processing.
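
To make the latency-hiding idea concrete, the following is a minimal Python sketch of a round-robin scheduler that prefetches the data for the next tile while the current tile computes, in the spirit of the approach described above. All names (Model, run_prefetch_scheduler) and cycle costs (MEM_CYCLES, COMPUTE_CYCLES) are illustrative assumptions for the sketch, not the paper's actual scheduler design or measured values.

    from dataclasses import dataclass
    from collections import deque

    # Illustrative, hypothetical cycle costs; not measurements from the paper.
    MEM_CYCLES = 100      # cycles to fetch one tile of weights/activations
    COMPUTE_CYCLES = 60   # cycles for the SA to process one tile

    @dataclass
    class Model:
        name: str
        tiles_left: int   # compute tiles remaining for this model

    def run_serial(models):
        # Baseline: every tile stalls on its own memory fetch before computing.
        return sum(m.tiles_left * (MEM_CYCLES + COMPUTE_CYCLES) for m in models)

    def run_prefetch_scheduler(models):
        # Round-robin over models; while one tile computes, the fetch for the
        # next scheduled tile proceeds in parallel, hiding memory latency.
        queue = deque(m for m in models if m.tiles_left > 0)
        if not queue:
            return 0
        cycles = MEM_CYCLES              # the very first fetch cannot be hidden
        while queue:
            m = queue.popleft()
            m.tiles_left -= 1
            more = bool(queue) or m.tiles_left > 0
            # Compute and the next prefetch overlap: pay whichever is longer.
            cycles += max(COMPUTE_CYCLES, MEM_CYCLES if more else 0)
            if m.tiles_left > 0:
                queue.append(m)          # context switch to keep the SA busy
        return cycles

    if __name__ == "__main__":
        make = lambda: [Model("ncf", 4), Model("dlrm", 6)]
        print("serial cycles   :", run_serial(make()))
        print("prefetch cycles :", run_prefetch_scheduler(make()))

With these illustrative costs, the serial baseline spends 1,600 cycles while the prefetching schedule spends 1,060, because fetch latency is hidden behind compute for all but the first tile; this is the same mechanism by which the proposed scheduler keeps SA resources busy across frequent context switches between models.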