Abstract

Real-time AI is commonly supported by two computing models: edge computing and cloud computing. This research compares the two approaches, with latency, security features, and data processing capabilities as the most critical points of comparison. Real-time AI applications are projected to grow robustly through 2020, many in domains such as autonomous vehicle systems and IoT devices that demand low-latency data processing. Cloud computing offloads data processing to elastic, centralized infrastructure, but it suffers latency penalties when the distance between the data source and the processing center is large. Edge computing, by contrast, performs computation closer to the data source, which can reduce latency and improve real-time performance. This research assesses both models on the basis of a literature review, technical papers, and case studies from industries that depend heavily on real-time AI. According to the findings, edge computing is typically more effective for latency-sensitive workloads, while cloud computing performs better for throughput-intensive applications. Security, however, cuts both ways: edge computing strengthens data-handling privacy by keeping data near its source, yet it also introduces new risks. The study therefore concludes that there is no clear winner; the choice depends on the application, and a hybrid approach is recommended where both low latency and substantial computational capacity are required. Further research should establish coordinated approaches for implementing these models together.
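The latency trade-off described above can be illustrated with a minimal sketch: total latency is modeled as network round-trip time plus on-node processing time, so a nearby but less powerful edge node can still beat a distant but faster cloud data center. The parameter values below are illustrative assumptions, not figures reported by the study.

```python
# Minimal sketch of the edge-vs-cloud latency trade-off.
# All numbers are illustrative assumptions, not measurements from the study.

def end_to_end_latency_ms(network_rtt_ms: float, processing_ms: float) -> float:
    """Total latency = network round trip + on-node processing time."""
    return network_rtt_ms + processing_ms

# Edge node: short network path, but a less powerful processor.
edge = end_to_end_latency_ms(network_rtt_ms=2.0, processing_ms=15.0)

# Cloud data center: long network path, but faster centralized processing.
cloud = end_to_end_latency_ms(network_rtt_ms=60.0, processing_ms=5.0)

print(f"edge:  {edge:.1f} ms")   # edge:  17.0 ms
print(f"cloud: {cloud:.1f} ms")  # cloud: 65.0 ms
```

Under these assumed values the edge deployment wins for latency-sensitive workloads, while the cloud's processing advantage would dominate for throughput-intensive batches, mirroring the abstract's conclusion.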