Ultra-Low Latency V2X Systems with AI-Driven Resource Optimization
Achieving ultra-low latency in Vehicle-to-Everything (V2X) communication is essential for ensuring the safety and effectiveness of autonomous vehicles (AVs). However, existing systems often struggle to meet the stringent latency demands, particularly in complex and rapidly changing urban environments. This study introduces an innovative framework that utilizes artificial intelligence (AI) for dynamic resource allocation in V2X networks. By integrating real-time data analysis, edge computing, and 5G capabilities, the proposed approach effectively minimizes latency. Simulation results indicate up to a 35% reduction in latency compared to conventional models, underscoring the potential of AI in enhancing the responsiveness and reliability of V2X systems. These findings offer a significant step toward making autonomous vehicle deployments more viable in smart cities.
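The abstract above describes AI-driven dynamic resource allocation for latency minimization in V2X networks. A minimal sketch of the idea, assuming a simple greedy deadline-first policy (all names, data structures, and the policy itself are illustrative assumptions, not the paper's method):

```python
# Illustrative sketch of latency-aware resource allocation in a V2X edge
# network: vehicles request compute slots and the allocator greedily
# serves the most deadline-critical requests first. All names and the
# greedy policy are assumptions for illustration, not the paper's method.
from dataclasses import dataclass

@dataclass
class Request:
    vehicle_id: str
    deadline_ms: float   # latency budget for this message
    load: int            # resource blocks needed

def allocate(requests, capacity):
    """Greedily admit the most urgent requests that fit into `capacity`."""
    granted = []
    for req in sorted(requests, key=lambda r: r.deadline_ms):
        if req.load <= capacity:
            capacity -= req.load
            granted.append(req.vehicle_id)
    return granted

reqs = [Request("av-1", 10.0, 4), Request("av-2", 5.0, 3), Request("av-3", 8.0, 5)]
print(allocate(reqs, capacity=8))  # av-2 (5 ms budget) first, then av-3
```

A learned policy, as the paper proposes, would replace the fixed deadline-first ordering with a model trained on real-time network state.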
- Research Article
1
- 10.3390/vehicles7020032
- Apr 2, 2025
- Vehicles
Testing and pilot operation of autonomous vehicles is currently booming in real-world settings, yet the validation and verification methods, the legislation, and the methodology for collecting data on autonomous vehicles’ performance and safety remain unstandardized. The safety of autonomous vehicles can be inferred from the collision and disengagement reports provided by manufacturers and operators; a disengagement report documents in detail the instances in which a human driver or operator took control of an autonomous vehicle during testing. Disengagement reports are primarily intended for safety and performance evaluation of autonomous vehicles, but can they also serve as a basis for determining the readiness and technological progress of autonomous driving technology? This study analyzes disengagement reports to assess their utility in determining autonomous vehicles’ progress and readiness. Our findings indicate a declining trend in reported disengagements, despite increased operational distances, suggesting possible improvements in autonomous vehicle technology. However, disparities in data collection, varying operational design domains, and inconsistent reporting practices among manufacturers limit direct comparability. These factors challenge the reliability of disengagement reports as a definitive measure of technological evolution. The study highlights the need for more standardized and transparent reporting to better assess autonomous vehicle safety and development trends.
- Research Article
2
- 10.12694/scpe.v20i2.1558
- May 2, 2019
- Scalable Computing: Practice and Experience
Special Issue on Recent Trends and Future of Fog and Edge Computing, Services and Enabling Technologies
- Research Article
34
- 10.1109/jiot.2021.3099164
- Mar 1, 2022
- IEEE Internet of Things Journal
In recent years, many deep learning models have been adopted in autonomous driving. At the same time, these models introduce new vulnerabilities that may compromise the safety of autonomous vehicles. Specifically, recent studies have demonstrated that adversarial attacks can cause a significant decline in detection precision of deep learning-based 3-D object detection models. Although driving safety is the ultimate concern for autonomous driving, there is no comprehensive study on the linkage between the performance of deep learning models and the driving safety of autonomous vehicles under adversarial attacks. In this article, we investigate the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on the driving safety of vision-based autonomous vehicles rather than the detection precision of deep learning models. In particular, we consider two state-of-the-art models in vision-based 3-D object detection: 1) Stereo R-CNN and 2) DSGN. To evaluate driving safety, we propose an end-to-end evaluation framework with a set of driving safety performance metrics. By analyzing the results of our extensive evaluation experiments, we find that: 1) the attack’s impact on the driving safety of autonomous vehicles and the attack’s impact on the precision of 3-D object detectors are decoupled and 2) the DSGN model demonstrates stronger robustness to adversarial attacks than the Stereo R-CNN model. In addition, we further investigate the causes behind the two findings with an ablation study. The findings of this article provide a new perspective to evaluate adversarial attacks and guide the selection of deep learning models in autonomous driving.
- Book Chapter
1
- 10.62311/nesx/46687
- Nov 1, 2024
Abstract: This chapter explores how AI-enabled edge computing is revolutionizing the Internet of Things (IoT) by delivering real-time optimization and transforming data processing capabilities. By integrating AI algorithms directly into edge devices, edge computing reduces latency, enhances real-time decision-making, and enables IoT systems to operate with greater efficiency. The chapter discusses various applications, including predictive maintenance, anomaly detection, and dynamic optimization in sectors such as smart cities, healthcare, manufacturing, and autonomous vehicles. It also addresses the technical challenges of deploying AI at the edge, including hardware limitations, data security concerns, and the need for scalable infrastructure. Future trends, such as the integration of 5G and the potential impact of quantum computing, are explored as game-changers in the evolution of AI-driven edge computing for IoT optimization. Keywords: AI-enabled edge computing, IoT, real-time optimization, latency reduction, predictive maintenance, anomaly detection, smart cities, healthcare, manufacturing, autonomous vehicles, data security, 5G, quantum computing, scalable infrastructure.
- Research Article
- 10.61856/tpxv1543
- Nov 27, 2024
- Gateway Journal for Modern Studies and Research (GJMSR)
The sixth generation (6G) of communication systems represents the next leap in wireless technology, promising ultra-fast speeds, greater efficiency, and revolutionary new applications. Building on the evolution from 1G’s analog systems to today’s 5G, 6G will integrate advanced technologies such as artificial intelligence (AI), virtual reality (VR), the Internet of Everything (IoE), and terahertz (THz) communications to deliver terabit-level data rates and ultra-low latency. This will enable applications like autonomous vehicles, immersive digital realities, industrial automation, remote healthcare, and smart cities. Research shows that 6G will focus strongly on security, privacy, and user experience, addressing challenges like network capacity and reliability. Studies highlight the vital role of machine learning, edge computing, massive MIMO, and mmWave/THz spectrum in realizing 6G’s vision. Extensive surveys by various scholars have explored 6G’s potential for integrating quantum computing, AI-driven network optimization, UAV networks, space and deep-sea connectivity, and bio-nano communications. Ultimately, 6G aims to connect billions of devices globally, bridging connectivity gaps in remote regions and enabling transformative services across industries. By overcoming technical and societal challenges, 6G is expected to redefine global communication infrastructure and support unprecedented levels of interconnectivity and innovation.
- Research Article
- 10.52458/28374061.v1.iss4.ijtaia.a3.2023
- Jan 1, 2023
- International Journal of Technological Advancements and Industrial Applications
The rapid evolution of wireless communication has led to the emergence of 5G technology, revolutionizing global connectivity and enabling advanced applications across various industries. This review paper provides a comprehensive analysis of the evolution of 5G technology from 2000 to 2024, highlighting the key advancements, challenges, and future prospects. The transition from 3G and 4G to 5G has been driven by the need for higher data speeds, ultra-low latency, and massive device connectivity to support emerging technologies such as the Internet of Things (IoT), smart cities, autonomous vehicles, and artificial intelligence (AI)-powered applications. This paper explores the technological milestones that have shaped 5G, including millimeter-wave (mmWave) spectrum, massive MIMO, network slicing, and edge computing. It also examines the challenges associated with 5G deployment, such as infrastructure costs, security vulnerabilities, regulatory concerns, and spectrum allocation issues. Furthermore, the impact of 5G on industries, including healthcare, transportation, and industrial automation, is discussed to showcase its transformative potential. Finally, the paper outlines the future directions beyond 5G, discussing early developments toward 6G networks and next-generation wireless technologies. The findings of this review provide valuable insights into the continuous evolution of mobile networks and offer a foundation for future research in wireless communication technologies.
- Dissertation
- 10.21268/20200123-0
- Jan 23, 2020
Autonomous vehicles will share the road with human drivers within the next couple of years. This will revolutionize road traffic and provide a positive benefit for road safety, traffic density, emissions, and demographic changes. One of the significant open challenges is the lack of established and cost-efficient verification and validation approaches for assuring the safety of autonomous vehicles. The general public and product liability regulations impose high standards on manufacturers regarding the safe operation of their autonomous vehicles. The vast number of real-world traffic situations has to be considered in the verification and validation. Today’s conventional engineering methods are not adequate for providing such guarantees for autonomous vehicles in a cost-efficient way. One strategy for reducing the costs of quality assurance is transferring a significant part of the verification and validation from road tests to (system-level) simulations. The vast number and high complexity of real-world situations complicate the exhaustive verification of autonomous vehicles in simulations. It is not clear how simulations address the vast number of real-world situations with sufficient realism and how their results transfer to the real road. Extensive coverage of real-world situations in simulations requires the integration of development and operation. This thesis presents an engineering approach that integrates the development and operation of autonomous vehicles seamlessly using runtime monitoring. The runtime monitoring verifies whether autonomous vehicles satisfy their requirements and operate within safe limits which have been verified in the simulations. Safety of autonomous vehicles is subject to the scope of verified traffic situations in simulations. Systematic and comprehensive simulations support the improvement of autonomous vehicles and coverage of traffic situations.
Results of the runtime monitoring during operation are transferred to the development for the verification of autonomous vehicles and their safe limits in simulations with additional traffic situations. The incomplete verification of autonomous vehicles for the vast number of real-world traffic situations in simulations requires the validation of simulation results and additional monitoring in the real world. Results from simulations are transferred to the runtime monitoring during operation in the real world for validating the realism of the simulations and maintaining the vehicle safety in critical situations. Vehicle data and real-world situations possess high complexities and, therefore, impact the complexity and efficiency of the verification in simulations. The runtime monitoring abstracts from internal data of autonomous vehicles and real-world situations in the evaluation by introducing an abstract semantic representation from natural language requirements. A case study evaluates the engineering approach for an industrial lane change assistant and real-world traffic data recorded in road tests on German highways.
- Research Article
1
- 10.52783/jisem.v10i5s.667
- Jan 24, 2025
- Journal of Information Systems Engineering and Management
As urbanization accelerates, smart cities are emerging as innovative ecosystems that integrate technology to address challenges related to sustainability, mobility, and infrastructure. Among these technologies, edge computing has gained prominence as a transformative solution to optimize data processing and resource management in urban environments. This paper explores the role of edge computing in enabling efficient, real-time decision-making by bringing computational power closer to data sources. Unlike traditional cloud-centric models, edge computing reduces latency, enhances data security, and improves bandwidth utilization by distributing data processing across decentralized nodes. The integration of edge computing in smart cities supports various applications, including intelligent transportation systems, energy-efficient smart grids, and real-time public safety monitoring. By processing data locally, edge devices can handle massive volumes of information generated by Internet of Things (IoT) devices, ensuring seamless service delivery without overwhelming centralized systems. Furthermore, this decentralized approach enhances resilience by reducing dependency on remote servers, a crucial factor for mission-critical urban applications. A significant focus of this paper is on resource management, particularly the allocation of computational resources across edge nodes. Strategies such as dynamic resource scheduling, load balancing, and adaptive task offloading are analyzed for their effectiveness in maintaining operational efficiency. Moreover, the research highlights the importance of leveraging machine learning and artificial intelligence algorithms within edge computing frameworks to predict traffic patterns, optimize energy consumption, and enhance waste management systems. Security and privacy concerns, often considered barriers to edge computing adoption, are addressed through advanced encryption techniques and secure communication protocols. 
This paper also evaluates challenges associated with edge computing deployment, such as hardware limitations, interoperability issues, and the need for robust regulatory frameworks. Case studies from leading smart city projects illustrate successful implementations and offer insights into overcoming these obstacles. In addition to technical aspects, this research underscores the socioeconomic benefits of edge computing in urban settings. Improved public services, reduced environmental impact, and cost-effective infrastructure management demonstrate the potential of edge computing to revolutionize city living. By enabling real-time analytics and localized decision-making, edge computing supports a more responsive and adaptive urban ecosystem. The findings presented in this paper emphasize the critical role of edge computing in bridging the gap between urban challenges and technological solutions. As cities continue to evolve, adopting edge computing technologies will not only enhance operational efficiency but also foster innovation, sustainability, and inclusivity. Future research directions include exploring hybrid models combining edge and cloud computing, advancing hardware capabilities, and developing standardized frameworks to accelerate adoption. This paper contributes to the growing body of knowledge on edge computing, offering a comprehensive analysis of its applications, challenges, and potential in shaping the future of smart cities. By optimizing data processing and resource management, edge computing emerges as a cornerstone technology for creating smarter, more resilient urban environments.
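The adaptive task offloading mentioned above can be illustrated with a minimal sketch: execute a task locally or offload it to an edge node, whichever has the lower estimated completion time. All parameters and figures below are hypothetical, not taken from the paper:

```python
# Sketch of an adaptive task-offloading decision: compare local execution
# time against uplink-transfer-plus-edge-execution time and pick the
# smaller. Parameters are illustrative assumptions.
def completion_time_local(task_cycles, cpu_hz):
    return task_cycles / cpu_hz

def completion_time_edge(task_bits, uplink_bps, task_cycles, edge_cpu_hz):
    # transfer delay + remote compute delay
    return task_bits / uplink_bps + task_cycles / edge_cpu_hz

def offload_decision(task_bits, task_cycles, cpu_hz, uplink_bps, edge_cpu_hz):
    local = completion_time_local(task_cycles, cpu_hz)
    edge = completion_time_edge(task_bits, uplink_bps, task_cycles, edge_cpu_hz)
    return ("edge", edge) if edge < local else ("local", local)

# A 2 Mbit task needing 4e9 CPU cycles: a 2 GHz device vs. a 20 GHz edge
# node reached over a 100 Mbit/s uplink.
choice, t = offload_decision(2e6, 4e9, 2e9, 100e6, 20e9)
print(choice, round(t, 3))  # edge 0.22
```

Production schedulers would add queueing delay, energy cost, and load balancing across nodes, which this two-term model omits for clarity.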
- Research Article
42
- 10.1109/access.2022.3183634
- Jan 1, 2022
- IEEE Access
With the increasingly stringent QoS constraints (e.g., latency, bandwidth, jitter) imposed by novel applications (e.g., e-Health, autonomous vehicles, smart cities, etc.), as well as the rapidly increasing number of connected IoT (Internet of Things) devices, the core network is becoming increasingly congested. To cope with those constraints, Edge Computing (EC) is emerging as an innovative computing paradigm that leverages Cloud computing and brings it closer to the customer. “EC” refers to transferring computing power and intelligence from the central Cloud to the network’s Edge. With that, EC promotes the idea of processing and caching data at the Edge, thus reducing network congestion and latency. This paper presents a detailed, thorough, and well-structured assessment of Edge Computing and its enabling technologies. Initially, we start by defining EC from the ground up, outlining its architectures and evolution from Cloudlets to Multi-Access Edge Computing. Next, we survey recent studies on the main cornerstones of an EC system, including resource management, computation offloading, data management, network management, etc. Besides, we emphasize EC technology enablers, starting with Edge Intelligence, the branch of Artificial Intelligence (AI) that integrates AI models at resource-constrained edge nodes with significant heterogeneity and mobility. Then, moving on to 5G and its empowering technologies, we explore how EC and 5G complement each other. After that, we study virtualization and containerization as promising hosting runtimes for edge applications. Further to that, we delineate a variety of EC use-case scenarios, e.g., smart cities, e-Health, military applications, etc. Finally, we conclude our survey by highlighting the role of EC integration with future concerns regarding green energy and standardization.
- Conference Article
8
- 10.1109/glocom.2018.8648050
- Dec 1, 2018
Vehicular communication network is a core application scenario in the fifth generation (5G) mobile communication system which requires ultra high data rate and ultra low latency. Most recently, non-orthogonal multiple access (NOMA) has been regarded as a promising technique for future 5G systems due to its capability in significantly improving the spectral efficiency and reducing the data transmission latency. In this paper, we propose to introduce NOMA in D2D-enabled V2X networks, where resource sharing based on spatial reuse for different V2X communications are permitted through centralized resource management. Considering the complicated interference scenario caused by NOMA and spatial reuse-based resource sharing in the investigated NOMA-integrated V2X networks, we construct an interference hypergraph to model the interference relationships among different communication groups. In addition, based on the constructed hypergraph, we further propose an interference hypergraph-based resource allocation (IHG-RA) scheme with cluster coloring algorithm, which can lead to both effective and efficient QoS-guaranteed resource block (RB) assignment with low computational complexity. Simulation results verify the efficiency of our proposed IHG-RA scheme for NOMA-integrated V2X communications in improving the network sum rate.
- Research Article
128
- 10.1109/tits.2021.3119921
- Mar 1, 2022
- IEEE Transactions on Intelligent Transportation Systems
Speech emotion recognition (SER) is becoming the main human–computer interaction logic for autonomous vehicles in the next generation of intelligent transportation systems (ITSs). It can improve not only the safety of autonomous vehicles but also the personalized in-vehicle experience. However, current vehicle-mounted SER systems still suffer from two major shortcomings. The first is the insufficient service capacity of the vehicle communication network, which is unable to meet the SER needs of autonomous vehicles in next-generation ITSs in terms of the data transmission rate, power consumption, and latency. The second is that the accuracy of SER is poor and cannot provide sufficient interactivity and personalization between users and vehicles. To address these issues, we propose an SER-enhanced traffic efficiency solution for autonomous vehicles in a 5G-enabled space–air–ground integrated network (SAGIN)-based ITS. First, we convert the vehicle speech information data into spectrograms and input them into an AlexNet network model to obtain the high-level features of the vehicle speech acoustic model. At the same time, we convert the vehicle speech information data into text information and input it into the Bidirectional Encoder Representations from Transformers (BERT) model to obtain the high-level features of the corresponding text model. Finally, these two sets of high-level features are cascaded together to obtain fused features, which are sent to a softmax classifier for emotion matching and classification. Experiments show that the proposed solution can improve not only the SAGIN’s service capabilities, resulting in a large capacity, high bandwidth, ultralow latency, and high reliability, but also the accuracy of vehicle SER as well as the performance, practicality, and user experience of the ITS.
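The fusion step described in this abstract, cascading acoustic and text features and scoring emotion classes with a softmax classifier, can be sketched as follows. Dimensions, weights, and feature values are toy stand-ins, not the paper's trained AlexNet/BERT models:

```python
# Sketch of late feature fusion for SER: concatenate ("cascade") the
# acoustic and text feature vectors, apply a linear layer, then softmax
# over emotion classes. All values are illustrative toy numbers.
import math
import random

def softmax(z):
    m = max(z)                          # subtract max for stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def classify(acoustic_feat, text_feat, weights, emotions):
    fused = acoustic_feat + text_feat   # feature cascade (concatenation)
    scores = [sum(w * x for w, x in zip(row, fused)) for row in weights]
    probs = softmax(scores)
    return emotions[probs.index(max(probs))], probs

random.seed(0)
emotions = ["neutral", "happy", "angry"]
acoustic = [0.2, 0.9]                   # e.g. from a CNN on the spectrogram
text = [0.1, 0.7]                       # e.g. from a text encoder
weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in emotions]
label, probs = classify(acoustic, text, weights, emotions)
print(label, [round(p, 2) for p in probs])
```

In the actual system the weight matrix would be learned jointly with the upstream encoders; the concatenate-then-classify structure is the part the abstract specifies.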
- Conference Article
20
- 10.1109/ieee.iciot.2017.12
- Jun 1, 2017
With the advances in the areas of mobile computing and wireless communications, V2X systems have become a promising technology enabling deployment of applications providing road safety, traffic efficiency and infotainment. Due to their increasing popularity, V2X networks have become a major target for attackers, making them vulnerable to security threats and network conditions, and thus affecting the safety of passengers, vehicles and roads. Existing research in V2X does not effectively address the safety, security and performance limitation threats to connected vehicles, as a result of considering these aspects separately instead of jointly. In this work, we focus on the analysis of the tradeoffs between safety, security and performance of V2X systems and propose a dynamic adaptability approach considering all three aspects jointly based on application needs and context to achieve maximum safety on the roads using an Internet of vehicles. Experiments with a simple V2V highway scenario demonstrate that an adaptive safety/security approach is essential and V2X systems have great potential for providing low reaction times.
- Research Article
- 10.31854/2307-1303-2024-12-2-40-47
- Dec 23, 2024
- Telecom IT
Problem statement. Due to their mobility, flexibility, ease of deployment, and low cost, Autonomous Aerial Vehicles (AAVs) play an important role in future wireless networks. However, their practical implementation faces challenges, including energy constraints, dynamic channel variations, interference management, and the need for efficient resource allocation to ensure seamless connectivity for ground users. Traditional optimization methods often fail to adapt to these complexities in real time, limiting the effectiveness of AAV-assisted wireless networks. The aim of this work is to provide a comprehensive review of resource allocation in AAV-assisted wireless networks, focusing on power and bandwidth optimization strategies, as well as the key challenges in ensuring efficient and reliable communication. Methods: this study conducts a systematic review of the existing literature, analyzing optimization approaches such as Game Theory and Artificial Intelligence for efficient resource allocation in AAV-assisted wireless networks. Novelty: this study analyzes resource allocation challenges in AAV-assisted networks, focusing on the interdependence of power and bandwidth allocation, and explores optimization techniques such as Game Theory and Artificial Intelligence. Results. The analysis in this paper demonstrates that Game Theory and Artificial Intelligence based approaches significantly improve resource allocation efficiency. Additionally, the study identifies key challenges, including heterogeneous network density, security concerns, and complex channel modeling, providing insights for future research. Practical / Theoretical Relevance: this study advances the theoretical understanding of resource allocation in AAV-assisted wireless networks by integrating optimization strategies from Game Theory and Artificial Intelligence.
Practically, it provides insight into enhancing network efficiency, adaptability, and security, making AAV-based communication more viable for real-world applications such as disaster recovery, remote-area coverage, and IoT data collection.
- Research Article
1
- 10.37391/ijeer.120250
- Jun 30, 2024
- International Journal of Electrical and Electronics Research
This research proposes a novel approach for efficient resource allocation in wireless communication systems. It combines dynamic neural networks, Proximal Policy Optimization (PPO), and Edge Computing Orchestrator (ECO) for latency-aware and energy-efficient resource allocation. The proposed system integrates multiple components, including a dynamic neural network, PPO, ECO, and a Mobile Edge Computing (MEC) server. The experimental methodology involves utilizing the NS-3 simulation platform to assess latency and energy efficiency in resource allocation within a wireless communication network, incorporating an ECO, MEC server, and dynamic task scheduling algorithms. It demonstrates a holistic and adaptable approach to resource allocation in dynamic environments, showcasing a notable reduction in latency for devices and tasks. Latency values range from 5 to 20 milliseconds, with corresponding resource utilization percentages varying between 80% and 95%. Additionally, energy-efficient resource allocation demonstrates a commendable reduction in energy consumption, with measured values ranging from 10 to 30 watts, coupled with efficient resource usage percentages ranging from 70% to 85%. These outcomes validate the efficacy of achieving both latency-aware and energy-efficient resource allocation for enhanced wireless communication systems. The proposed system has broad applications in healthcare, smart cities, IoT, real-time analytics, autonomous vehicles, and augmented reality, offering a valuable solution to optimize energy consumption, reduce latency, and enhance system efficiency in these industries.
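A reinforcement-learning agent such as the PPO component described above needs a scalar reward that trades latency against energy. A minimal sketch of such a reward signal, where the weighting scheme and all budget figures are illustrative assumptions rather than the paper's design (the budgets loosely echo the 5-20 ms and 10-30 W ranges reported in the abstract):

```python
# Sketch of a latency- and energy-aware reward signal that a PPO agent
# could optimize during resource allocation. Weights and budgets are
# illustrative assumptions, not the paper's design.
def reward(latency_ms, energy_w, lat_budget_ms=20.0, energy_budget_w=30.0,
           w_lat=0.6, w_energy=0.4):
    """Higher is better: approaches 1.0 as latency and energy approach
    zero, and 0.0 when both hit their budgets. Terms clamp at zero so
    budget overruns are not rewarded."""
    lat_term = max(0.0, 1.0 - latency_ms / lat_budget_ms)
    energy_term = max(0.0, 1.0 - energy_w / energy_budget_w)
    return w_lat * lat_term + w_energy * energy_term

# An allocation achieving 5 ms at 10 W scores well; one that exhausts
# both budgets scores zero.
print(round(reward(5.0, 10.0), 3))   # 0.717
print(round(reward(20.0, 30.0), 3))  # 0.0
```

In a full PPO loop this value would be returned by the environment after each allocation step; the clipped-objective policy update itself is unchanged by the reward's internals.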
- Research Article
81
- 10.1109/jiot.2018.2875670
- Feb 1, 2019
- IEEE Internet of Things Journal
Vehicular communication network is a core application scenario in the fifth generation (5G) mobile communication system which requires ultrahigh data rate and ultralow latency. Most recently, nonorthogonal multiple access (NOMA) has been regarded as a promising technique for future 5G systems due to its capability in significantly improving the spectral efficiency and reducing the data transmission latency. In this paper, we propose to introduce NOMA in device-to-device-enhanced vehicle-to-everything (V2X) networks, where resource sharing based on spatial reuse for different V2X communications are permitted through centralized resource management. Considering the complicated interference scenario caused by NOMA and spatial reuse-based resource sharing in the investigated NOMA-integrated V2X (NOMA-V2X) networks, we construct an interference hypergraph (IHG) to model the interference relationships among different communication groups. In addition, based on the constructed IHG, we further propose an IHG-based resource allocation (IHG-RA) scheme with cluster coloring algorithm, which can lead to both effective and efficient resource block assignment with low computational complexity. Simulation results verify the efficiency of our proposed IHG-RA scheme for NOMA-V2X communications in improving the network sum rate.
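The core idea behind the IHG-RA scheme above is that interfering communication groups must not share a resource block, which reduces assignment to a coloring problem. A simplified sketch using greedy coloring of a pairwise conflict graph (the paper uses a hypergraph with a cluster-coloring algorithm; this pairwise version, with made-up group names, is only a stand-in for the concept):

```python
# Sketch of resource-block (RB) assignment as conflict-graph coloring:
# each communication group receives the lowest RB index not used by any
# group it interferes with. A simplified pairwise stand-in for the
# paper's hypergraph-based cluster coloring; group names are made up.
def assign_resource_blocks(groups, conflicts):
    """Greedy coloring over `groups` in the given order; `conflicts`
    maps each group to the groups it interferes with."""
    rb = {}
    for g in groups:
        taken = {rb[n] for n in conflicts.get(g, ()) if n in rb}
        color = 0
        while color in taken:
            color += 1
        rb[g] = color
    return rb

groups = ["v2v-1", "v2v-2", "v2i-1", "v2i-2"]
conflicts = {
    "v2v-1": ["v2v-2", "v2i-1"],
    "v2v-2": ["v2v-1"],
    "v2i-1": ["v2v-1", "v2i-2"],
    "v2i-2": ["v2i-1"],
}
print(assign_resource_blocks(groups, conflicts))
# {'v2v-1': 0, 'v2v-2': 1, 'v2i-1': 1, 'v2i-2': 0}
```

Four groups fit into two resource blocks here because non-interfering groups reuse spectrum, which is exactly the spatial-reuse gain the NOMA-V2X papers exploit; the hypergraph formulation additionally captures cumulative interference from multiple simultaneous transmitters, which pairwise edges cannot.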