Graph Neural Network based Initialization for Timing Driven Placement

Abstract

Timing-driven placement is critical to achieving timing closure, especially as designs grow increasingly complex. This paper presents a novel Timing-Driven Placement (TDP) framework that integrates a Graph Neural Network (GNN), Dirichlet boundary conditions, and a nonlinear placement engine to optimize placement quality with timing awareness throughout the flow. The proposed methodology begins by clustering components based on their interconnection topology, while Dirichlet boundary conditions are applied to handle fixed components such as IOs and macros. This yields a reduced graph with minimized inter-cluster connectivity, simplifying timing optimization. A GNN is then trained to learn a generalized and optimized mapping from circuit connectivity to physical wirelength. To improve early-stage timing estimation, virtual buffers are inserted prior to Static Timing Analysis (STA) to eliminate maximum-capacitance violations. With this improved timing fidelity, STA provides pin-level slack, which is then used to dynamically adjust interconnection weights, guiding the placement of timing-critical components toward improved timing closure. Experimental results on the ICCAD 2015 contest benchmarks demonstrate that our algorithm improves worst negative slack and total negative slack by 6% compared to the state-of-the-art method.
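The slack-to-weight step described in the abstract can be sketched as follows. This is a hypothetical illustration of dynamic net reweighting: the function name, parameters, and the linear criticality model are assumptions, not details from the paper.

```python
def reweight_nets(net_slacks, base_weight=1.0, alpha=2.0):
    """Map per-net worst slack (from STA) to interconnection weights:
    nets with more negative slack get proportionally larger weights,
    so the placer pulls timing-critical components closer together."""
    wns = min(net_slacks.values())  # worst negative slack over all nets
    weights = {}
    for net, slack in net_slacks.items():
        if slack < 0 and wns < 0:
            # Criticality in (0, 1]: 1 for the worst net, approaching 0 at zero slack.
            criticality = slack / wns
            weights[net] = base_weight * (1.0 + alpha * criticality)
        else:
            weights[net] = base_weight  # non-critical nets keep the base weight
    return weights

weights = reweight_nets({"n1": -0.30, "n2": -0.05, "n3": 0.12})
```

A placer would then scale each net's wirelength term by its weight in the next optimization pass, so the worst nets dominate the objective.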

Similar Papers
  • Conference Article
  • Cited by 1
  • 10.1117/12.2681612
Hybrid text classification model based on graph convolution network and neural network
  • Jun 1, 2023
  • Zhaohe Dong + 2 more

With the rapid development of graph neural network technology, its applications in natural language processing have become increasingly extensive, and text classification is one of the most important. Everyday life produces large amounts of non-Euclidean text data, which poses a great challenge to traditional classification methods designed for regular structures. Graph convolutional neural networks (GCNs) are considered able to model both the structural attributes and the node feature information of graphs well, and are gradually becoming a good choice for text classification on graph data. This paper proposes a text classification model based on a graph convolution network with local neural-network enhancement. On top of GCN feature extraction, a Bi-LSTM is used to complement the results, enriching the feature information by capturing local information; an attention mechanism is integrated, and the evaluation values are fused to improve classification accuracy. Experiments verify that this method achieves better results than existing classification methods on many classical datasets such as 20NG and OHSUMED.
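For readers unfamiliar with the GCN building block that this model (and several papers below) rely on, here is a minimal dense sketch of the standard propagation rule H = ReLU(D^-1/2 (A+I) D^-1/2 X W). This is the textbook rule, not code from the paper:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer on a dense adjacency matrix A,
    node feature matrix X, and weight matrix W."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # inverse sqrt of node degrees
    # Symmetric normalization: D^-1/2 @ A_hat @ D^-1/2, via broadcasting.
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)         # ReLU activation

# Edgeless 2-node graph: only self-loops remain, so the layer reduces to ReLU(X @ W).
H = gcn_layer(np.zeros((2, 2)), np.array([[1.0, -1.0], [2.0, 3.0]]), np.eye(2))
```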

  • Research Article
  • Cited by 1
  • 10.1016/j.jneumeth.2024.110276
Cross-subject emotion recognition in brain-computer interface based on frequency band attention graph convolutional adversarial neural networks
  • Sep 3, 2024
  • Journal of Neuroscience Methods
  • Shinan Chen + 5 more


  • Conference Article
  • Cited by 32
  • 10.1145/2717764.2717766
Timing-Driven Placement Based on Dynamic Net-Weighting for Efficient Slack Histogram Compression
  • Mar 29, 2015
  • Chrystian Guth + 5 more

Timing-driven placement (TDP) finds new legal locations for standard cells so as to minimize timing violations while preserving placement quality. Although violations may arise from unmet setup or hold constraints, most TDP approaches ignore the latter. Besides, most techniques focus on reducing the worst negative slack and leave improvement of the total negative slack as a secondary goal. However, to successfully achieve timing closure, techniques must also reduce the total negative slack, which is known as slack histogram compression. This paper proposes a new Lagrangian Relaxation formulation for TDP to compress both late and early slack histograms. To solve the problem, we employ a discrete local search technique that uses the Lagrange multipliers as net-weights, which are dynamically updated using an accurate timing analyzer. To preserve placement quality, our technique uses a small fixed-size window that is anchored at the initial location of a cell. For the experimental evaluation of the proposed technique, we relied on the ICCAD 2014 TDP contest infrastructure. The results show that our technique significantly reduces the timing violations from an initial global placement. On average, late and early total negative slacks are improved by 85.03% and 42.72%, respectively, while the worst slacks are reduced by 71.55% and 34.40%. The overhead in wirelength is less than 0.1%.
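The multipliers-as-net-weights loop described above can be sketched as a subgradient step. This simplified form (the names, step size, and max-based floor) is an illustrative assumption, not the paper's exact formulation:

```python
def update_multipliers(lams, slacks, step=0.5):
    """One subgradient-style update of Lagrange multipliers used as
    net-weights: a net with negative slack has its multiplier grown in
    proportion to the violation; non-violating nets keep their current
    value, and every multiplier stays strictly positive."""
    return {net: max(1e-6, lam * (1.0 + step * max(0.0, -slacks[net])))
            for net, lam in lams.items()}

new_w = update_multipliers({"a": 1.0, "b": 1.0}, {"a": -0.4, "b": 0.2})
```

Each placement iteration would re-run the timer, refresh `slacks`, and feed `new_w` back in as the net-weights of the discrete local search.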

  • Research Article
  • Cited by 9
  • 10.1109/tcss.2022.3167856
An Autoregressive Graph Convolutional Long Short-Term Memory Hybrid Neural Network for Accurate Prediction of COVID-19 Cases
  • Apr 1, 2023
  • IEEE Transactions on Computational Social Systems
  • Myrsini Ntemi + 2 more

Efficient prediction of COVID-19 cases could prepare the healthcare system to accommodate COVID-19 cases in the forthcoming days and improve overall resource management. A hybrid model comprising an autoregressive filter, a graph convolutional neural network (GCN), and a long short-term memory neural network is proposed for COVID-19 case prediction in the USA. It accurately captures both the linearities and nonlinearities present in the time series. The GCN exploits an adjacency matrix that relies on Granger causality tests applied to historical COVID-19 cases for each state in the USA. By doing so, the latent information about the spread of the virus is captured efficiently and the prediction performance of the hybrid model is improved, revealing which states truly affect the others. The proposed method outperforms the state-of-the-art techniques.

  • Research Article
  • Cited by 5
  • 10.1145/2858793
Clock-Tree-Aware Incremental Timing-Driven Placement
  • Apr 19, 2016
  • ACM Transactions on Design Automation of Electronic Systems
  • Vinicius Livramento + 4 more

The increasing impact of interconnections on overall circuit performance makes timing-driven placement (TDP) a crucial step toward timing closure. Current TDP techniques improve critical paths but overlook the impact of register placement on clock tree quality. On the other hand, register placement techniques found in the literature mainly focus on power consumption, disregarding timing and routability. Indeed, postponing register placement may undermine the optimization achieved by TDP, since the wiring between sequential and combinational elements would be touched. This work proposes a new approach for an effective coupling between register placement and TDP that relies on two key aspects to handle sequential and combinational elements separately: only the registers in the critical paths are touched by TDP (in practice they represent a small percentage of the total number of registers), and the shortening of clock tree wirelength can be obtained with limited variation in signal wirelength and placement density. The approach consists of two steps: (1) incremental register placement guided by a virtual clock tree to reduce clock wiring capacitance while preserving signal wirelength and density, and (2) incremental TDP to minimize the total negative slack. For the first step, we propose a novel technique that combines clock-net contraction and register clustering forces to reduce the clock wirelength. For the second step, we propose a novel Lagrangian Relaxation formulation that minimizes total negative slack for both setup and hold timing violations. To solve the formulation, we propose a TDP technique using a novel discrete search that employs a Euclidean distance to define a proper neighborhood. For the experimental evaluation of the proposed approach, we relied on the ICCAD 2014 TDP contest infrastructure and compared our results with the best results obtained from that contest in terms of timing closure, clock tree compactness, signal wirelength, and density. Assuming a long displacement constraint, our technique achieves worst and total negative slack reductions of around 24% and 26%, respectively. In addition, our approach leads to 44% shorter clock tree wirelength with negligible impact on signal wirelength and placement density. In the face of such results, the proposed coupling seems a useful approach to handle the challenges faced by contemporary physical synthesis.

  • Research Article
  • 10.52783/jisem.v10i48s.9833
PBA Based Optimization for Slew Propagation
  • May 19, 2025
  • Journal of Information Systems Engineering and Management
  • Pragati Agarwal

As designs move into the nanometer regime, the metal interconnects between semiconductor devices begin to affect design performance through induced noise, so these effects must also be considered. Propagation of slew is an important aspect of the static timing analysis (STA) of a design: slew has a direct impact on the delay of a timing path and can make the design pass or fail timing closure. Where slews conflict, there are two approaches. The first is graph-based static timing analysis, in which worst-case delays are computed by taking the worst-case slews along the timing paths: the slow slews for setup analysis and the fast slews for hold analysis. The second is path-based static timing analysis, in which the actual delays are computed by taking the actual slews along the timing paths, for both setup and hold analysis; this per-path delay calculation takes some extra runtime. The propagation of slew at various slew-merging points in a design has been observed with the help of timing reports. When the path-based approach is applied to a design, a significant improvement in TNS (Total Negative Slack) and WNS (Worst Negative Slack) can be seen compared with the graph-based approach, and this improvement was observed consistently across various designs. The path-based approach also significantly reduces the area required to implement the design, as measured by the number of cells added during optimization, which is smaller in the path-based case. Thus, the path-based approach not only reduces TNS and WNS but also helps reduce congestion by decreasing the area used to implement the design.
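The difference between the two slew-propagation modes at a merge point can be illustrated with a toy helper; the function and the slew values are hypothetical, not from the paper:

```python
def merged_slew(arriving_slews, analysis="setup"):
    """Graph-based analysis (GBA) keeps one worst-case slew at a merge
    point for all downstream arcs: the slowest (largest) slew for setup,
    the fastest (smallest) for hold. Path-based analysis (PBA) would
    instead re-propagate each path's actual slew, removing this pessimism."""
    return max(arriving_slews) if analysis == "setup" else min(arriving_slews)

# Two paths converge with 80 ps and 30 ps slews: for setup, GBA times every
# downstream arc with the 80 ps slew, even along the 30 ps path.
gba_setup = merged_slew([0.080, 0.030], "setup")
gba_hold = merged_slew([0.080, 0.030], "hold")
```

The TNS/WNS gains reported above come from PBA replacing these merged worst-case slews with each path's actual slew during the per-path recomputation.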

  • Research Article
  • Cited by 3
  • 10.1155/2022/2276318
A Small Sample Recognition Model for Poisonous and Edible Mushrooms based on Graph Convolutional Neural Network.
  • Aug 12, 2022
  • Computational intelligence and neuroscience
  • Li Zhu + 3 more

The automatic identification of disease types in edible mushroom crops and poisonous crops is of great significance for improving crop yield and quality. Based on graph convolutional neural network theory, this paper constructs a graph convolutional network model for the identification of poisonous crops and edible fungi. By constructing 6 graph convolutional networks with different depths, the model uses the training mechanism of graph convolutional networks to analyze disease-identification results and automatically extract the disease characteristics of poisonous crops while mitigating the overfitting problem. During the simulation, first, the relevant PlantVillage dataset is used to obtain a pretrained model, and the parameters are adjusted to fit the dataset. The network framework is trained and parameterized with prior knowledge learned from large datasets, and the final predictions are synthesized by training multiple neural network models and combining their outputs via direct averaging and weighting. The experimental results show that the graph convolutional neural network model integrating multi-scale category relationships and dense links can use dense-connection technology to improve the representation and generalization ability of the model, with the accuracy rate generally increasing by 1%–10%. The average recognition rate is about 91%, which greatly promotes the ability to identify the diseases of poisonous crops.

  • Research Article
  • Cited by 32
  • 10.1016/j.jksuci.2021.10.001
IVaccine-Deep: Prediction of COVID-19 mRNA vaccine degradation using deep learning
  • Oct 13, 2021
  • Journal of King Saud University. Computer and information sciences
  • Amgad Muneer + 4 more

Messenger RNA (mRNA) has emerged as a critical global technology that requires global joint efforts from different entities to develop a COVID-19 vaccine. However, the chemical properties of RNA pose a challenge in utilizing mRNA as a vaccine candidate. For instance, the molecules are prone to degradation, which has a negative impact on the distribution of mRNA among patients. In addition, little is known of the degradation properties of individual RNA bases in a molecule. Therefore, this study aims to investigate whether hybrid deep learning can predict RNA degradation from RNA sequences. Two deep hybrid neural network models were proposed, namely GCN_GRU and GCN_CNN. The first model is based on graph convolutional neural networks (GCNs) and a gated recurrent unit (GRU). The second model is based on GCN and convolutional neural networks (CNNs). Both models were computed over the structural graph of the mRNA molecule. The experimental results showed that the GCN_GRU hybrid model outperforms the GCN_CNN model by a large margin at test time. The proposed hybrid models are validated with well-known evaluation measures. Among different deep neural networks, the GCN_GRU-based model achieved the best public and private MCRMSE test scores, 0.22614 and 0.34152, respectively. Finally, the pre-trained GCN_GRU model achieved the highest AUC score of 0.938. Such proven outperformance of GCNs indicates that modeling RNA molecules using graphs is critical in understanding molecule degradation mechanisms, which helps in minimizing the aforementioned issues. To show the importance of the proposed GCN_GRU hybrid model, in silico experiments have been conducted. The in silico results showed that our model pays local attention when predicting a given position's reactivity and exhibits interesting behavior on neighboring bases in the sequence.

  • Research Article
  • Cited by 8
  • 10.2174/1574893618666230316113621
Graph Convolutional Neural Network with Multi-Layer Attention Mechanism for Predicting Potential Microbe-Disease Associations
  • Jul 1, 2023
  • Current Bioinformatics
  • Lei Wang + 5 more

Background: Human microbial communities play an important role in some physiological processes of human beings. Nevertheless, the identification of microbe-disease associations through biological experiments is costly and time-consuming. Hence, the development of computational models is meaningful to infer latent associations between microbes and diseases. Aims: In this manuscript, we aim to design a computational model based on the Graph Convolutional Neural Network with Multi-layer Attention mechanism, called GCNMA, to infer latent microbe-disease associations. Objective: This study aims to propose a novel computational model based on the Graph Convolutional Neural Network with Multi-layer Attention mechanism, called GCNMA, to detect potential microbe-disease associations. Methods: In GCNMA, the known microbe-disease association network was first integrated with the microbe-microbe similarity network and the disease-disease similarity network into a heterogeneous network. Subsequently, the graph convolutional neural network was implemented to extract embedding features of each layer for microbes and diseases, respectively. Thereafter, these embedding features of each layer were fused together by adopting the multi-layer attention mechanism derived from the graph convolutional neural network, based on which a bilinear decoder was further utilized to infer possible associations between microbes and diseases. Results: Finally, to evaluate the predictive ability of GCNMA, intensive experiments were done and results were compared with eight state-of-the-art methods, which demonstrated that under the frameworks of both 2-fold and 5-fold cross-validation, GCNMA can achieve satisfactory prediction performance on different databases, including HMDAD and Disbiome, simultaneously. 
Moreover, case studies on three kinds of common diseases, namely asthma, type 2 diabetes, and inflammatory bowel disease, verified the effectiveness of GCNMA as well. Conclusion: GCNMA outperformed 8 state-of-the-art competitive methods on the benchmarks of both HMDAD and Disbiome.

  • Research Article
  • 10.1002/acs.70025
Enhanced Fault Detection in Induction Motors via Complex‐Value Spatio‐Temporal Graph Convolutional Neural Networks and High Level Target Navigation Pigeon Inspired Optimization
  • Mar 10, 2026
  • International Journal of Adaptive Control and Signal Processing
  • M S Saranya + 1 more

Induction Motors (IMs) are ideal for a wide variety of industrial applications because they require little maintenance and run with strength and durability. Fault detection in IMs is critical for ensuring industrial system dependability and preventing unexpected downtime and expensive repairs. Traditional fault detection approaches frequently struggle to capture the complex spatial and temporal patterns inherent in motor fault signals, which limits their efficiency. To address these challenges, this manuscript proposes a novel approach that integrates Complex-Value Spatio-Temporal Graph Convolutional Neural Networks (CVSTGCNN) with High-level Target Navigation Pigeon Inspired Optimization (HTNPIO) for optimized fault detection and classification, named CVSTGCNN-HTNPIO. The CVSTGCNN effectively models the intricate spatio-temporal dependencies in induction motor fault data, while HTNPIO optimizes the network parameters to enhance detection accuracy and efficiency. The major goal of the proposed technique is to differentiate between electrical issues that arise in IMs under both faulty and healthy circumstances. When tested on a benchmark induction motor fault dataset, the proposed CVSTGCNN-HTNPIO method achieves a high classification accuracy of 99.8% and reduces the root mean square error (RMSE) to 0.02%, outperforming existing techniques such as Spatial-Temporal Recurrent Graph Neural Networks (STRGNN), Multi-Parallel Graph Convolutional Network (MGCN), Particle Swarm Optimization (PSO), Back Propagation Neural Networks (BPNN), and Artificial Neural Networks (ANN). These findings illustrate the method's enhanced capacity to identify various fault types with more precision, allowing for more reliable and timely motor fault diagnostics. This development has the potential to greatly improve motor operational safety, lower maintenance costs, and increase equipment longevity.

  • Book Chapter
  • Cited by 2
  • 10.1007/978-3-030-96737-6_12
Anomaly Detection on Static and Dynamic Graphs Using Graph Convolutional Neural Networks
  • Jan 1, 2022
  • Amani Abou Rida + 2 more

Anomalies represent rare observations that vary significantly from others. Anomaly detection is intended to discover these rare observations and has the power to prevent detrimental events, such as financial fraud, network intrusion, and social spam. However, conventional anomaly detection methods cannot handle this problem well because of the complexity of graph data (e.g., irregular structures, relational dependencies, node/edge types/attributes/directions/multiplicities/weights, large scale, etc.) (Ma X, Wu J, Xue S, Yang J, Zhou C, Sheng QZ, Xiong H, Akoglu L. IEEE Trans Knowl Data Eng, 2021 [1]). Thanks to the rise of deep learning in overcoming these limitations, graph anomaly detection with deep learning has gained increasing attention from many scientists recently. However, while deep learning can capture unseen patterns of multi-dimensional Euclidean data, there is a huge number of applications where data are represented in the form of graphs. Graphs have been used to represent structural relational information, which raises the graph anomaly detection problem: identifying anomalous graph objects (i.e., vertices, edges, and sub-graphs, as well as change detection). These graphs can be constructed as a static graph or a dynamic graph based on the availability of timestamps. Recent years have seen huge efforts on static graphs, among which the Graph Convolutional Network (GCN) has emerged as a useful class of models. A challenge today is to detect anomalies in dynamic structures. In this chapter, we aim at providing methods used for detecting anomalies in static and dynamic graphs using graph analysis, graph embedding, and graph convolutional neural networks. For static graphs, we categorize these methods according to plain and attributed static graphs. For dynamic graphs, we categorize existing methods according to the type of anomalies that they can detect. Moreover, we focus on the challenges in this research area and discuss the strengths and weaknesses of various methods in each category. Finally, we provide open challenges for graph anomaly detection using graph convolutional neural networks on dynamic graphs.
Keywords: Anomaly detection, Graph anomaly detection, Graph analysis, Graph embedding, Graph neural network, Dynamic graphs, Static graphs

  • Research Article
  • Cited by 1
  • 10.46300/9106.2021.15.97
Multi-type Parameter Prediction of Traffic Flow Based on Time-space Attention Graph Convolutional Network
  • Aug 11, 2021
  • International Journal of Circuits, Systems and Signal Processing
  • Guoxing Zhang + 2 more

Graph convolutional neural networks are more and more widely used in traffic-flow parameter prediction tasks by virtue of their excellent non-Euclidean spatial feature-extraction capabilities. However, most graph convolutional neural networks are only used to predict one type of traffic-flow parameter, which means that a proposed graph convolutional neural network may only be effective for specific parameters of specific travel modes. To improve the universality of graph convolutional neural networks, we embed a time feature and a spatio-temporal attention layer and propose a spatio-temporal attention graph convolutional neural network based on the neural attention mechanism. Through experiments on passenger-flow data and vehicle-speed data from two different travel modes (Hangzhou Metro data and California highway data), it is verified that the proposed spatio-temporal attention graph convolutional neural network can be used to predict passenger flow and vehicle speed simultaneously. Meanwhile, the error distribution range of the proposed model is the smallest, and the overall level of the prediction results is more accurate.

  • Conference Article
  • Cited by 8
  • 10.23919/date56975.2023.10137076
FPGA Acceleration of GCN in Light of the Symmetry of Graph Adjacency Matrix
  • Apr 1, 2023
  • Gopikrishnan Raveendran Nair + 5 more

Graph Convolutional Neural Networks (GCNs) are widely used to process large-scale graph data. Different from deep neural networks (DNNs), GCNs are sparse, irregular, and unstructured, posing unique challenges to hardware acceleration with regular processing elements (PEs). In particular, the adjacency matrix of a GCN is extremely sparse, leading to frequent but irregular memory access, low spatial/temporal data locality and poor data reuse. Furthermore, a realistic graph usually consists of unstructured data (e.g., unbalanced distributions), creating significantly different processing times and imbalanced workload for each node in GCN acceleration. To overcome these challenges, we propose an end-to-end hardware-software co-design to accelerate GCNs on resource-constrained FPGAs with the features including: (1) A custom dataflow that leverages symmetry along the diagonal of the adjacency matrix to accelerate feature aggregation for undirected graphs. We utilize either the upper or the lower triangular matrix of the adjacency matrix to perform aggregation in GCN to improve data reuse. (2) Unified compute cores for both aggregation and transform phases, with full support to the symmetry-based dataflow. These cores can be dynamically reconfigured to the systolic mode for transformation or as individual accumulators for aggregation in GCN processing. (3) Preprocessing of the graph in software to rearrange the edges and features to match the custom dataflow. This step improves the regularity in memory access and data reuse in the aggregation phase. Moreover, we quantize the GCN precision from FP32 to INT8 to reduce the memory footprint without losing the inference accuracy. We implement our accelerator design on an Intel Stratix 10 MX FPGA board with HBM2, and demonstrate a 1.3× to 110.5× improvement in end-to-end GCN latency as compared to the state-of-the-art FPGA implementations, on the graph datasets of Cora, Pubmed, Citeseer, and Reddit.
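The symmetry-based dataflow in item (1) amounts to storing only one triangular half of the adjacency matrix and recovering the full aggregation with a transposed pass. A dense NumPy sketch of the underlying identity is given below; the decomposition is a mathematical fact, while the paper's actual implementation is a sparse FPGA dataflow, so this is only an illustration:

```python
import numpy as np

def aggregate_symmetric(U, X):
    """Feature aggregation A @ X for an undirected graph using only the
    upper-triangular half U of its symmetric adjacency matrix A
    (U = np.triu(A), diagonal included). Since A = U + U.T - diag(U):
    A @ X = U @ X + U.T @ X - diag(U) * X."""
    return U @ X + U.T @ X - np.diag(U)[:, None] * X

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 0.0, 3.0],
              [0.0, 3.0, 1.0]])   # symmetric adjacency (self-loop weights on the diagonal)
X = np.array([[1.0], [2.0], [3.0]])
agg = aggregate_symmetric(np.triu(A), X)
```

Traversing `U` once forward and once transposed is what lets the accelerator reuse each stored edge for both directions of an undirected edge.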

  • Research Article
  • Cited by 8
  • 10.1109/tc.2022.3207127
Multi-node Acceleration for Large-scale GCNs
  • Jan 1, 2022
  • IEEE Transactions on Computers
  • Gongjian Sun + 7 more

Limited by memory capacity and compute power, single-node graph convolutional neural network (GCN) accelerators cannot complete the execution of GCNs within a reasonable amount of time, due to the explosive size of graphs nowadays. Thus, large-scale GCNs call for a multi-node acceleration system (MultiAccSys), like TPU-Pod for large-scale neural networks. In this work, we aim to scale up single-node GCN accelerators to accelerate GCNs on large-scale graphs. We first identify the communication pattern and challenges of multi-node acceleration for GCNs on large-scale graphs. We observe that (1) coarse-grained communication patterns exist in the execution of GCNs in MultiAccSys, which introduces a massive amount of redundant network transmissions and off-chip memory accesses; and (2) overall, the acceleration of GCNs in MultiAccSys is bandwidth-bound and latency-tolerant. Guided by these two observations, we then propose MultiGCN, the first MultiAccSys for large-scale GCNs that trades network latency for network bandwidth. Specifically, by leveraging the network latency tolerance, we first propose a topology-aware multicast mechanism with a one-put-per-multicast message-passing model to reduce transmissions and alleviate network bandwidth requirements. Second, we introduce a scatter-based round execution mechanism which cooperates with the multicast mechanism and reduces redundant off-chip memory accesses. Compared to the baseline MultiAccSys, MultiGCN achieves a 4× to 12× speedup using only 28% to 68% of the energy, while reducing transmissions by 32% and off-chip memory accesses by 73% on average. It not only achieves a 2.5× to 8× speedup over the state-of-the-art multi-GPU solution, but also scales to large-scale graphs, as opposed to single-node GCN accelerators.

  • Conference Article
  • Cited by 17
  • 10.1109/icc42927.2021.9500687
GCRINT: Network Traffic Imputation Using Graph Convolutional Recurrent Neural Network
  • Jun 1, 2021
  • Van An Le + 5 more

Missing values appear in most multivariate time series, especially in the monitored network traffic data due to high measurement cost and unavoidable loss. In the networking fields, missing data prevents advanced analysis and downgrades downstream applications such as traffic engineering and anomaly detection. Despite the great potential, existing imputation approaches based on tensor decomposition and deep learning techniques have shown limitations in addressing missing values of traffic data due to its dynamic behavior. In this paper, we propose Graph Convolutional Recurrent Neural Network for Imputing Network Traffic (GCRINT), a combination between Recurrent Neural Network (RNN) and Graph Convolutional Neural Network, for filling the missing values of network traffic data. We use a bidirectional Long Short-Term Memory network and Graph Neural Network to efficiently learn the spatial-temporal correlations in partially observed data. We conducted extensive experiments to evaluate our model by using two different datasets and various missing scenarios. The experiment results show that GCRINT achieves significantly low imputation errors and reduces the error by 35% compared to the state-of-the-art methods. GCRINT also helps to obtain a stable performance in the traffic engineering problem.
