In recent years, machine learning models have been increasingly used to detect security vulnerabilities in software, owing to their high performance and lower false positive rates compared to traditional program analysis tools. However, these models often cannot explain why a program has been flagged as vulnerable, leaving developers with little reasoning to work with. We present a new method that identifies not only the presence of vulnerabilities in a program but also the specific type of error, considering the whole program rather than individual functions in isolation. Our approach applies graph neural networks to inter-procedural value flow graphs, with instruction embeddings derived from the LLVM Intermediate Representation, to predict a vulnerability class. By mapping these classes to the Common Weakness Enumeration list, we give a clear indication of the security issue found, saving developers the time they would otherwise spend interpreting a bare vulnerable/non-vulnerable label. To evaluate our method’s effectiveness, we used two datasets: one containing memory-related errors (out-of-bounds array accesses), and the other covering a range of vulnerabilities from the Juliet Test Suite, including buffer and integer overflows, format string vulnerabilities, and invalid frees. Our model, implemented in PyTorch using the Gated Graph Sequence Neural Network from PyTorch Geometric, achieved precisions of 96.35% and 91.59% on the two datasets, respectively. Compared to common static analysis tools, our method produced roughly half as many false positives while identifying approximately three times as many vulnerable samples. Compared to recent machine learning systems, it achieves similar performance while offering the added benefit of distinguishing between vulnerability classes. Overall, our approach represents a meaningful improvement in software vulnerability detection, providing developers with valuable insights to better secure their code.
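
To make the architecture concrete, the following is a minimal sketch of the kind of model the abstract describes, assuming node features are pre-computed LLVM instruction embeddings attached to the nodes of an inter-procedural value flow graph. The names `embedding_dim` and `num_classes`, the layer count, and the use of mean pooling over node states are illustrative assumptions, not details taken from the paper; only the use of PyTorch and PyTorch Geometric's Gated Graph Sequence Neural Network layer comes from the text.

```python
import torch
from torch_geometric.nn import GatedGraphConv, global_mean_pool

class VulnerabilityClassifier(torch.nn.Module):
    """Sketch: GGNN over a value flow graph, one CWE-style class per program."""

    def __init__(self, embedding_dim: int, num_classes: int, num_layers: int = 5):
        super().__init__()
        # GatedGraphConv implements Gated Graph Sequence Neural Network
        # propagation (Li et al., 2016); num_layers is an assumed value.
        self.ggnn = GatedGraphConv(out_channels=embedding_dim, num_layers=num_layers)
        self.classifier = torch.nn.Linear(embedding_dim, num_classes)

    def forward(self, x, edge_index, batch):
        # x:          [num_nodes, embedding_dim] LLVM instruction embeddings
        # edge_index: [2, num_edges] value-flow edges between instructions
        # batch:      [num_nodes] graph membership, so whole programs pool separately
        h = self.ggnn(x, edge_index)
        # Mean pooling (an assumption) collapses node states into one vector per program.
        h = global_mean_pool(h, batch)
        # Logits over vulnerability classes, which the method maps to CWE entries.
        return self.classifier(h)
```

In this sketch a single forward pass scores an entire program graph at once, which matches the whole-program (rather than per-function) framing above; the predicted class index would then be looked up in a class-to-CWE mapping.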