This paper introduces a novel system for information extraction from visually rich documents (VRDs) using a weighted graph representation. The proposed method aims to improve information extraction performance by capturing the relationships between VRD components. The VRD is modeled as a weighted graph in which nodes encode the visual, textual, and spatial features of text regions, and edges represent the relationships between neighboring text regions. Information extraction from VRDs is then performed as a node classification task by feeding the VRD graphs into a graph convolutional network. The approach is evaluated on diverse documents, including invoices and receipts, and achieves results that match or surpass strong baselines.
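
To make the described pipeline concrete, the following is a minimal sketch (not the authors' implementation) of how a weighted document graph and a node-classifying graph convolution might be assembled. The feature dimension, label set, neighbor count, and edge-weighting scheme are all illustrative assumptions.

```python
# Minimal sketch, assuming each text region carries a fused textual/visual/spatial
# feature vector and a bounding box. All names (FEATURE_DIM, NUM_CLASSES,
# GraphNodeClassifier, etc.) are hypothetical, not from the paper.
import torch
import torch.nn as nn

FEATURE_DIM = 64   # assumed size of the per-region feature vector
NUM_CLASSES = 5    # assumed label set, e.g. company, date, address, total, other


def build_weighted_adjacency(boxes: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Link each text region to its k nearest neighbors; the edge weight decays
    with the distance between box centers (one plausible weighting choice)."""
    centers = boxes.view(-1, 2, 2).mean(dim=1)              # (N, 2) box centers
    dists = torch.cdist(centers, centers)                   # (N, N) pairwise distances
    adj = torch.zeros_like(dists)
    knn = dists.topk(k + 1, largest=False).indices[:, 1:]   # skip self-distance
    for i, neighbors in enumerate(knn):
        adj[i, neighbors] = torch.exp(-dists[i, neighbors] / dists.mean())
    adj = torch.maximum(adj, adj.T)                          # symmetrize
    adj = adj + torch.eye(adj.size(0))                       # add self-loops
    deg_inv_sqrt = adj.sum(dim=1).rsqrt().diag()
    return deg_inv_sqrt @ adj @ deg_inv_sqrt                 # normalized adjacency


class GraphNodeClassifier(nn.Module):
    """Two-layer graph convolution: each layer aggregates neighbor features via
    the weighted adjacency, producing per-node (per-text-region) class logits."""

    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = torch.relu(adj @ self.w1(x))   # propagate and transform node features
        return adj @ self.w2(h)            # per-node field-type logits


# Toy usage: 10 text regions with (x1, y1, x2, y2) boxes and fused features.
boxes = torch.rand(10, 4) * 100
features = torch.rand(10, FEATURE_DIM)     # stand-in for textual+visual+spatial features
adj = build_weighted_adjacency(boxes)
model = GraphNodeClassifier(FEATURE_DIM, 32, NUM_CLASSES)
logits = model(features, adj)              # (10, NUM_CLASSES) predictions per region
```

In this sketch, treating extraction as node classification means every text region receives a field label directly from the network output; the graph structure lets a region's prediction depend on its spatial neighbors, which is the core intuition stated above.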