Federated learning (FL) enables multiple devices to collaboratively train a shared machine learning (ML) model while keeping all local data private, making it a crucial enabler of artificial intelligence (AI) at the edge in Industrial Internet of Things (IIoT) scenarios. Distributed FL (DFL) based on device-to-device (D2D) communications avoids the single point of failure and the scalability issues of centralized FL, but is subject to the limited communication resources of D2D links. It is therefore crucial to reduce the volume of FL model data transmitted between devices. In this article, we propose a quantization-based DFL (Q-DFL) mechanism for D2D networks and prove its convergence. Q-DFL consists of two phases: 1) in phase I, each IIoT device trains a local model with the stochastic gradient descent (SGD) algorithm and exchanges the quantized model parameters with its neighboring nodes; and 2) in phase II, a quantized consensus mechanism ensures that the local models converge to the same global model. We also propose an adaptive stopping mechanism and a synchronization protocol to realize the transition from phase I to phase II. Simulation results reveal that with Q-DFL, a 1-bit quantizer can be employed without impairing model convergence, at the cost of only a slight accuracy reduction, which yields significant savings in transmission bandwidth. Further simulations of Q-DFL on the MobileNet model with different quantization bit levels reveal the performance tradeoff among system information flow consumption, system time delay, and system energy cost.
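To make the two ingredients named above concrete, the following is a minimal sketch of a generic 1-bit quantizer (sign plus a single shared scale) and one consensus-averaging step over dequantized neighbor models. This is an illustrative assumption, not the exact quantizer, mixing rule, or stopping criterion of Q-DFL; the function names and the `mix` parameter are hypothetical.

```python
import numpy as np

def one_bit_quantize(w):
    """Reduce each parameter to 1 bit (its sign); a single shared
    scale (the mean absolute value) preserves overall magnitude.
    A common 1-bit scheme, assumed here for illustration only."""
    scale = float(np.mean(np.abs(w)))
    return scale, np.sign(w).astype(np.int8)  # 1 bit/entry + one float

def dequantize(scale, signs):
    """Reconstruct an approximate parameter vector from the payload."""
    return scale * signs

def consensus_step(local_w, neighbor_payloads, mix=0.5):
    """One gossip-style consensus update: blend the local model with
    the average of the dequantized models received from neighbors
    (hypothetical mixing rule)."""
    neighbor_mean = np.mean(
        [dequantize(s, q) for s, q in neighbor_payloads], axis=0)
    return (1 - mix) * local_w + mix * neighbor_mean

# Example: one device quantizes, a neighbor averages it in.
w = np.array([0.5, -1.5, 1.0])
payload = one_bit_quantize(w)          # what actually crosses the D2D link
updated = consensus_step(w, [payload])
```

Repeating such exchange-and-average rounds drives the local models toward a common model while each D2D transmission carries roughly 1 bit per parameter instead of 32.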