Abstract

The advent of neural network (NN) based deep learning, especially the recent development of automatic network design, has brought unprecedented performance gains at heavy computational cost. On the other hand, in order to generate a new consensus block, Proof of Work (PoW) based blockchain systems routinely perform a huge amount of computation that serves no practical purpose other than solving a difficult cryptographic hash puzzle. In this study, we propose a new consensus mechanism, Proof of Learning (PoLe), which directs the computation spent on block consensus toward the optimization of neural networks. In our design, the training and testing data are released to the entire blockchain network, and the consensus nodes train NN models on the data, which serves as the proof of learning. As a core component of PoLe, we design a secure mapping layer (SML), which can be straightforwardly implemented as a linear NN layer, to prevent consensus nodes from cheating. When consensus is achieved on the blockchain network, a new block is appended to the blockchain. We experimentally compare the PoLe protocol with PoW and show that PoLe can achieve a more stable block generation rate, which leads to more efficient transaction processing. Experimental evaluation also shows that PoLe maintains this stable block generation rate without significantly sacrificing training performance.
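To make the SML idea concrete, below is a minimal sketch, assuming (beyond what the abstract states) that the SML is a frozen linear layer whose weights are derived deterministically from public block data such as the previous block hash, so that every node reconstructs the same mapping and cannot reuse models trained before the task was announced. The class name SecureMappingLayer, the use of PyTorch, and the hash-seeded construction are illustrative assumptions, not the authors' implementation.

import hashlib
import torch
import torch.nn as nn

class SecureMappingLayer(nn.Module):
    """Hypothetical SML: a fixed (non-trainable) linear mapping seeded from block data."""

    def __init__(self, in_features: int, out_features: int, block_hash: str):
        super().__init__()
        # Derive a deterministic seed from the public block hash so all
        # consensus nodes independently build the identical mapping.
        seed = int.from_bytes(hashlib.sha256(block_hash.encode()).digest()[:8], "big")
        gen = torch.Generator().manual_seed(seed)
        weight = torch.randn(out_features, in_features, generator=gen)
        # Registered as a buffer, not a parameter: the mapping is part of the
        # consensus task itself and is never updated during training.
        self.register_buffer("weight", weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight.t()

# Usage sketch: prepend the SML to an ordinary trainable model.
sml = SecureMappingLayer(in_features=784, out_features=256, block_hash="prev_block_hash")
model = nn.Sequential(sml, nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))

Because the mapping changes with each block's public data, any model a node trains is tied to the current consensus round, which is one plausible way a linear layer could serve the anti-cheating role described above.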
