Abstract

Memristors have been widely studied for the hardware implementation of neural networks owing to their nanometer size, low power consumption, fast switching speed, and functional similarity to biological synapses. However, it is difficult to realize memristor-based deep neural networks because general structures such as LeNet and FCN contain a large number of network parameters. To mitigate this problem, this paper designs a memristor-based sparse compact convolutional neural network (MSCCNN) that reduces the number of memristors required. We first replace the fully connected layers with an average pooling layer and a $1\times 1$ convolutional layer. We then replace traditional convolutions with depthwise separable convolutions to further reduce the number of parameters. Furthermore, a network pruning method removes redundant memristor crossbars from the depthwise separable convolutional layers, yielding a more compact network structure while leaving the recognition accuracy unchanged. Simulation results show that the designed model achieves superior accuracy while greatly reducing the scale of the hardware circuit. Compared with traditional memristor-based CNN designs, the proposed model has a smaller area and lower power consumption.
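
The paper targets memristor crossbar hardware, but the two parameter-reduction ideas from the abstract can be illustrated in software. Below is a minimal PyTorch sketch, not the authors' implementation: a depthwise separable convolution block (depthwise convolution followed by a pointwise $1\times 1$ convolution) and a classifier head that replaces fully connected layers with a $1\times 1$ convolution plus global average pooling. All layer widths and the MNIST-like input shape are assumptions chosen for demonstration.

```python
# Illustrative sketch only: compact-CNN building blocks matching the
# abstract's description, with assumed layer sizes (not from the paper).
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a pointwise (1x1) convolution.

    Replaces a standard k x k convolution, cutting the parameter count
    from k*k*C_in*C_out down to k*k*C_in + C_in*C_out.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_ch, in_ch, kernel_size, padding=kernel_size // 2,
            groups=in_ch, bias=False)  # one spatial filter per input channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


class CompactCNN(nn.Module):
    """Fully connected layers replaced by a 1x1 conv + average pooling."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            DepthwiseSeparableConv(32, 64), nn.ReLU(),
            nn.MaxPool2d(2),
            DepthwiseSeparableConv(64, 128), nn.ReLU(),
        )
        # Classifier head: a 1x1 convolution maps channels to class scores,
        # then global average pooling collapses the spatial dimensions,
        # so no dense (fully connected) weight matrix is needed.
        self.classifier = nn.Conv2d(128, num_classes, 1)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        x = self.features(x)
        x = self.pool(self.classifier(x))
        return x.flatten(1)  # (batch, num_classes)


if __name__ == "__main__":
    model = CompactCNN()
    logits = model(torch.randn(2, 1, 28, 28))  # assumed MNIST-sized input
    print(logits.shape)  # torch.Size([2, 10])
```

For a sense of the savings: a standard $3\times 3$ convolution from 64 to 128 channels uses $3\cdot 3\cdot 64\cdot 128 = 73{,}728$ weights, while the depthwise separable version uses $3\cdot 3\cdot 64 + 64\cdot 128 = 8{,}768$, roughly an 8x reduction, which translates directly into fewer memristor crossbar cells in the hardware mapping the paper describes.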
