Deconvolutional neural networks (DeCNNs), such as fully convolutional networks (FCNs) and generative adversarial networks (GANs), have shown great potential in various vision tasks. Convolution and deconvolution, the two major operations of DeCNNs, both require real-time hardware acceleration. However, some previous designs for deconvolution require large memory to store overlapped partial results, while others suffer from computation imbalance that leaves hardware resources underutilized. In this article, we propose an efficient method to convert deconvolutions into convolutions, which balances the computation so that all processing elements are fully utilized. Based on the fast FIR algorithm, a low-complexity reconfigurable conv–deconv unit (RCU) is designed, which supports various types of convolutions and deconvolutions. By exploiting the computing characteristics of RCUs, a computation-balance scheme is developed to eliminate the large memory otherwise required for overlapped results. In addition, a fast convolution architecture for deconvolutional network acceleration (F-DNA) is proposed, whose dataflow improves computation efficiency through input data reuse. The architecture is implemented on a Xilinx Virtex-UltraScale FPGA for two typical DeCNNs, DCGAN and FSRCNN. Implementation results show that the proposed design significantly outperforms existing works, particularly in computation efficiency and memory requirements.
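As background for the deconv-to-conv conversion mentioned above, the sketch below illustrates the standard equivalence it builds on: a strided transposed convolution (deconvolution) produces the same output as inserting zeros between input samples and applying an ordinary convolution. This is a generic 1-D illustration with assumed helper names (`transposed_conv1d`, `zero_insert_conv1d`), not the paper's RCU method, which additionally avoids wasting multiplications on the inserted zeros.

```python
import numpy as np

def transposed_conv1d(x, k, s):
    # Direct transposed convolution with stride s:
    # each input sample scatters a scaled copy of the kernel
    # into the output at position s*i.
    n, m = len(x), len(k)
    y = np.zeros(s * (n - 1) + m)
    for i in range(n):
        y[s * i : s * i + m] += x[i] * k
    return y

def zero_insert_conv1d(x, k, s):
    # Equivalent formulation as an ordinary convolution:
    # insert s-1 zeros between input samples, then do a full convolution.
    z = np.zeros(s * (len(x) - 1) + 1)
    z[::s] = x
    return np.convolve(z, k)

x = np.array([1.0, 2.0, 3.0])
k = np.array([1.0, 0.5, 0.25])
assert np.allclose(transposed_conv1d(x, k, 2),
                   zero_insert_conv1d(x, k, 2))
```

Because the inserted zeros contribute nothing, the convolution can be split by output phase into stride-many independent sub-convolutions of equal size, which is what enables the balanced workload across processing elements.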