Deep residual networks (ResNets) have shown remarkable performance in image recognition tasks, owing to the short connections between layers close to the input and those close to the output. Densely connected convolutional networks (DenseNets) have further improved recognition performance through dense feature reuse. To improve on the residual units of ResNet and the feature reuse of DenseNet, we propose a simple and effective convolutional network architecture, named double reuses based residual network (DRRNet). DRRNet improves the residual unit of ResNet by adding feature-reuse connections that combine all feature maps from the convolutional layers within a unit to produce the residual, and it uses a residual-reuse path outside the units that gathers all residuals as the final feature maps for classification. The residual learning in DRRNet alleviates the vanishing-gradient problem. The double reuse, comprising inner-unit feature reuse and outer-unit residual reuse, reduces computational cost compared with the dense connections of DenseNet while further strengthening forward feature propagation. DRRNet is evaluated on three object recognition benchmark datasets and one object detection dataset. Compared with the state of the art, DRRNet achieves a good balance between classification accuracy and computational cost, and the best detection performance.
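The double-reuse connectivity described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the convolutional layers are replaced by dense (matrix) maps for brevity, and the function and variable names (`drr_unit`, `proj`, `final_feats`) are hypothetical. Inside a unit, each layer sees the concatenation of all earlier feature maps (inner-unit feature reuse) and the concatenated maps are projected to form the residual; outside the units, all residuals are gathered as the final classification features (outer-unit residual reuse).

```python
import numpy as np

rng = np.random.default_rng(0)

def drr_unit(x, weights, proj):
    """Hypothetical sketch of a DRRNet unit: conv layers are stood in for by
    dense maps. Each layer consumes the concatenation of all earlier feature
    maps in the unit; the full concatenation is projected into the residual."""
    feats = [x]
    for W in weights:
        # inner-unit feature reuse: concatenate every earlier map before the layer
        h = np.maximum(np.concatenate(feats, axis=-1) @ W, 0.0)  # ReLU stand-in
        feats.append(h)
    residual = np.concatenate(feats, axis=-1) @ proj  # combine all maps -> residual
    return x + residual, residual  # residual learning; expose residual for outer reuse

d = 8                                   # feature width (illustrative)
x = rng.normal(size=(4, d))             # batch of 4 inputs
residuals = []
for _ in range(3):                      # three stacked units
    Ws, dim = [], d
    for _ in range(2):                  # two "conv layers" per unit
        Ws.append(rng.normal(size=(dim, d)) * 0.1)
        dim += d                        # concatenation grows the input width
    proj = rng.normal(size=(dim, d)) * 0.1
    x, r = drr_unit(x, Ws, proj)
    residuals.append(r)

# Outer-unit residual reuse: all residuals concatenated as the final features.
final_feats = np.concatenate(residuals, axis=-1)
print(final_feats.shape)  # (4, 24): 3 units x width 8
```

Only the residuals are concatenated across units, so the reuse path grows linearly with depth rather than with every layer output as in DenseNet, which is the source of the computational saving claimed above.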