The depth of a deep neural network (DNN) refers to the number of hidden layers between the input and output layers of an artificial neural network. Once the network architecture is fixed, depth largely determines the computational cost (parameters and floating-point operations) and the expressiveness of the model. In this study, we experimentally investigate the effectiveness of using neural ordinary differential equations (NODEs) as a component that adds depth to relatively shallow networks in a continuous manner, rather than stacking more layers (discrete depth), and we find that this yields improvements with fewer parameters. Experiments are conducted on classic DNNs, namely residual networks. Moreover, we construct infinitely deep neural networks with flexible complexity based on NODEs, enabling the system to adjust its complexity during training. Building on the richer hidden space provided by adaptive-step DNNs, the adaptive-step ResNet with NODE (ResODE) achieves better convergence and accuracy than standard networks, and these improvements are consistently observed on popular benchmarks.
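For context, the construction rests on the standard correspondence from the NODE literature: a residual block can be read as one explicit Euler step of an ordinary differential equation, and a NODE block integrates the same learned dynamics continuously, so "depth" becomes the integration interval rather than a layer count. The equations below use the usual notation (h for the hidden state, f for the learned dynamics with parameters \theta) as an illustrative sketch, not the authors' exact formulation.

\[
\text{Residual block (discrete depth):} \quad h_{t+1} = h_t + f(h_t, \theta_t),
\]
\[
\text{NODE block (continuous depth):} \quad \frac{dh(t)}{dt} = f(h(t), t, \theta), \qquad h(T) = h(0) + \int_0^T f(h(t), t, \theta)\,dt.
\]

Under this view, an adaptive-step ODE solver can spend more or fewer function evaluations on the integral, which is what allows the effective depth, and hence the complexity, to adapt during training.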