Abstract

Three-dimensional object detection is crucial for autonomous driving to understand the driving environment. Since the pooling operation causes information loss in a standard CNN, we designed a wavelet-multiresolution-analysis-based 3D object detection network without any pooling operation. Additionally, instead of using a single filter as in standard convolution, we used the lower-frequency and higher-frequency wavelet coefficients as filters. These filters capture more relevant features than a single filter and enlarge the receptive field. The model comprises a discrete wavelet transform (DWT) and an inverse wavelet transform (IWT) with skip connections between the contracting and expanding layers to encourage feature reuse. The IWT enriches the feature representation by fully recovering the details lost during downsampling. Element-wise summation was used for the skip connections to decrease the computational burden. We trained the model with the Haar and Daubechies (Db4) wavelets. The two-level wavelet decomposition results show that a lightweight model can be built without a significant loss in performance. Experimental results on the KITTI BEV and 3D evaluation benchmarks show that our model outperforms the PointPillars-based model by up to 14% while reducing the number of trainable parameters.
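
The following is a minimal, illustrative sketch of the core idea described above: a Haar DWT layer that halves the spatial resolution in place of pooling, and an IWT layer that exactly restores it, combined with an element-wise-sum skip connection. The framework (PyTorch), the function names dwt/iwt, and the tensor shapes are assumptions made here for illustration; this is not the authors' implementation, which also covers Daubechies (Db4) filters, two-level decomposition, and the full detection pipeline.

# Illustrative sketch only; PyTorch and all names/shapes are assumptions.
import torch
import torch.nn.functional as F

def haar_kernels(channels: int) -> torch.Tensor:
    # Orthonormal 2x2 Haar filters (LL, LH, HL, HH), repeated once per channel.
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    k = torch.stack([ll, lh, hl, hh]).unsqueeze(1)   # (4, 1, 2, 2)
    return k.repeat(channels, 1, 1, 1)               # (4*C, 1, 2, 2)

def dwt(x: torch.Tensor) -> torch.Tensor:
    # Downsample without pooling: each channel is split into one low-frequency
    # (LL) and three high-frequency (LH, HL, HH) subbands at half resolution.
    c = x.shape[1]
    return F.conv2d(x, haar_kernels(c).to(x), stride=2, groups=c)  # (N, 4C, H/2, W/2)

def iwt(y: torch.Tensor) -> torch.Tensor:
    # Exactly invert dwt(), recovering the detail lost by downsampling.
    c = y.shape[1] // 4
    return F.conv_transpose2d(y, haar_kernels(c).to(y), stride=2, groups=c)  # (N, C, H, W)

x = torch.randn(1, 64, 128, 128)          # encoder feature map (assumed shape)
sub = dwt(x)                              # contracting path: 4x channels, half resolution
rec = iwt(sub)                            # expanding path: original resolution restored
skip = rec + x                            # element-wise-sum skip connection, no concatenation
assert torch.allclose(rec, x, atol=1e-5)  # the Haar DWT/IWT pair is lossless

Because the Haar analysis and synthesis filters form an orthonormal pair, the IWT reconstructs the input exactly, which is the property the abstract refers to when it says the lost details are fully recovered; summing the skip connection instead of concatenating it keeps the channel count, and hence the parameter count, low.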
