The limited computing power of nearshore and ship-borne devices poses significant challenges for accurate, real-time object detection. We propose a nearshore video object detector (NVID) to tackle these challenges. Because the nearshore environment contains many dynamic objects, we develop a "you can look more" (YCLM) module to perceive their temporal characteristics. Furthermore, to improve the network's ability to detect objects of different sizes, we design parallel deformable attention (PDA) based on the spatial features of objects. More importantly, we develop fast re-parameterization convolution (FREConv) and faster conv (FConv), and, building on these innovations, propose a fast re-parameterization network (FRENet) specifically tailored to produce low-parameter, multi-scale feature outputs. With end-to-end training, our pipeline outperforms other state-of-the-art (SOTA) methods on the NearshoreObjects dataset: 90.4 AP50 (average precision, +4.7), 9.3M parameters (−1.0M), and 24.8 frames per second (FPS, +0.6) on a Jetson Nano. NVID also achieves excellent results on the OnBoard dataset: 90.3 AP50 (+2.8), 9.3M parameters (−1.0M), and 26.5 FPS (+0.8) on a Jetson Nano. The source code can be accessed at https://github.com/Yuanlin-Zhao/NVID.
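The abstract does not describe the internals of FREConv, but re-parameterization convolutions generally rely on a standard identity: because convolution is linear, parallel training-time branches (e.g. a 3×3 and a 1×1 convolution) can be fused into a single inference-time kernel by padding the smaller kernel and summing the weights, cutting parameters and latency at deployment. The sketch below verifies this identity with NumPy; it is a generic illustration of the fusion principle, not the paper's FREConv, and `conv2d`, `k3`, `k1` are illustrative names.

```python
import numpy as np

def conv2d(x, k):
    """Naive single-channel 'same'-padded cross-correlation (illustrative only)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x  = rng.standard_normal((8, 8))
k3 = rng.standard_normal((3, 3))   # training-time 3x3 branch
k1 = rng.standard_normal((1, 1))   # training-time 1x1 branch

# Training-time forward pass: sum of parallel branches.
multi_branch = conv2d(x, k3) + conv2d(x, k1)

# Inference-time re-parameterization: embed the 1x1 kernel in the
# center of a 3x3 kernel, then fold the branches into one kernel.
fused_kernel = k3 + np.pad(k1, 1)
fused = conv2d(x, fused_kernel)

# The fused single-branch conv reproduces the multi-branch output.
assert np.allclose(multi_branch, fused)
```

In practice the same folding is applied per output channel (and batch-norm statistics are absorbed into the kernel and bias), so the deployed network runs a single convolution where training used several.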