Vision-guided robots are widely deployed in modern industry, yet accurately tracking moving objects in real time remains a challenge. In this paper, a hybrid adaptive control scheme combining an Extreme Learning Machine (ELM) with a proportional–integral–derivative (PID) controller is proposed for dynamic visual tracking with a robot manipulator. The scheme extracts line features on the image plane from a laser–camera system and determines an optimal control input that guides the robot so that the image features are aligned with their desired positions. The observation and state-space equations are first derived by analyzing the motion of the camera and the object. The system is then represented as an autoregressive moving average with exogenous input (ARMAX) model, yielding a valid estimation model. An adaptive predictor estimates online the relevant 3D parameters between the camera and the object, which are then used to calculate the system sensitivity required by the neural network. The ELM–PID controller adaptively adjusts the control parameters, and the scheme was validated on a physical robot platform. The experimental results show that the proposed method achieves superior vision-tracking performance compared with pure P and PID controllers.
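The abstract does not specify the controller's internals, so the following is only a rough illustrative sketch of the general ELM–PID idea: a single-hidden-layer network with fixed random input weights (the ELM part) maps the tracking-error terms to bounded PID gains, and only the output weights are adapted online. The class name, the gain bounds, the sign-only sensitivity approximation, and the first-order test plant are all assumptions for illustration, not the paper's method.

```python
import numpy as np

class ELMPID:
    """Illustrative ELM-PID sketch (assumed structure, not the paper's design):
    an ELM with fixed random input weights maps the PID error terms to bounded
    gains; only the output weights are adapted online by a gradient step."""

    def __init__(self, n_hidden=12, lr=0.01, g_max=(5.0, 2.0, 0.5), seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_hidden, 3))                 # fixed random input weights
        self.b = rng.normal(size=n_hidden)                      # fixed random biases
        self.beta = rng.normal(scale=0.1, size=(3, n_hidden))   # trainable output weights
        self.g_max = np.asarray(g_max)                          # upper bounds for (Kp, Ki, Kd)
        self.lr = lr
        self.e_prev = None
        self.e_int = 0.0

    def control(self, e, dt):
        de = 0.0 if self.e_prev is None else (e - self.e_prev) / dt
        self.e_int += e * dt
        x = np.array([e, self.e_int, de])             # PID error terms
        h = np.tanh(self.W @ x + self.b)              # ELM hidden layer
        s = 1.0 / (1.0 + np.exp(-(self.beta @ h)))    # sigmoid keeps gains bounded
        kp, ki, kd = self.g_max * s                   # current PID gains
        u = kp * e + ki * self.e_int + kd * de        # control input
        # Gradient step on 0.5*e^2, assuming positive plant sensitivity dy/du
        # (a common sign-only approximation when the true Jacobian is unknown;
        # the paper instead estimates this sensitivity with an adaptive predictor).
        self.beta += self.lr * e * np.outer(x * self.g_max * s * (1.0 - s), h)
        self.e_prev = e
        return u

# Closing the loop on an assumed first-order plant y' = -y + u:
ctrl = ELMPID()
y, dt, r = 0.0, 0.05, 1.0       # state, time step, setpoint
for _ in range(400):
    u = ctrl.control(r - y, dt)
    y += dt * (-y + u)
```

The key ELM property used here is that the input weights `W` and biases `b` stay fixed after random initialization, so online adaptation touches only the output weights `beta`, keeping the per-step update cheap.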