A pipelined optical-flow processing system that works as a virtual motion sensor has been described. It is based on a field-programmable gate array (FPGA) device, enabling the configuration parameters to be changed easily so as to adapt the sensor to different speeds, lighting conditions and other environmental factors. It is referred to as a 'virtual sensor' because it consists of a conventional camera as the front-end, supported by an FPGA processing device that embeds the frame grabber, the optical-flow algorithm implementation, the output module and some configuration and storage circuitry. This is the first fully stand-alone, working optical-flow processing system for which both the accuracy and the speed of measurement of the platform are reported. The customisability of the system for different hardware resources and platforms has also been discussed, showing the resources and performance for a stand-alone board and for a PCI co-processing board.

Optical-flow computation consists of extracting a dense velocity field from an image sequence, under the assumption that intensity is conserved during displacement (this constraint is formalised at the end of this section). The result may then be used in other applications such as 3-D reconstruction, time interpolation of image sequences, video compression, segmentation from motion, tracking, robot navigation, time-to-collision estimation and so on. The technical difficulty in estimating the motion of objects in 3-D is that, in the image formation process, the perspective projection of the 3-D world onto the 2-D image plane discards some of the information. There are several ways of recovering the 3-D information from 2-D images using various cues: motion, binocular stereopsis, texture, shading and contour. In this paper we describe the implementation of a real-time motion-flow system, leaving the potential applications for future studies.

Optical-flow algorithms have been widely described in the literature. Some authors have addressed a comparative study of the accuracy of different approaches with synthetic sequences (1); evaluation with real-life sequences is difficult to address because the true optical flow of such sequences is unknown. We have focused on a classical gradient model based on Lucas and Kanade's (L&K) approach (1, 2), a software sketch of which is given at the end of this section. Several authors have emphasised the satisfactory trade-off between accuracy and efficiency of this model, which is an important factor when deciding which model is most suitable for a real-time processing system. In the comparative study of (1), the L&K algorithm provides very accurate results; moreover, other authors specifically evaluating the efficiency-against-accuracy trade-off of different optical-flow approaches (3) also regard the L&K model as being quite efficient. Finally,
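For concreteness, the intensity-conservation assumption and the gradient-based L&K estimator referred to above can be written in their standard form; the weighting $W$ and the integration neighbourhood $\Omega$ below are generic choices, not necessarily those of the implementation described here:

\[
I(x + u\,\delta t,\; y + v\,\delta t,\; t + \delta t) = I(x, y, t)
\quad\Rightarrow\quad
I_x u + I_y v + I_t = 0 ,
\]
\[
\begin{pmatrix}
\sum_{\Omega} W I_x^2 & \sum_{\Omega} W I_x I_y \\
\sum_{\Omega} W I_x I_y & \sum_{\Omega} W I_y^2
\end{pmatrix}
\begin{pmatrix} u \\ v \end{pmatrix}
= -\begin{pmatrix}
\sum_{\Omega} W I_x I_t \\
\sum_{\Omega} W I_y I_t
\end{pmatrix} ,
\]

where the first relation is linearised by a first-order Taylor expansion, and the second is the weighted least-squares system that L&K solves over each local neighbourhood to obtain the velocity $(u, v)$.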
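A minimal software sketch of the same estimator is given below, using NumPy with a plain square integration window (uniform weighting, i.e. W = 1) and a small eigenvalue threshold; the window size and threshold are our illustrative assumptions, not parameters of the FPGA pipeline. It mirrors the least-squares system above and is intended only to make the computation concrete:

import numpy as np

def lucas_kanade(frame1, frame2, window=5, tau=1e-2):
    """Return dense (u, v) velocity fields for two consecutive grey frames."""
    I1 = frame1.astype(np.float64)
    I2 = frame2.astype(np.float64)
    # Spatial gradients of the first frame and the temporal difference.
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    half = window // 2
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for y in range(half, I1.shape[0] - half):
        for x in range(half, I1.shape[1] - half):
            # Gather gradients over the local neighbourhood Omega.
            ix = Ix[y - half:y + half + 1, x - half:x + half + 1].ravel()
            iy = Iy[y - half:y + half + 1, x - half:x + half + 1].ravel()
            it = It[y - half:y + half + 1, x - half:x + half + 1].ravel()
            # Normal equations of the least-squares system given above.
            A = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                          [np.sum(ix * iy), np.sum(iy * iy)]])
            b = -np.array([np.sum(ix * it), np.sum(iy * it)])
            # Skip ill-conditioned neighbourhoods (aperture problem).
            if np.min(np.linalg.eigvalsh(A)) < tau:
                continue
            u[y, x], v[y, x] = np.linalg.solve(A, b)
    return u, v

Calling lucas_kanade(frame1, frame2) on two consecutive grey-level frames returns per-pixel velocities in pixels per frame; the eigenvalue test discards neighbourhoods whose gradient structure cannot constrain both velocity components, which is the usual software counterpart of the confidence thresholding used in hardware implementations.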