Omnidirectional vision sensors are mainly used for the geometrical interpretation of scenes; however, few researchers have investigated how to perform object detection with such systems. Existing approaches require a geometrical transformation of the omnidirectional images prior to interpretation: a face detection algorithm trained on perspective images is then applied to the unwrapped image. In this paper, we focus on processing the omnidirectional images directly, as provided by the sensor. In adapting algorithms developed for perspective images to omnidirectional images, our results suggest that the choice of descriptors is a critical step.