Abstract

Mobile visual search systems typically compare a query image against a database of annotated images for accurate object recognition. On-server database matching can search a large database hosted in the cloud, but query latency can suffer from slow network transmissions or server congestion. On-device database matching ensures fast recognition responses regardless of network or server conditions, but the limited memory of a mobile device severely restricts the number of images an on-device database can store. This paper presents a new hybrid system that combines the advantages of on-device and on-server database matching. At the core of this system, we first develop a compact and discriminative global signature to characterize each image. Our global signature uses an optimized local feature count derived from a statistical analysis of retrieval performance. We additionally create two extensions that exploit color information within images and relationships between similar database images to improve retrieval accuracy. We then propose methods for efficient interframe coding of the sequence of global signatures extracted from the viewfinder frames on the mobile device. The resulting low-bitrate stream of global signatures can be sent to the server at an uplink bitrate of less than 2 kbps, both to broaden the search range of the current query and to update the on-device database for future queries.
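The interframe coding idea can be illustrated with a minimal sketch: because consecutive viewfinder frames look nearly identical, their binary global signatures differ in only a few bit positions, so transmitting just the flipped positions per frame is far cheaper than resending each full signature. The function names and the bit-list representation below are illustrative assumptions, not the paper's actual codec.

```python
# Minimal sketch of interframe (differential) coding of binary global
# signatures. Each frame's signature is a fixed-length list of bits;
# we encode only the bit positions that flipped since the previous frame.
# All names here are hypothetical, not taken from the paper.

def encode_stream(signatures):
    """Encode a sequence of equal-length bit lists as per-frame diffs."""
    encoded = []
    prev = [0] * len(signatures[0])  # all-zero reference for the first frame
    for sig in signatures:
        # Positions where the signature changed relative to the previous frame.
        diff = [i for i in range(len(sig)) if sig[i] != prev[i]]
        encoded.append(diff)
        prev = sig
    return encoded

def decode_stream(encoded, length):
    """Reconstruct the signature sequence from the per-frame diffs."""
    signatures = []
    current = [0] * length
    for diff in encoded:
        for i in diff:
            current[i] ^= 1  # flip each changed bit
        signatures.append(current.copy())
    return signatures
```

For similar consecutive frames the diff lists are short, so an entropy-coded version of this stream can stay within a very small uplink budget, which is the intuition behind the sub-2-kbps figure quoted above.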
