In multi-agent mobile robotic systems, coordinating the gaze of each autonomous robot without the use of predetermined gaze attractors requires sharing either a gaze vector together with knowledge of each robot's location, or a visual description of what the gaze should be directed at. In the latter case, the description should consist of the least amount of data possible to reduce transmission delay and processing requirements. Sharing a single, unique visual feature is sufficient to direct gaze. Unfortunately, typical feature detection methods are tuned to detect many hundreds of features to maximize successful matches between corresponding images. This paper presents an alternative feature detection method that is tuned to select only a few high-quality features. Not only does this method achieve the accuracy of state-of-the-art methods over the target range, it does so faster and yields a significantly smaller amount of data that must be shared between agents to direct gaze to the same point of interest.