Recent interest in dynamic sound localisation models has created a need to better understand the head movements that humans make. Previous studies have shown that static head positions and small head oscillations obey Donders' law: for each facing direction there is exactly one three-dimensional head orientation. It is unclear whether the same constraint applies to audiovisual localisation, where head movement is unrestricted and subjects may rotate their heads depending on the available auditory information. In an auditory-guided visual search task, human subjects were instructed to localise an audiovisual target within a field of visual distractors in the frontal hemisphere, while their head and torso movements were monitored with a motion capture system. Head rotations were found to follow Donders' law during the search task. Subjects differed in how much head roll they employed, but including these individual differences in a gimbal model yielded no statistically significant improvement in model performance. The roll component of head rotation could therefore be predicted with a truncated Fick gimbal, in which a pitch axis is nested within a yaw axis. This reduces the number of degrees of freedom needed to model head movement during localisation tasks from three to two.
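As a concrete illustration of the truncated Fick gimbal described above, the following sketch (in Python with NumPy; the function names and axis conventions are ours, chosen for illustration, with z vertical and y along the interaural axis) composes a pitch rotation nested within a yaw rotation and checks the Donders'-law property this model implies: the interaural axis never leaves the horizontal plane, so the roll component is fully determined by the facing direction and only two degrees of freedom remain.

```python
import numpy as np

def rot_yaw(theta):
    """Rotation about the vertical (z) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_pitch(phi):
    """Rotation about the interaural (y) axis by phi radians."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[ c, 0.0,  s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0,  c]])

def fick_orientation(yaw, pitch):
    """Truncated Fick gimbal: a pitch axis nested inside a yaw axis.

    Roll is fixed at zero, so the full 3-D orientation is determined
    by the 2-D facing direction (a Donders'-law constraint).
    """
    return rot_yaw(yaw) @ rot_pitch(pitch)

yaw, pitch = np.radians(30.0), np.radians(-15.0)
R = fick_orientation(yaw, pitch)

facing = R @ np.array([1.0, 0.0, 0.0])      # direction the "nose" points
interaural = R @ np.array([0.0, 1.0, 0.0])  # head's left-right axis

# Consequence of the Fick nesting order: the interaural axis has no
# vertical component for any (yaw, pitch), i.e. zero roll everywhere.
print(facing)
print(interaural[2])  # ~0.0 up to floating-point error
```

Under these assumptions, a measured head orientation could be tested against the model by decomposing it in Fick (yaw-pitch-roll) order and checking whether the residual roll angle is close to zero; a sketch only, not the analysis pipeline used in the study.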