Information about an event takes different amounts of time to be processed depending on which sensory system the event activates. However, despite these variations in processing time for lights and sounds, the point of subjective simultaneity (PSS) for briefly presented audio/visual stimuli is usually close to true simultaneity. Here we confirm that the simultaneity constancy mechanism that achieves this for audio/visual stimulus pairs is adaptable, and we extend the investigation to other multimodal combinations. We measured the PSS and just noticeable differences (JNDs) for temporal order judgements for three stimulus combinations (sound/light, sound/touch, and light/touch) before and after repeated exposure to each of these pairs presented with a 100 ms asynchrony (i.e., nine adapt-test combinations). Only the perception of simultaneity of the sound/light pair was affected by our exposure regime: the PSS shifted after exposure to either a temporally staggered sound/light or light/touch pair, and the JND decreased following exposure to a sound/touch pair. No changes were found in the PSSs or JNDs of sound/touch or light/touch pairs following exposure to any of the three temporally staggered combinations. Participants' reaction times (RTs) to the three stimuli were also tested before and after each adaptation exposure. In general, exposure did not affect attention or processing time; the only change in RTs (of the nine tested) was an increased RT to light following exposure to a sound/light pair with light leading. We suggest that the neural correlates of multisensory sound/light processing are resynchronised by a simultaneity constancy mechanism that is separate from, and more flexible than, the systems handling light/touch and sound/touch simultaneity.
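In a temporal order judgement (TOJ) task of the kind described above, the PSS and JND are conventionally estimated by fitting a cumulative Gaussian to the proportion of (say) "light first" responses across stimulus onset asynchronies (SOAs): the PSS is the 50% point of the fitted function, and the JND is often taken as half the 25-75% interquartile span. The following is a minimal illustrative sketch of that standard procedure, not the authors' actual analysis code; the function names, the grid-search maximum-likelihood fit, and the simulated data are all assumptions made for clarity.

```python
import math

def cum_gauss(x, mu, sigma):
    # Probability of a "light first" response at SOA x (ms),
    # modelled as a cumulative Gaussian with mean mu and SD sigma.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_toj(soas, n_light_first, n_trials):
    # Maximum-likelihood grid search over (mu, sigma); a real analysis
    # would use a continuous optimiser, but a grid keeps this self-contained.
    best_mu, best_sigma, best_ll = 0.0, 50.0, -float("inf")
    for mu in range(-100, 101, 2):
        for sigma in range(10, 201, 2):
            ll = 0.0
            for x, k, n in zip(soas, n_light_first, n_trials):
                p = min(max(cum_gauss(x, mu, sigma), 1e-9), 1 - 1e-9)
                ll += k * math.log(p) + (n - k) * math.log(1 - p)
            if ll > best_ll:
                best_mu, best_sigma, best_ll = mu, sigma, ll
    pss = best_mu                  # 50% point: point of subjective simultaneity
    jnd = 0.6745 * best_sigma      # half the 25-75% interquartile span
    return pss, jnd
```

For example, responses simulated from a true PSS of 20 ms (light must lead by 20 ms to appear simultaneous) are recovered by the fit, and a post-exposure shift in the PSS would appear as a lateral displacement of the whole fitted curve, while a change in JND would appear as a change in its slope.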