Numerous studies have proposed that our adaptive motor behaviors depend on learning a map between sensory information and limb movement,1-3 called an "internal model." From this perspective, how the brain represents internal models is a critical issue in motor learning, especially regarding their association with the spatial frames processed during motor planning.4,5 Extensive experimental evidence suggests that, during the planning of visually guided hand reaching, the brain transforms visual target representations in gaze-centered coordinates into motor commands in limb coordinates, via hand-target vectors in workspace coordinates.6-9 While many studies have intensively investigated whether learning for reaching occurs in workspace or limb coordinates,10-20 whether such learning is associated with gaze coordinates remains untested.21 Given the critical role of gaze-related spatial coding in reach planning,22-26 the potential role of gaze states in learning is worth examining. Here, we show that motor memories for reaching are learned separately according to target location in gaze coordinates. Specifically, two opposing visuomotor rotations, which normally interfere with each other, can be learned simultaneously when each is associated with reaching to a foveal or a peripheral target. We also show that this gaze-dependent learning occurs in force-field adaptation. Furthermore, generalization of gaze-coupled reach adaptation is limited across the central, right, and left visual fields. These results suggest that gaze states are available for the formation and recall of multiple internal models for reaching. Our findings provide novel evidence that a gaze-dependent spatial representation can serve as a spatial coordinate framework for context-dependent motor learning.
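To make the transformation chain referenced above concrete, the following is a minimal schematic sketch; the notation is ours, not the authors', and it omits eye-head geometry, depth, and noise. A target represented in gaze-centered coordinates, t_gaze, is combined with the gaze direction g to recover the target in workspace coordinates; subtracting the hand position h gives the hand-target vector; and an inverse mapping f^{-1} converts that vector into limb (joint) coordinates:

$$
\mathbf{t}_{\mathrm{ws}} = \mathbf{t}_{\mathrm{gaze}} + \mathbf{g}, \qquad
\mathbf{d} = \mathbf{t}_{\mathrm{ws}} - \mathbf{h}, \qquad
\boldsymbol{\theta} = f^{-1}(\mathbf{h},\, \mathbf{d}).
$$

Under this reading, the question posed above is whether the learned map is tied to the gaze-centered representation t_gaze, rather than only to the workspace vector d or the limb coordinates θ.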