Primates exploring and exploiting a continuous sensorimotor space rely on dynamic maps in the dorsal stream. Two complementary perspectives exist on how these maps encode rewards. Reinforcement learning models integrate rewards incrementally over time, efficiently resolving the exploration/exploitation dilemma. Working-memory buffer models explain the rapid plasticity of parietal maps but lack a plausible exploration/exploitation policy. The reinforcement learning model presented here unifies both accounts, enabling rapid, information-compressing map updates and an efficient transition from exploration to exploitation. As predicted by our model, activity in human frontoparietal dorsal stream regions, but not in MT+, tracks the number of competing options: preferred options are selectively maintained on the map, while spatiotemporally distant alternatives are compressed out. When valuable new options are uncovered, posterior β1/α oscillations desynchronize within 0.4 to 0.7 s, consistent with option encoding by competing β1-stabilized subpopulations. Together, these findings indicate that outcomes matching locally cached reward representations rapidly update parietal maps, biasing choices toward frequently sampled, rewarded options.
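For readers unfamiliar with the incremental reward-integration mechanism the abstract invokes, the following is a minimal textbook sketch, not the authors' unified model: a delta-rule value update over a discrete set of options combined with a softmax policy that trades exploration against exploitation. All names and parameter values (n_options, alpha, tau, the toy reward structure) are illustrative assumptions, not quantities from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_options = 8   # hypothetical number of competing options on the map
alpha = 0.1     # learning rate for incremental reward integration
tau = 0.25      # softmax temperature: high tau explores, low tau exploits

# Locally cached reward estimates, one per option
values = np.zeros(n_options)

def choose(values, tau):
    """Softmax policy over cached values (numerically stabilized)."""
    logits = values / tau
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(len(values), p=p)

def update(values, option, reward, alpha):
    """Delta rule: nudge the cached estimate toward the observed outcome."""
    values[option] += alpha * (reward - values[option])

# Toy environment (assumed): one option pays off more than the rest
true_rewards = rng.uniform(0.0, 0.5, n_options)
true_rewards[3] = 1.0

for trial in range(500):
    option = choose(values, tau)
    reward = true_rewards[option] + rng.normal(0.0, 0.1)  # noisy outcome
    update(values, option, reward, alpha)

print(values.round(2))  # the estimate for option 3 should dominate
```

Under this sketch, early trials spread choices broadly (exploration) while the delta rule concentrates value on rewarded options, after which the softmax increasingly resamples them (exploitation); the abstract's compression of "spatiotemporally distant alternatives" would be an additional mechanism beyond this standard update.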