The intention to act influences the computations of various task-relevant features. However, little is known about the time course of these computations. Furthermore, it is commonly held that these computations are governed by conjunctive neural representations of the features. Yet support for this view comes from paradigms that arbitrarily combine task features and affordances and thus require representations in working memory. Therefore, the present study used electroencephalography and a well-rehearsed task with features that afford minimal working memory representations to investigate the temporal evolution of feature representations and their potential integration in the brain. Female and male human participants viewed and grasped objects or touched them with a knuckle. Objects had different shapes and were made of heavy or light materials, with shape and weight being relevant for grasping but not for "knuckling." Multivariate analysis showed that representations of object shape were similar for grasping and knuckling. However, only for grasping did early shape representations reactivate at later phases of grasp planning, suggesting that sensorimotor control signals feed back to early visual cortex. Grasp-specific representations of material/weight arose only during grasp execution, after object contact during the load phase. A trend toward integrated representations of shape and material also became grasp-specific, but only briefly around movement onset. These results suggest that the brain generates action-specific representations of relevant features as required for the different subcomponents of its action computations.
Our results argue against the view that goal-directed actions inevitably join all features of a task into a sustained and unified neural representation.

Significance statement: The idea that all the features of a task are integrated into a joint representation or event file is widely supported, but this support rests largely on paradigms with arbitrary stimulus-response combinations. Our study is the first to investigate grasping using electroencephalography to search for the neural basis of feature integration in such a daily-life task with overlearned stimulus-response mappings. Contrary to the notion of event files, we find limited evidence for integrated representations. Instead, we find that task-relevant features form representations at specific phases of the action, suggesting that action intentions reactivate representations of relevant features. Our results show that integrated representations do not occur universally for any kind of goal-directed behaviour but arise on demand, as the action's computations require.