The design of collaborative robotics, such as driver-assisted operations, engineers a potential automation of decision-making predicated on the unobtrusive gathering of data from human users. This form of ‘somatic surveillance’ (Hayles, Unthought: the power of the cognitive nonconscious. University of Chicago Press, Chicago, 2017, p. 11) increasingly relies on behavioural biometrics and sensory algorithms to verify the physiology of bodies in cabin interiors. Such processes secure cyber-physical space, but they also register user capabilities for control, yielding data as insured risk. In this technical re-formation of human–machine interactions for control and communication, ‘a dissonance of attribution’ (Hancock et al., Proc Natl Acad Sci 116(16):7684, 2019. https://doi.org/10.1073/pnas.1805770115) is created between perceptions of phenomena, materials and decision-making. This reconfigures relations not only between humans and machines, objects and subjects, but may also disrupt attributive functions in the social system of Law. What it requires is a shift in legal accountability for action from the sovereignty of the human to a new materialist account based on a ‘cognitive assemblage’ of physiological data, computation and algorithmic sensing. This paper investigates the function of law as a guidance system that acknowledges sensory and algorithmic computation as autonomous ‘sensing agents’ (Hansen, Feed-forward: on the future of twenty-first-century media. University of Chicago Press, Chicago, 2015) that may be held accountable in situations of risk. Such an assemblage of robotic computation and sensory determination requires a clearer legal differentiation beyond the current static terminologies of person, property, liability and rights, which maintain strict separations of object from subject. If this is neglected, we argue, law will impute attributions of error solely to humans despite evidence of operation through mutual control.