The use of terms like "collaboration" and "co-workers" to describe interactions between human beings and certain artificial intelligence (AI) systems has gained significant traction in recent years. Yet it remains an open question whether such anthropomorphic metaphors provide a fertile, or even a merely innocuous, lens through which to conceptualize designed commercial products. Rather, a respect for human dignity and the principle of transparency may require us to draw a sharp distinction between real and faux peers. At the heart of the concept of collaboration lies the assumption that the collaborating parties are (or behave as if they are) of similar status: two agents capable of comparable forms of intentional action, moral agency, or moral responsibility. Applied to current AI systems, this assumption seems to fail not only ontologically but also socio-politically. AI in the workplace is primarily an extension of capital, not of labor, and the AI "co-workers" of most individuals will likely be owned and operated by their employer. In this paper, we critically assess both the accuracy and the desirability of using the term "collaboration" to describe interactions between humans and AI systems. We begin by proposing an alternative ontology of human-machine interaction, one which features not two equivalently autonomous agents but rather one machine that stands in a relationship of heteronomy to one or more human agents. On this view, while the machine may have a significant degree of independence concerning the means by which it achieves its ends, the ends themselves are always chosen by at least one human agent, whose interests may differ from those of the individuals interacting with the machine. Finally, we consider the motivations and risks inherent in the continued use of the term "collaboration," exploring its strained relation to the concept of transparency and its consequences for the future of work.