Abstract

Studies have shown that using robot pets in dementia care contributes to a reduction in loneliness and anxiety, among other benefits. Studies also show that, even when people know they are dealing with robots, they often treat the robot as though it were a real pet with genuine emotions. This disconnect between beliefs and behavior occurs not just for people living with dementia, but also for cognitively healthy adults, including those who are knowledgeable about how robots work. One possible explanation is that robot pets prompt contradictory beliefs, and so the use of robot pets encourages self-deception. Sparrow argues that this makes the use of robot pets in dementia care morally objectionable. We disagree. We argue that Gendler's concept of alief offers a better explanation of the belief-behavior disconnect observed when people interact with robot pets. An alief is a mental state composed of an automatic, arational, emotional, and behavioral response to representational input. Aliefs are not beliefs and are not subject to truth norms. Thus, on our view, the harms associated with the use of robot pets in dementia care are not likely to include the self-deceptions that Sparrow suggests. It might seem like philosophical hair-splitting to claim that deception has not occurred because discordant aliefs rather than false beliefs have been formed, but this distinction matters. We argue that aliefs carry their own risks, and that these risks are important to consider when using robot pets in dementia care.
