Recent developments in artificial intelligence (AI) have prompted considerable discussion among philosophers of technology and psychotherapists alike. In particular, the question of whether new forms of AI will complement or even replace traditional psychotherapists has emerged as a major contemporary debate. This debate is not entirely new: it has its origins in the Turing test of 1950 and in ELIZA, an early psychotherapy chatbot developed at MIT in 1966. However, recent advances in AI technology, coupled with long waiting lists and uneven access to psychotherapists, have made the question of machine psychotherapists urgent. Already, there are psychotherapy apps that one can download onto a standard smartphone and use in lieu of a human psychotherapist. In the near future, this simulacrum of a human therapist may be extended by android therapists, programmed to reproduce the knowledge and behavior of human therapists instantly and appealing to convenience or self-gratification. This raises a host of ethical questions: can such beings be equally effective, and if so, ought we to reason in a consequentialist manner in their favor, so as to increase accessibility and reduce costs? Would there be a psychological difference, if only a subtle one, between automated and potentially anthropomorphic therapy and genuine human therapy? Even if a chatbot or robot therapist is transparently such, is there an element of emotional manipulation and potential dishonesty in the interaction? Can the safety of clients and the security of their data be ensured? I will argue that key aspects of chatbot psychotherapy present major ethical and clinical challenges in these areas, although transparent forms of it should not be legally banned.

Keywords: chatbots, technological momentum, automation, psychotherapy, trust, anthropomorphism, data