The rapid spread of artificial intelligence (AI) systems has precipitated a rise in ethical and rights-based frameworks intended to guide the development and use of these technologies. Despite the proliferation of these principles, there is mounting public concern over the influence that AI systems have in our society, and coalitions across all sectors are organizing to resist harmful applications of AI worldwide. Responses have come from peoples everywhere: from workers protesting unethical conduct and applications of AI, to students protesting MIT's relationship with its donor Jeffrey Epstein, a sex trafficker and pedophile, to the healthcare community, to indigenous peoples addressing "the twin problems of a lack of reliable data and information on indigenous peoples and biopiracy and misuse of their traditional knowledge and cultural heritage", to smart city stakeholders, to many others.

Like corporations, governments around the world have adopted strategies for becoming leaders in the development and use of artificial intelligence, fostering environments congenial to AI innovators. Yet neither corporations nor policymakers have sufficiently addressed how the rights of children fit into their AI strategies or products. The role of artificial intelligence in children's lives, from how children play, to how they are educated, to how they consume information and learn about the world, is expected to grow dramatically over the coming years. It is therefore imperative that stakeholders evaluate the risks and assess the opportunities to use artificial intelligence to maximize children's wellbeing in a thoughtful and systematic manner.

This paper discusses AI and children's rights in the context of social media platforms such as YouTube, smart toys, and AI education applications. It analyzes case studies of the Hello Barbie, CloudPets, and Cayla smart toys, the ElsaGate content scandal on social media, and education's new Intelligent Tutoring Systems and student-surveillance apps.
Though AI offers valuable benefits for children, it presents particular challenges around important issues including child safety, privacy, data protection, device security, and consent. Technology giants, all of whom are heavily investing in and profiting from AI, must not dominate the public discourse on its responsible use. All of us need a voice in shaping a future that preserves our core values and democratic institutions. As artificial intelligence finds its way further into our daily lives, its propensity to interfere with our rights grows more severe. Many of the issues raised in this examination of harmful AI are not new, but they are greatly exacerbated by the scale, proliferation, and real-world impact that artificial intelligence enables. The potential of artificial intelligence to both help and harm people is far greater than that of earlier technologies. It is therefore critical to keep examining what safeguards and structures can address AI's problems and harms, including those that disproportionately affect marginalized people. Assumptions embedded in AI algorithms will shape how our world is realized; many of these algorithms are flawed and biased, and they must not become locked in. Our best human judgment is needed to contain AI's harmful impacts. Perhaps one of AI's greatest contributions will be to make us understand, at last, how important human wisdom truly is to life on earth.