GENIUS MAKERS: The Mavericks Who Brought AI to Google, Facebook, and the World by Cade Metz. New York: Dutton, 2021. 371 pages including notes, references, and index. Hardcover; $28.00. ISBN: 9781524742676.

As Cade Metz says in the acknowledgments section, this is a book "not about the technology [of AI] but about the people building it ... I was lucky that the people I wanted to write about were so interesting and so eloquent and so completely different from one [an]other" (p. 314).

And that is what this book is about. It is about people such as Geoff Hinton, founder of DNNresearch, who, once he reached his late fifties, never sat down because of his bad back. It is about others who came after him, including Yann LeCun, Ian Goodfellow, Andrew Ng, Yoshua Bengio, Jeff Dean, Jürgen Schmidhuber, Li Deng, Ilya Sutskever, Alex Krizhevsky, Demis Hassabis, and Shane Legg, each of whom had their own strengths, weaknesses, and quirks.

The book also follows the development of interest in AI at companies such as Google, Microsoft, Facebook, DeepMind, and OpenAI. DeepMind is perhaps the least known of these. It is the company, led by Demis Hassabis, that first made headlines by training a neural network to play old Atari games such as Space Invaders, Pong, and Breakout, using reinforcement learning combined with deep neural networks. It attracted a lot of attention from investors such as Elon Musk, Peter Thiel, and Google's Larry Page.

While most companies were interested in applying AI to improve their products, DeepMind's goal was AGI, "Artificial General Intelligence"--technology that could do anything the human brain could do, only better. DeepMind was also the first company to take a stand on two issues: if the company were bought out (which it was, by Google), (1) its technology would not be used for military purposes, and (2) an independent ethics board would oversee the use of DeepMind's AGI technology, whenever that arrived (p. 116).
Part One of the book, "A New Kind of Machine," follows the early players in the field as they navigate the early "AI winters," experiment with various new algorithms and technologies, and work through breakthroughs and disappointments. From the beginning, there were clashes of personality, collaboration and competition, and promises kept and broken.

Part Two, titled "Who Owns Intelligence?," explores how many of the people named above were wooed by the different companies and moved back and forth between them, sometimes working together and sometimes competing with each other. The companies understood the power of neural networks and deep learning, but they could not develop the technologies without the direction of the leading researchers, who were in limited supply. To woo the best researchers, the companies competed to develop exciting, show-stopping technology, such as self-driving cars and AI systems that could play (and beat) the best humans at chess and Go.

In Part Three, "Turmoil," the author explores how the players began to recognize the shortcomings and potentially dangerous effects of AI systems, which were becoming more and more capable at a variety of tasks. "Deep fakes" of celebrities and the auto-generation of fake news (often on Facebook) led many to question the direction AI was taking. Ian Goodfellow said, "There's a lot of other areas where AI is opening doors that we've never opened before. And we don't really know what's on the other side" (p. 211). One surprising figure taking a stand on the side of caution was Elon Musk, who gave repeated warnings about the possible rise of superintelligent actors. Further, it was discovered that the Chinese government was already using AI-based facial recognition to track its citizens as they moved about.

Other concerns dampened the community. It was discovered that small and unexpected flaws in training could have significant effects on an AI system's ability to do its job. For example, "by slapping a few Post-it notes on a stop sign, [researchers] could fool a car into thinking it wasn't there" (p. 212). Additionally, biases in training data were being exposed, leading some to believe that AI systems would not equally benefit minority groups and could even discriminate against them. Furthermore, Google was being approached by the US government to assist in the development of programs that could be used in warfare. Finally, Facebook was struggling to contain fake news and finding that even AI could not effectively combat it.

In the final sections of the book, the author explores the AI researchers' attitudes toward the future and the big questions. Will AI systems eventually be able to take over all work, even physical labor? Can the AI juggernaut be controlled and directed? Will AGI be fully realized?

This last question is explored in the chapter titled "Religion." "Belief in AGI required a leap of faith. But it drove some researchers forward in a very real way. It was something like a religion," said roboticist Sergey Levine (p. 290). The feasibility of AGI continues to generate much debate, with one camp claiming that it is inevitable and the other insisting that AI systems will excel only at limited tasks in limited environments.

As a Christian, I found the debates about the proper role of AI intriguing. Is the development of AGI inevitable? Should we as Christians petition companies and governments to hold debates on the pursuit of AGI? Should we enact laws to limit or prohibit the use of AI in warfare? Should independent evaluators be required to review AI systems for discrimination? Should Christians participate in the further development of AGI?

Learning the histories and attitudes of the leading individuals in the development of AI also intrigued me. Many of them seem to have very little concern for the potentially negative impact of their work; their only motivation seems to be fame and fortune. It makes me wonder whether the field of computer science should require all its practitioners to take ethics training, as professional engineers are required to do. This book certainly confirms the importance of ethics in the field of computer science and the need for its practitioners to be people of virtue.

In summary, this was a different kind of book from many others in the field of technology. It was fascinating that so much of what I was reading about had happened in just the last ten years. Hearing the anecdotes of back-office meetings, public outcries, and false claims was intriguing. If you, like me, wonder how we got to where we are today in the area of AI, this is the book for you.

Reviewed by Victor T. Norman, Assistant Professor of Computer Science, Calvin University, Grand Rapids, MI 49546.
