Abstract

This chapter traces the history of neural networks as a logical calculus aimed at formalizing neural activity. After a brief prehistory of artificial intelligence (in the work of Leibniz), the early beginnings of neural networks and their main protagonists, McCulloch, Pitts, Lettvin and Wiener, are presented in detail, and their connections with Russell and Carnap are explained. Their encounters at the University of Chicago and MIT are described, with attention to both historical facts and curiosities. The next section introduces Rosenblatt, who invented the first true learning rule for neural networks, and Minsky and Papert, who identified one of the great problems for neural networks, the XOR problem. The next epoch presented is the 1980s, with the San Diego circle working on neural networks from a cognitive science perspective, continuing to recent history and the birth of deep learning. Major trends behind the history of artificial intelligence and neural networks are explored and placed in both a historical and a systematic context, with an exploration of their philosophical aspects.
