History of AI Part 2

CS123, Intro to AI

Topics 
Overview of AI
Neural networks and deep learning
AI Problem Solving Revisited
Machine Learning Part 1
Applications of AI
Generative AI + Prompt engineering
Machine Learning Part 2
Custom chatbot creation
History of AI + Midterm
Social and ethical issues of AI
Final

 

Table of Contents

Introduction
What's Happening this Week
Marvin Minsky, ANNs and the MIT AI Lab
Frank Rosenblatt and the Perceptron
AI winter
Geoffrey Hinton makes ANNs Cool Again
IBM Watson becomes a Jeopardy Champion
Siri, Alexa and Google Assistant
Fei-Fei Li and ImageNet
Yann LeCun and Convolutional Neural Networks (CNNs)
Ian Goodfellow and Generative Adversarial Networks (GANs)
References

 

Marvin Minsky, ANNs and the MIT AI Lab

Marvin Lee Minsky was an American cognitive and computer scientist who is often referred to as one of the fathers of AI. He defined AI as “the science of making machines do things that would require intelligence if done by men”.

1951: While studying mathematics at Princeton, Minsky built the first learning machine, an artificial neural network (ANN) made from vacuum tubes called the Stochastic Neural Analog Reinforcement Calculator, or SNARC. It consisted of 40 artificial Hebb synapses.


Hebb synapse image by Gregory Loan.
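The idea behind a Hebb synapse is that a connection gets stronger when the neurons on both sides of it are active at the same time (often summarized as "neurons that fire together wire together"). Below is a minimal, illustrative sketch of a Hebbian weight update in Python; it models the rule only, not SNARC's actual vacuum-tube circuitry, and the learning rate and activity pattern are made up for the example.

import numpy as np

def hebbian_update(weights, pre, post, learning_rate=0.1):
    # Strengthen each connection in proportion to how strongly its
    # pre- and post-synaptic neurons are active together.
    return weights + learning_rate * np.outer(post, pre)

# Three input (pre-synaptic) neurons and two output (post-synaptic) neurons.
weights = np.zeros((2, 3))

# One activity pattern: inputs 0 and 2 fire, and both outputs fire.
pre = np.array([1.0, 0.0, 1.0])
post = np.array([1.0, 1.0])

weights = hebbian_update(weights, pre, post)
print(weights)  # only the connections between co-active neurons have grown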

1959: Together with John McCarthy, he co-founded the Massachusetts Institute of Technology's AI laboratory.

Frank Rosenblatt and the Perceptron

1957: The perceptron, designed by Frank Rosenblatt, was based on the McCulloch–Pitts mathematical model of a neuron (1943). It was a system for supervised machine learning of binary classifiers. The first perceptron, known as the Mark I, was a combination of software that ran on an IBM 704 and custom hardware consisting of transistorized circuits. It was built at the Cornell Aeronautical Laboratory, which was affiliated with Cornell University in New York.


Frank Rosenblatt working on wiring for a perceptron.

This was an early example of connectionism, which was a competing approach to symbolism, the dominant approach to AI at the time.

The perceptron and connectionism were notably criticized by Marvin Minsky and Seymour Papert in their 1969 book Perceptrons, in which they argued that the perceptron had severe limitations; for example, a single-layer perceptron cannot compute simple functions such as XOR. Their critique contributed to a decrease in enthusiasm and funding for perceptron research, marking the beginning of what is known as the "AI winter".
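To make the idea concrete, here is a minimal perceptron trained with Rosenblatt's learning rule on a toy, linearly separable problem (the logical AND function). This is an illustrative sketch in Python, not the Mark I's implementation. The limitation Minsky and Papert highlighted shows up if you replace the AND targets with XOR targets ([0, 1, 1, 0]): no single perceptron can learn them, because XOR is not linearly separable.

import numpy as np

# Toy training set: the logical AND of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # desired outputs (labels)

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

def predict(x):
    # Threshold unit: output 1 if the weighted sum of inputs exceeds zero.
    return 1 if np.dot(weights, x) + bias > 0 else 0

# Perceptron learning rule: nudge the weights by the prediction error.
for epoch in range(10):
    for x, target in zip(X, y):
        error = target - predict(x)
        weights += learning_rate * error * x
        bias += learning_rate * error

print([predict(x) for x in X])  # [0, 0, 0, 1] after training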

 

AI winter

In 1973, the applied mathematician Sir James Lighthill published a critical report on academic AI research, claiming that researchers had essentially over-promised and under-delivered when it came to the potential intelligence of machines. His condemnation resulted in stark funding cuts.

The period between the late 1970s and early 1990s signaled an “AI winter”—a term first used in 1984—that referred to the gap between AI expectations and the technology’s shortcomings.
(From The History of AI: A Timeline of Artificial Intelligence)

Geoffrey Hinton makes ANNs Cool Again

1986: Geoffrey Hinton, a British-Canadian computer scientist then at Carnegie Mellon who is often referred to as the "godfather of AI", was among several researchers who helped make neural networks cool again by demonstrating that they could be trained using backpropagation for improved image recognition and word prediction.
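Backpropagation is essentially the chain rule from calculus applied layer by layer: the network's output error is propagated backwards to work out how much each weight contributed to it, and every weight is then nudged in the direction that reduces the error. The sketch below is a minimal, illustrative two-layer network trained with backpropagation on the XOR problem (the function a single perceptron cannot learn); the layer sizes, learning rate, and iteration count are arbitrary choices for the example, not anything from Hinton's work.

import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so it needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units and a single output unit.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))
lr = 1.0

for step in range(10000):
    # Forward pass: compute the network's outputs.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: the chain rule, applied layer by layer.
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer

    # Gradient-descent updates: nudge each weight to reduce the error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # typically close to [[0], [1], [1], [0]] after training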

Geoffrey Hinton at Collision 2024 in Toronto.

Photo by Vaughn Ridley/Collision via Sportsfile - Collision Conf, CC BY 2.0

The 2024 Nobel Prize in Physics was awarded to John Hopfield and Geoffrey Hinton "for foundational discoveries and inventions that enable machine learning with artificial neural networks", work that paved the way for modern AI.

Hinton has frequently spoken publicly about the potential risks and benefits of AI. Among the risks he has warned about are a flood of AI-generated misinformation, large-scale job displacement, autonomous weapons, and the possibility of AI systems that eventually become more intelligent than humans and escape our control.

Hinton also emphasizes the potential for AI to do enormous good, especially in healthcare, for example by helping to diagnose disease and design new drugs.

 

IBM Watson becomes a Jeopardy Champion

In 2011, an IBM computer system named Watson competed on the quiz show Jeopardy! and beat two of the show’s all-time champions, Ken Jennings and Brad Rutter.

The original Watson was a room-size computer consisting of 10 racks holding 90 servers, with a total of 2,880 processor cores. Watson was trained on information from Wikipedia, encyclopedias, dictionaries, religious texts, novels, plays, and books from Project Gutenberg, among other sources.

Watson’s architecture, known as DeepQA, utilized over 100 different algorithms and techniques to analyze questions, generate hypotheses, and evaluate evidence. The more of its algorithms that independently arrived at the same answer, the higher Watson’s confidence level. If the confidence level was high enough, Watson was programmed to buzz in during a game of Jeopardy. It took Watson an average of 3 seconds to come up with an answer.
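The toy Python sketch below only illustrates the voting idea described above: several independent scorers propose answers, agreement among them is turned into a confidence value, and the system "buzzes in" only when that confidence clears a threshold. The function names, candidate answers, and threshold are all hypothetical; this is not IBM's DeepQA code.

from collections import Counter

def aggregate_confidence(candidate_answers):
    # The larger the fraction of scorers that agree on one answer,
    # the higher the confidence in that answer.
    votes = Counter(candidate_answers)
    best_answer, best_votes = votes.most_common(1)[0]
    return best_answer, best_votes / len(candidate_answers)

# Hypothetical outputs from ten independent scoring algorithms.
candidates = ["Jupiter", "Saturn", "Jupiter", "Jupiter", "Jupiter",
              "Saturn", "Jupiter", "Jupiter", "Jupiter", "Jupiter"]

answer, confidence = aggregate_confidence(candidates)

BUZZ_THRESHOLD = 0.7  # made-up threshold for this illustration
if confidence >= BUZZ_THRESHOLD:
    print(f"Buzz in with '{answer}' (confidence {confidence:.0%})")
else:
    print("Stay quiet: not confident enough")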

Siri, Alexa and Google Assistant

In 2011, Apple demoed a virtual assistant named Siri. In 2014, Amazon released its virtual assistant, Alexa, and in 2016 Google released Google Assistant. All three had natural language processing capabilities and could understand a spoken question and respond with an answer. But they had limitations: they used “command-and-control systems,” which are programmed to understand a long list of questions but cannot answer anything that falls outside their programming.
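A "command-and-control" system in this sense is essentially a lookup from recognized phrasings to canned responses, so anything outside the programmed list fails. The toy Python sketch below uses made-up phrases and responses (not any vendor's actual implementation) just to show why such a system cannot handle unanticipated questions.

# Toy "command-and-control" assistant: a fixed table of commands.
COMMANDS = {
    "what time is it": "It is 3:00 PM.",
    "what is the weather": "It is sunny and 72 degrees.",
    "set a timer for ten minutes": "Timer set for ten minutes.",
}

def respond(utterance):
    # Only utterances that match the programmed list get a real answer.
    return COMMANDS.get(utterance.lower().strip("?! ."),
                        "Sorry, I can't help with that.")

print(respond("What time is it?"))      # programmed question: works
print(respond("Why is the sky blue?"))  # outside the list: fails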

Fei-Fei Li and ImageNet

Fei-Fei Li is a Chinese-American computer scientist who is known as the "godmother of artificial intelligence".

In 2009 she and her team created ImageNet, a large-scale dataset that has been instrumental in advancing computer vision and deep learning neural networks. She is also a strong advocate for diversity and ethical considerations in AI, and in 2017 she co-founded AI4ALL, an organization dedicated to increasing diversity and inclusion in AI.

Fei-Fei Li.

Yann LeCun and Convolutional Neural Networks (CNNs)

CNNs were pioneered by Yann LeCun in the late 1980s and early 1990s. He developed the LeNet-5 architecture, which was designed to recognize handwritten digits. This work laid the foundation for modern deep learning and computer vision applications. Yann LeCun and Geoffrey Hinton are often referred to as “godfathers of AI”.
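The core idea in a CNN is the convolution: a small filter of weights is slid across the image, so the same weights are reused at every location and the network can pick out local patterns such as edges. Below is a minimal sketch of a single 2-D convolution in Python; the edge-detecting filter is hand-written for the example and is not LeNet-5 itself (in a real CNN the filter weights are learned from data).

import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image and take a dot product at each
    # position (no padding, stride 1).
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a bright vertical stripe in the middle column.
image = np.zeros((5, 5))
image[:, 2] = 1.0

# A vertical-edge filter: responds where brightness changes left to right.
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])

print(conv2d(image, kernel))  # strong responses on either side of the stripe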

Ian Goodfellow and Generative Adversarial Networks (GANs)

In 2014, Ian Goodfellow developed Generative Adversarial Networks (GANs), which have been foundational in advancing AI’s ability to generate realistic images.
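The key idea is that two networks are trained against each other: a generator G turns random noise z into candidate images, while a discriminator D tries to tell real training images x from the generator's fakes, so each one's improvement forces the other to improve. Goodfellow and his co-authors framed this as a minimax game, written here in LaTeX notation:

\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]

Here D(x) is the discriminator's estimate of the probability that x is real, and G(z) is an image generated from the noise vector z.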

 

References

"Godfather of Artificial Intelligence" Geoffrey Hinton on the promise, risks of advanced AI, CBS 60 Minutes.

Watson, ‘Jeopardy!’ champion, IBM.

The History of AI: A Timeline of Artificial Intelligence, Coursera.

The Quest for Artificial Intelligence: A History of Ideas and Achievements, Nils J. Nilsson, Cambridge University Press, 2010.

Timeline of AI, an interactive timeline of the history of AI.

 


Creative Commons License Intro to AI lecture notes by Brian Bird, written in , are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Note: GPT-4 and GPT-4o were used to draft parts of these notes.