How does Artificial Intelligence work? - Can machines think?

Can machines think? — Alan Turing, 1950

Less than a decade after helping to crack the Nazi Enigma code, a feat that helped the Allied forces win World War II, mathematician Alan Turing changed history a second time with a simple question: “Can machines think?”

Turing’s 1950 paper “Computing Machinery and Intelligence,” and the Turing Test it introduced, established the fundamental goal and vision of artificial intelligence.

At its core, AI is the branch of computer science that aims to answer Turing’s question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines.

The expansive goal of artificial intelligence has given rise to many questions and debates, so much so that no single definition of the field is universally accepted.

The major limitation in defining AI as simply “building machines that are intelligent” is that it doesn’t actually explain what artificial intelligence is. So what makes a machine intelligent?

In their groundbreaking textbook Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. In this framing, AI is “the study of agents that receive percepts from the environment and perform actions” (Russell and Norvig viii).

Norvig and Russell go on to explore four different approaches that have historically defined the field of AI:

  • Thinking humanly

  • Thinking rationally

  • Acting humanly

  • Acting rationally

The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting that “all the skills needed for the Turing Test also allow an agent to act rationally” (Russell and Norvig 4).
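
Russell and Norvig’s agent framing is concrete enough to sketch in a few lines of code. The snippet below is a minimal, hypothetical illustration (not code from the textbook): a thermostat-style agent receives a percept from its environment, chooses the action its simple policy judges best for its goal, and a loop ties perception and action together.

    # Minimal sketch of the percept -> action loop behind Russell and Norvig's
    # agent framing (hypothetical illustration, not code from the textbook).
    import random

    class ThermostatAgent:
        """A simple agent whose goal is to keep the temperature near a target."""

        def __init__(self, target=21.0):
            self.target = target

        def act(self, percept):
            """Map a percept (the current temperature) to an action."""
            if percept < self.target - 1:
                return "heat"
            if percept > self.target + 1:
                return "cool"
            return "idle"

    def environment_step(temperature, action):
        """Toy environment: the action shifts the temperature, plus random drift."""
        if action == "heat":
            temperature += 0.5
        elif action == "cool":
            temperature -= 0.5
        return temperature + random.uniform(-0.2, 0.2)

    agent = ThermostatAgent(target=21.0)
    temperature = 17.0
    for step in range(10):
        action = agent.act(temperature)                      # perceive and decide
        temperature = environment_step(temperature, action)  # environment responds
        print(f"step {step}: temperature={temperature:.1f}, action={action}")

A rational agent in Russell and Norvig’s sense would replace the hard-coded thresholds with whatever policy best serves its performance measure, but the percept-decide-act loop stays the same.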

Patrick Winston, the Ford Professor of Artificial Intelligence and Computer Science at MIT, defined AI as “algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together.”

While these definitions may seem abstract to the average person, they help focus the field as an area of computer science and provide a blueprint for infusing machines and programs with machine learning and other subsets of artificial intelligence.

A more practical working definition puts it this way: “AI is a computer system able to perform tasks that ordinarily require human intelligence… Many of these artificial intelligence systems are powered by machine learning, some of them are powered by deep learning and some of them are powered by very boring things like rules.”
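
That distinction between rule-based systems and systems that learn from data can be made concrete with a small sketch. The toy example below is hypothetical (not drawn from any system quoted here): the first function encodes a decision as a hand-written rule, while the second estimates the same kind of decision threshold from labeled examples.

    # Hypothetical toy contrast between a hand-written rule and a learned model.
    # Task: flag a message as spam based on how many flagged keywords it contains.

    # 1) Rule-based: a person writes the decision logic directly.
    def rule_based_is_spam(keyword_count):
        return keyword_count >= 3  # fixed threshold chosen by hand

    # 2) Machine learning: the threshold is estimated from labeled examples.
    def learn_threshold(examples):
        """Return the integer threshold that misclassifies the fewest examples."""
        best_threshold, best_errors = 0, len(examples) + 1
        for threshold in range(0, 11):
            errors = sum((count >= threshold) != is_spam for count, is_spam in examples)
            if errors < best_errors:
                best_threshold, best_errors = threshold, errors
        return best_threshold

    # (keyword count, is it actually spam?)
    training_data = [(0, False), (1, False), (2, False), (4, True), (5, True), (7, True)]
    learned = learn_threshold(training_data)

    print("rule-based decision:", rule_based_is_spam(4))  # True, from the fixed rule
    print("learned threshold:", learned)                  # 3 for this toy data set
    print("learned decision:", 4 >= learned)              # True, inferred from data

Deep learning replaces the single fitted number with millions of parameters adjusted automatically, but the underlying contrast is the same: the behavior is fitted to data rather than written out as rules.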

History of AI

Intelligent robots and artificial beings first appeared in the myths of ancient Greece. Aristotle’s development of the syllogism and its use of deductive reasoning was a key moment in mankind’s quest to understand its own intelligence. While the roots are long and deep, the history of artificial intelligence as we think of it today spans less than a century. The following is a quick look at some of the most important events in AI.

1942

Isaac Asimov introduces the “Three Laws of Robotics” in his short story “Runaround.”

1943

Warren McCulloch and Walter Pitts publish “A Logical Calculus of the Ideas Immanent in Nervous Activity.” The paper proposes the first mathematical model for building a neural network.

1949

In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they’re used. Hebbian learning continues to be an important model in AI.

1950

Alan Turing publishes “Computing Machinery and Intelligence,” proposing what is now known as the Turing Test, a method for determining if a machine is intelligent.

Claude Shannon publishes the paper “Programming a Computer for Playing Chess.”

1951

Marvin Minsky and Dean Edmonds build SNARC, the first artificial neural network computer.

1954

The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.

1956

Arthur Samuel develops a self-learning program to play checkers.

The phrase artificial intelligence is coined at the “Dartmouth Summer Research Project on Artificial Intelligence.” Led by John McCarthy, the conference, which defined the scope and goals of AI, is widely considered to be the birth of artificial intelligence as we know it today.

Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first reasoning program.

1958

John McCarthy develops the AI programming language Lisp and publishes the paper “Programs with Common Sense.” The paper proposes the hypothetical Advice Taker, a complete AI system able to learn from experience as effectively as humans do.

1959

Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving.

Herbert Gelernter develops the Geometry Theorem Prover program.

Arthur Samuel coins the term machine learning while at IBM.

John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.

1963

John McCarthy starts the AI Lab at Stanford.

1966

The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translation research, a major Cold War initiative that promised automatic and instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.

1969

The first successful expert systems, DENDRAL, which identifies unknown organic compounds from mass spectrometry data, and MYCIN, which diagnoses blood infections, are developed at Stanford.

1972

The logic programming language PROLOG is created.

1973

The “Lighthill Report,” detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for artificial intelligence projects.

1974-1980

Frustration with the slow progress of AI development leads to major DARPA cutbacks in academic grants. Together with the earlier ALPAC report and the previous year’s “Lighthill Report,” these cutbacks cause artificial intelligence funding to dry up and research to stall. This period is known as the “First AI Winter.”

1980

Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that lasts for much of the decade, effectively ending the first “AI Winter.”

1982

Japan’s Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.

1983

In response to Japan’s FGCS, the U.S. government launches the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and artificial intelligence.

1985

Companies are spending more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers designed to run the AI programming language Lisp.

1987-1993

As computing technology improves, cheaper alternatives emerge and the Lisp machine market collapses in 1987, ushering in the “Second AI Winter.” During this period, expert systems prove too expensive to maintain and update and eventually fall out of favor.

1991

U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.

1992

Japan terminates the FGCS project, citing failure to meet the ambitious goals outlined a decade earlier.

1993

DARPA ends the Strategic Computing Initiative after spending nearly $1 billion and falling far short of expectations.

1997

IBM’s Deep Blue beats world chess champion Garry Kasparov.

2005

STANLEY, a self-driving car, wins the DARPA Grand Challenge.

The U.S. military begins investing in autonomous robots like Boston Dynamics’ “BigDog” and iRobot’s “PackBot.”

2008

Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.

2011

IBM’s Watson trounces the competition on Jeopardy!

2012

Andrew Ng, founder of the Google Brain Deep Learning project, feeds 10 million YouTube videos to a neural network as a training set. Using deep learning algorithms, the network learns to recognize a cat without being told what a cat is, ushering in a breakthrough era for neural networks and deep learning funding.

2014

Google makes the first self-driving car to pass a state driving test.

2016

Google DeepMind’s AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a major hurdle to clear in AI.

2020

Baidu releases its LinearFold AI algorithm to scientific and medical teams working to develop a vaccine during the early stages of the SARS-CoV-2 pandemic. The algorithm is able to predict the secondary structure of the virus’s RNA sequence in just 27 seconds, 120 times faster than other methods.
