Moravec's paradox

Moravec's paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. As Moravec writes: "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."[1]

Linguist and cognitive scientist Steven Pinker considers this the most significant discovery uncovered by AI researchers. In his book The Language Instinct, he writes:

"The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived.... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come."[2]

Marvin Minsky emphasizes that the most difficult human skills to reverse engineer are those that are unconscious. "In general, we're least aware of what our minds do best," he writes, and adds "we're more aware of simple processes that don't work well than of complex ones that work flawlessly."[3]

The biological basis of human skills

One possible explanation of the paradox, offered by Moravec, is based on evolution. All human skills are implemented biologically, using machinery shaped by natural selection. In the course of their evolution, natural selection has tended to preserve design improvements and optimizations. The older a skill is, the more time natural selection has had to improve its design. Abstract thought developed only very recently, and consequently, we should not expect its implementation to be particularly efficient.

As Moravec writes:

"Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it."[4]

A compact way to express this argument would be:

  • We should expect the difficulty of reverse-engineering any human skill to be roughly proportional to the amount of time that skill has been evolving in animals.
  • The oldest human skills are largely unconscious and so appear to us to be effortless.
  • Therefore, we should expect skills that appear effortless to be difficult to reverse-engineer, but skills that require effort may not necessarily be difficult to engineer at all.

Some examples of skills that have been evolving for millions of years: recognizing a face, moving around in space, judging people’s motivations, catching a ball, recognizing a voice, setting appropriate goals, paying attention to things that are interesting; anything to do with perception, attention, visualization, motor skills, social skills and so on.

Some examples of skills that have appeared more recently: mathematics, engineering, human games, logic and much of what we call science. These are hard for us because they are not what our bodies and brains were primarily designed to do. These are skills and techniques that were acquired recently, in historical time, and have had at most a few thousand years to be refined, mostly by cultural evolution.[5]

Historical influence on artificial intelligence

In the early days of artificial intelligence research, leading researchers often predicted that they would be able to create thinking machines in just a few decades (see history of artificial intelligence). Their optimism stemmed in part from the fact that they had been successful at writing programs that used logic, solved algebra and geometry problems and played games like checkers and chess. Logic and algebra are difficult for people and are considered a sign of intelligence. They assumed that, having (almost) solved the "hard" problems, the "easy" problems of vision and commonsense reasoning would soon fall into place. They were wrong, and one reason is that these problems are not easy at all, but incredibly difficult. The fact that they had solved problems like logic and algebra was irrelevant, because these problems are extremely easy for machines to solve.[6]

Rodney Brooks explains that, according to early AI research, intelligence was "best characterized as the things that highly educated male scientists found challenging", such as chess, symbolic integration, proving mathematical theorems and solving complicated word algebra problems. "The things that children of four or five years could do effortlessly, such as visually distinguishing between a coffee cup and a chair, or walking around on two legs, or finding their way from their bedroom to the living room were not thought of as activities requiring intelligence."[7]

This led Brooks to pursue a new direction in artificial intelligence and robotics research. He decided to build intelligent machines that had "No cognition. Just sensing and action. That is all I would build and completely leave out what traditionally was thought of as the intelligence of artificial intelligence."[7] This new direction, which he called "Nouvelle AI", was highly influential on subsequent robotics and AI research.[8]

Notes

  1. Moravec 1988, p. 15.
  2. Pinker 2007, p. [page needed].
  3. Minsky 1988, p. 29.
  4. Moravec 1988, pp. 15–16.
  5. Even given that cultural evolution is faster than genetic evolution, the difference in development time between these two kinds of skills is five or six orders of magnitude, and (Moravec would argue) there has not been nearly enough time for us to have "mastered" the new skills.
  6. These are not the only reasons that their predictions did not come true; see history of artificial intelligence.
  7. Brooks (2002), quoted in McCorduck (2004, p. 456).
  8. McCorduck 2004, p. 456.

References

  • Brooks, Rodney (2002), Flesh and Machines: How Robots Will Change Us, Pantheon Books.
  • McCorduck, Pamela (2004), Machines Who Think (2nd ed.), A. K. Peters.
  • Minsky, Marvin (1988), The Society of Mind, Simon and Schuster.
  • Moravec, Hans (1988), Mind Children: The Future of Robot and Human Intelligence, Harvard University Press.
  • Pinker, Steven (2007) [1994], The Language Instinct, Harper Perennial Modern Classics.