- History of artificial intelligence
The history of artificial intelligence begins in
antiquity with myths, stories and rumors of artificial beings endowed with intelligence and consciousness by master craftsmen. In the middle of the 20th century, a handful of scientists began to explore a new approach to this ancient idea, based on their discoveries in neurology, a new mathematical theory of information, an understanding of control and stability called cybernetics and, above all, the invention of the digital computer, a machine based on the abstract essence of mathematical reasoning.

The field of
artificial intelligence research was born at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become the leaders of AI research for many decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation, and they were given millions of dollars to make this vision come true. Eventually it became obvious that they had grossly underestimated the difficulty of the project. In 1973, in response to the criticism of Sir James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, the Japanese government and American industry would provide AI with billions of dollars, but again the investors would be disappointed, and by the late 80s the funding would dry up again. This cycle of boom and bust, of AI winters and summers, continues to the present day. Undaunted, there are those who make extraordinary predictions even now. [For example, Kurzweil (2005) argues that machines with human-level intelligence will exist by 2029.]

Despite the rise and fall of AI in the perceptions of venture capitalists and government bureaucrats, AI has made continuous advances in all areas regardless of the climate, overcoming unexpected obstacles, reorienting itself in the light of new discoveries and riding the crest of the wave of increasing computer power. Progress has been slower than predicted, but it has continued nonetheless. Artificial intelligence problems that had begun to seem impossible in 1970 have been solved, and the solutions are now used in successful commercial products.
It remains to be seen when or if an AI system will be built with a human level of intelligence.
Alan Turing, in a famous 1950 paper, asked the question "Can machines think?" and concluded: "We can only see a short distance ahead, but we can see plenty there that needs to be done." [Turing 1950, p. 460]

Precursors
McCorduck (2004) writes that "
artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized," expressed in humanity's myths, legends, stories, speculation and clockwork automatons. [McCorduck 2004, pp. 5–35]

AI in myth, fiction and speculation
Mechanical men and artificial beings appear in
Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea. [McCorduck 2004, p. 5] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Geber's "Takwin", Paracelsus' homunculus and Rabbi Judah Loew's Golem. [McCorduck 2004, pp. 15–16; Buchanan 2005, p. 50 (Judah Loew's Golem); McCorduck 2004, pp. 13–14 (Paracelsus); O'Connor 1994 (Geber's "Takwin")] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's "Frankenstein" or Karel Čapek's "R.U.R. (Rossum's Universal Robots)", and in speculation, such as Samuel Butler's "Darwin Among the Machines".

Automatons
Realistic humanoid
automatons were built by craftsmen from every civilization, including Yan Shi, Hero of Alexandria, Al-Jazari [A Thirteenth Century Programmable Robot: http://www.shef.ac.uk/marcoms/eview/articles58/robot.html] and Wolfgang von Kempelen. The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion. Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it." [Quoted in McCorduck 2004, p. 8. Crevier 1993, p. 1 and McCorduck 2004, pp. 6–9 discuss the sacred statues.] [Other important automatons were built by Haroun al-Rashid (McCorduck 2004, p. 10), Jacques de Vaucanson (McCorduck 2004, p. 16) and Leonardo Torres y Quevedo (McCorduck 2004, pp. 59–62).]

Formal reasoning
In the 17th century,
Thomas Hobbes, René Descartes and Gottfried Leibniz explored the possibility that all rational thought could be made as systematic as algebra or geometry. Hobbes famously wrote in "Leviathan": "reason is nothing but reckoning".
Leibniz envisioned a universal language of reasoning (his "characteristica universalis") which would reduce argumentation to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, to sit down to their slates, and to say to each other (with a friend as witness, if they liked): 'Let us calculate.'" These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

Computer science
Main articles: history of computer hardware and history of computer science
Calculating machines were built in antiquity and improved throughout history by many mathematicians, including (once again) the philosopher Gottfried Leibniz. The first modern computers were the massive code breaking machines of the
Second World War (such as the Z3, ENIAC and Colossus). [McCorduck 2004, pp. 61–62, 64–66; Russell & Norvig 2003, pp. 14–15]

A key insight was the
Turing machine, a simple theoretical construct that captured the essence of abstract symbol manipulation. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. This would inspire a handful of scientists to begin discussing the possibility of thinking machines. [McCorduck 2004, pp. 63–64; Crevier 1993, pp. 22–24; Russell & Norvig 2003, p. 8; and see Turing 1936. Other important contributors to the theory of computation include John von Neumann (McCorduck 2004, pp. 76–80).]

The birth of artificial intelligence 1943−1956
"A note on the sections in this article". [The starting and ending dates of the sections in this article are adopted from Harvnb|Crevier|1993 and Harvnb|Russell|Norvig|2003|p=16−27. Themes, trends and projects are treated in the period that the most important work was done.]
In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of
artificial intelligence research was founded as an academic discipline in 1956.

Cybernetics and early neural networks
The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 30s, 40s and early 50s: the realization that the brain was an electrical network of neurons that fired in all-or-nothing pulses;
Norbert Wiener's cybernetics, which described control and stability in electrical networks; Claude Shannon's information theory, which described all-or-nothing signals; and Alan Turing's theory of computation. [McCorduck 2004, pp. 51–57, 80–107; Crevier 1993, pp. 27–32; Russell & Norvig 2003, pp. 15, 940; Moravec 1988, p. 3]

Robots built at this time, such as
W. Grey Walter's turtles and the Johns Hopkins Beast, did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry. [McCorduck 2004, p. 98; Crevier 1993, pp. 27–28; Russell & Norvig 2003, pp. 15, 940; Moravec 1988, p. 3]

Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network. [McCorduck 2004, pp. 51–57, 88–94; Crevier 1993, p. 30; Russell & Norvig 2003, pp. 15–16; and see also Pitts & McCulloch 1943]

One of the students inspired by Pitts and McCulloch was a young
Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC. [McCorduck 2004, p. 102; Crevier 1993, pp. 34–35; Russell & Norvig 2003, p. 17]
Minsky was to become one of the most important leaders and innovators in AI for the next 50 years.

Turing's test
In 1950
Alan Turing published a landmark paper in which he speculated about the possibility of creating machines with true intelligence. [McCorduck 2004, pp. 70–72; Crevier 1993, pp. 22–25; Russell & Norvig 2003, pp. 2–3 and 948; Haugeland 1985, pp. 6–9] He noted that "intelligence" is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teletype) that was indistinguishable from a conversation with a human being, then the machine could be called "intelligent." This simplified version of the problem allowed Turing to argue convincingly that a "thinking machine" was at least "plausible", and the paper answered all the most common objections to the proposition. [Russell & Norvig (2003, p. 948) claim that Turing answered all the major objections to AI that have been offered in the years since the paper appeared.] The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

Symbolic reasoning and the Logic Theorist
When access to digital computers became possible in the middle fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols, and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines. [McCorduck 2004, pp. 137–170; Crevier 1993, pp. 44–47]
In 1955,
Allen Newell and (future Nobel laureate) Herbert Simon created the "Logic Theorist" (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead's "Principia Mathematica", and find new and more elegant proofs for some. [McCorduck 2004, pp. 123–125; Crevier 1993, pp. 44–46; Russell & Norvig 2003, p. 17] Simon said that they had "solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind." [Quoted in Crevier 1993, p. 46 and Russell & Norvig 2003, p. 17] (This was an early statement of the philosophical position John Searle would later call "strong AI": that machines can contain minds just as human bodies do.) [Russell & Norvig 2003, pp. 947, 952]

Dartmouth Conference 1956: the birth of AI
The
Dartmouth Conference of 1956 [McCorduck 2004, pp. 111–136; Crevier 1993, pp. 49–51] was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathan Rochester of IBM. The proposal for the conference included this assertion: "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it", a clear statement of the philosophical position of AI research. [See McCarthy, Minsky, Rochester & Shannon 1955. Also see Crevier 1993, p. 48, where Crevier states "[the proposal] later became known as the 'physical symbol systems hypothesis'". The physical symbol system hypothesis was articulated and named by Newell and Simon in their paper on GPS (Newell & Simon 1963). It includes a more specific definition of a "machine" as an agent that manipulates symbols. See the philosophy of artificial intelligence.] The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert Simon, all of whom would create important programs during the first decades of AI research. [McCorduck (2004, pp. 129–130) discusses how the Dartmouth conference alumni dominated the first two decades of AI research, calling them the "invisible college".] At the conference Newell and Simon debuted the "Logic Theorist" and McCarthy persuaded the attendees to accept "Artificial Intelligence" as the name of the field. ["I won't swear that I hadn't seen it before," McCarthy told Pamela McCorduck in 1979 (McCorduck 2004, p. 114). However, McCarthy also stated unequivocally "I came up with the term" in a CNET interview (Skillings 2006).] The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI. [Crevier (1993, p. 49) writes "the conference is generally recognized as the official birthdate of the new science."]

The golden years 1956−1974
The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs developed during this time were, to most people, simply "astonishing": [Russell and Norvig write "it was astonishing whenever a computer did anything remotely clever." (Russell & Norvig 2003, p. 18)] computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such "intelligent" behavior by machines was possible at all. [Crevier 1993, pp. 52–107; Moravec 1988, p. 9; Russell & Norvig 2003, pp. 18–21] Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years. [McCorduck 2004, p. 218; Crevier 1993, pp. 108–109; Russell & Norvig 2003, p. 21] Government agencies like ARPA poured money into the new field. [Crevier 1993, pp. 52–107; Moravec 1988, p. 9]
The work
There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:
Reasoning as search
Many AI programs used the same basic
algorithm in the early years of AI research: to achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called "reasoning as search". [Means-ends analysis, reasoning as search: McCorduck 2004, pp. 247–248; Russell & Norvig 2003, pp. 59–61]

The principal difficulty was that, for many problems, the number of possible paths through the "maze" was simply astronomical (this is called a "combinatorial explosion"). Researchers would reduce the search space by using heuristics or "rules of thumb" that would eliminate paths that were unlikely to lead to a solution. [Heuristic: McCorduck 2004, p. 246; Russell & Norvig 2003, pp. 21–22]

Newell and Simon tried to capture a general version of this algorithm in a program called the "General Problem Solver". [GPS: McCorduck 2004, pp. 245–250; Crevier 1993; Russell & Norvig 2003] Other "searching" programs were able to accomplish impressive tasks like solving problems in geometry and algebra: Herbert Gelernter's Geometry Theorem Prover (1958) and SAINT, written by Minsky's student James Slagle (1961). [Crevier 1993, pp. 51–58, 65–66; Russell & Norvig 2003, pp. 18–19] Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey. [McCorduck 2004, pp. 268–271; Crevier 1993, pp. 95–96; Moravec 1988, pp. 14–15]

Natural language
An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was
Daniel Bobrow's program STUDENT, which could solve high school algebra word problems. [McCorduck 2004, p. 286; Crevier 1993, pp. 76–79; Russell & Norvig 2003, p. 19]

A
semantic net represents concepts (e.g. "house", "door") as nodes and relations among concepts (e.g. "has-a") as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian [Crevier 1993, pp. 79–83] and the most successful (and controversial) version was Roger Schank's Conceptual Dependency. [Crevier 1993, pp. 164–172]

Perhaps the most interesting English-speaking computer program was
Joseph Weizenbaum's ELIZA, the first chatterbot. ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. [McCorduck 2004, pp. 291–296; Crevier 1993, pp. 134–139]

Micro-worlds
In the late 60s,
Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on the so-called "blocks world," which consists of colored blocks of various shapes and sizes arrayed on a flat surface. [McCorduck 2004, pp. 299–305; Crevier 1993, pp. 83–102; Russell & Norvig 2003, p. 19; and see also Micro-World AI: http://www.alanturing.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI06.html]

This paradigm led to innovative work in
machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented "constraint propagation"), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd's SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them. [McCorduck 2004, pp. 300–305; Crevier 1993, pp. 84–102; Russell & Norvig 2003, p. 19]

The optimism
The first generation of AI researchers made these predictions about their work:
* 1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem." [Simon & Newell 1958, pp. 7–8, quoted in Crevier 1993, p. 108. See also Russell & Norvig 2003, p. 21]
* 1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do." [Simon 1965, p. 96, quoted in Crevier 1993, p. 109]
* 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved." [Minsky 1967, p. 2, quoted in Crevier 1993, p. 109]
* 1970, Marvin Minsky (in "Life" magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being." [Minsky strongly believes he was misquoted. See McCorduck 2004, pp. 272–274; Crevier 1993, p. 96; and Darrach 1970.]

The money
In June 1963
MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund Project MAC, which subsumed the "AI Group" founded by Minsky and McCarthy five years earlier. ARPA continued to provide three million dollars a year until the 70s. [Crevier 1993, pp. 64–65]
ARPA made similar grants to Newell and Simon's program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963). [Crevier 1993, p. 94] Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965. [Howe 1994] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years. [McCorduck 2004, p. 131; Crevier 1993, p. 51. McCorduck also notes that funding was mostly under the direction of alumni of the Dartmouth conference of 1956.]

The money was proffered with few strings attached:
J. C. R. Licklider, then the director of ARPA, believed that his organization should "fund people, not projects!" and allowed researchers to pursue whatever directions might interest them. [Crevier 1993, p. 65] This created a freewheeling atmosphere at MIT that gave birth to the hacker culture, [Crevier 1993, pp. 68–71; Turkle 1984] but this "hands off" approach would not last.

The first AI winter 1974−1980
In the 70s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the results they had promised failed to materialize, funding for AI disappeared. [Crevier 1993, pp. 100–144; Russell & Norvig 2003, pp. 21–22] At the same time, the field of
connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky's devastating criticism of perceptrons. [McCorduck 2004, pp. 104–107; Crevier 1993, pp. 102–105; Russell & Norvig 2003, p. 22] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas. [Crevier 1993, pp. 163–196]

The problems
In the early seventies, the capabilities of AI programs were disturbingly limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, "toys". [Crevier 1993, p. 146] AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day. [Russell & Norvig 2003, pp. 20–21]
# Limited computer power: There was not enough memory or processing speed to accomplish anything truly useful. For example, Ross Quillian's successful work on natural language was demonstrated with a vocabulary of only twenty words, because that was all that would fit in memory. [Crevier 1993, pp. 146–148; see also Buchanan 2005, p. 56: "Early programs were necessarily limited in scope by the size and speed of memory"] Hans Moravec argued in 1976 that computers were still millions of times too weak to exhibit intelligence. He suggested an analogy: artificial intelligence requires computer power in the same way that aircraft require horsepower. Below a certain threshold, it's impossible, but, as power increases, eventually it could become easy. [Moravec 1976. McCarthy has always disagreed with Moravec, back to their early days together at SAIL. He states "I would say that 50 years ago, the machine capability was much too small, but by 30 years ago, machine capability wasn't the real problem," in a CNET interview (Skillings 2006).]
# Intractability and the combinatorial explosion: In 1972 Richard Karp (building on Stephen Cook's 1971 theorem) showed there are many problems that can probably only be solved in exponential time (in the size of the inputs). Finding optimal solutions to these problems required unimaginable amounts of computer time except when the problems were trivial. This almost certainly meant that many of the "toy" solutions used by AI would probably never scale up into useful systems. [Russell & Norvig 2003, pp. 9, 21–22; Lighthill 1973]
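The scale of this explosion is easy to see with a short sketch (not drawn from the sources cited here; the branching factor of roughly 35 used below is a commonly quoted rough estimate for chess):

```python
# A naive exhaustive search over all move sequences of depth d with
# branching factor b must examine on the order of b**d leaf states.
def leaves_explored(branching_factor: int, depth: int) -> int:
    """Number of frontier states in a brute-force search tree."""
    return branching_factor ** depth

# A "toy" problem stays manageable:
print(leaves_explored(3, 5))    # 243 states
# but a chess-like search does not:
print(leaves_explored(35, 10))  # 2758547353515625 states (~2.8 quadrillion)
```

Heuristics attack exactly this growth: pruning even a fraction of the branches at every level shrinks the base of the exponent, which is why "rules of thumb" mattered so much to early programs.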
# Commonsense knowledge and reasoning: Many important artificial intelligence applications like vision or natural language required simply enormous amounts of information about the world: the program needed to have some idea of what it might be looking at or what it was talking about. This required that the program know most of the same things about the world that a child does. Researchers soon discovered that this was a truly vast amount of information. No one in 1970 could build a database so large, and no one knew how a program might learn so much information. [McCorduck 2004, pp. 300 & 421; Crevier 1993, pp. 113–114; Moravec 1988, p. 13; Lenat 1989 (Introduction); Russell & Norvig 2003, p. 21]
# Moravec's paradox: It would eventually dawn on many AI researchers working with vision and robotics that tasks like proving theorems or solving geometry problems were easy for computers to carry out, but supposedly "simple" tasks like recognizing a face or crossing a room without bumping into anything were extremely difficult. This helped explain why research in these areas had made so little progress by the middle 1970s. [McCorduck 2004, p. 456; Moravec 1988, pp. 15–16]
# The frame and qualification problems: AI researchers (like John McCarthy) who used logic discovered that they could not represent ordinary deductions that involved planning or default reasoning without making changes to the structure of logic itself. They developed new logics (like non-monotonic logics and modal logics) to try to solve the problems. [McCarthy & Hayes 1969; Crevier 1993, pp. 117–119]

The end of funding
The agencies that funded AI research (such as the
British government, DARPA and the NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966, when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support. [McCorduck 2004, pp. 280–281; Crevier 1993, p. 110; Russell & Norvig 2003, p. 21; and NRC 1999 under "Success in Speech Recognition"] In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its "grandiose objectives" and led to the dismantling of AI research in that country. [Crevier 1993, p. 117; Russell & Norvig 2003, p. 22; Howe 1994; and see also Lighthill 1973] (The report specifically mentioned the combinatorial explosion problem as a reason for AI's failings.) [Russell & Norvig 2003, p. 22; Lighthill 1973. John McCarthy wrote in response that "the combinatorial explosion problem has been recognized in AI from the beginning" in his Review of Lighthill report: http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars. [Crevier 1993, pp. 115–116 (on whom this account is based). Other views include McCorduck 2004, pp. 306–313 and NRC 1999 under "Success in Speech Recognition".] By 1974, funding for AI projects was hard to find.

Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues: "Many researchers were caught up in a web of increasing exaggeration." [Crevier 1993, p. 115. Moravec explains, "Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn't in their next proposal promise less than in the first one, so they promised more."] However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund "mission-oriented direct research, rather than basic undirected research." The creative, freewheeling exploration that had gone on in the 60s would not be funded by DARPA. Instead, the money was directed to specific projects with clear objectives, like autonomous tanks and battle management systems. [NRC 1999 under "Shift to Applied Research Increases Investment." While the autonomous tank was a failure, the battle management system proved to be enormously successful, saving billions in the first Gulf War, repaying the investment and justifying DARPA's pragmatic policy, at least as far as DARPA was concerned.]

Critiques from across campus
Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel's incompleteness theorem showed that a
formal system (such as a computer program) could never see the truth of certain statements, while a human being could. [Crevier 1993, p. 22; Russell & Norvig 2003, pp. 949–950; Hofstadter 1980, pp. 471–477; and see Lucas 1961]

Hubert Dreyfus ridiculed the broken promises of the 60s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little "symbol processing" and a great deal of embodied, instinctive, unconscious "know how". ["Know-how" is Dreyfus' term. Dreyfus makes a distinction between "knowing how" and "knowing that", a modern version of Heidegger's distinction of ready-to-hand and present-at-hand. (Dreyfus & Dreyfus 1986)] [Dreyfus' critique of AI: McCorduck 2004, pp. 211–239; Crevier 1993, pp. 120–132; Russell & Norvig 2003, pp. 950–952; and see Dreyfus 1972]

John Searle's Chinese Room argument, presented in 1980, attempted to show that a program could not be said to "understand" the symbols that it uses (a quality called "intentionality"). If the symbols have no meaning for the machine, Searle argued, then the machine can never be truly intelligent. [McCorduck 2004, pp. 443–445; Crevier 1993, pp. 269–271; Russell & Norvig 2003, pp. 958–960; and see Searle 1980]

These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It wasn't clear what difference "know how" or "intentionality" made to an actual program. Minsky said of Dreyfus and Searle: "they misunderstand, and should be ignored." [Quoted in Crevier 1993, p. 143] Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers "dared not be seen having lunch with me." [Quoted in Crevier 1993, p. 122] Joseph Weizenbaum, the author of ELIZA, felt his colleagues' treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus' positions, he "deliberately made it plain that theirs was not the way to treat a human being." ["I became the only member of the AI community to be seen eating lunch with Dreyfus. And I deliberately made it plain that theirs was not the way to treat a human being." Joseph Weizenbaum, quoted in Crevier 1993, p. 123.]

Weizenbaum began to have serious ethical doubts about AI when
Kenneth Colby wrote DOCTOR, a chatterbot therapist. Weizenbaum was disturbed that Colby saw his mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. Eventually Weizenbaum would publish a thoughtful moral critique of AI. [McCorduck 2004, pp. 356–373; Crevier 1993, pp. 132–144; Russell & Norvig 2003, p. 961; and see Weizenbaum 1976]

Perceptrons and the dark age of connectionism
A
perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that "perceptron may eventually be able to learn, make decisions, and translate languages." An active research program into the paradigm was carried out throughout the 60s but came to a sudden halt with the publication of Minsky and Papert's 1969 book "Perceptrons". They showed that there were severe limitations to what perceptrons could do and that Frank Rosenblatt's predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was done in connectionism for 10 years. Eventually, a new generation of researchers would revive the field, and thereafter it would become a vital and useful part of artificial intelligence. Rosenblatt would not live to see this, as he died in a boating accident shortly after the book was published.

The neats: logic, Prolog and expert systems
Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal. In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 60s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems. [McCorduck 2004, p. 51; Crevier 1993, pp. 190−192] A more fruitful approach to logic was developed in the 70s by Robert Kowalski at the University of Edinburgh, and soon this led to a collaboration with the French researchers Alain Colmerauer and Philippe Roussel, who created the successful logic programming language Prolog. [Crevier 1993, pp. 193−196] Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permits tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert Simon that would lead to Soar and their unified theories of cognition. [Crevier 1993, pp. 145−149, 258−263]

Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof. McCarthy responded that what people do is irrelevant: we don't need machines that think as people do; we need machines that can solve problems that people normally solve by thinking. [An early example of McCarthy's position was in the journal Science, where he said "This is AI, so we don't care if it's psychologically real" (see [http://books.google.com/books?id=PEkqAAAAMAAJ&q=%22we+don't+care+if+it's+psychologically+real%22&dq=%22we+don't+care+if+it's+psychologically+real%22&output=html&pgis=1 Science at Google Books]), and he recently reiterated his position at the AI@50 conference, where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (see [http://www.engagingexperience.com/2006/07/ai50_ai_past_pr.html McCarthy's presentation at AI@50])]

The scruffies: frames and scripts
Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that "required" a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise." [Crevier 1993, p. 175] Schank described their "anti-logic" approaches as "scruffy", as opposed to the "neat" paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon. [Neat vs. scruffy: McCorduck 2004, pp. 421−424 (who picks up the state of the debate in 1984); Crevier 1993, p. 168 (who documents Schank's original use of the term). Another aspect of the conflict was called "the procedural/declarative distinction" but did not prove to be influential in later AI research.]

In 1975, in a seminal paper, Minsky noted that many of his fellow "scruffy" researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be "logical," but these structured sets of assumptions are part of the "context" of everything we say and think. He called these structures "frames". Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English. [McCorduck 2004, pp. 305−306; Crevier 1993, pp. 170−173, 246; and Russell & Norvig 2003, p. 24. Minsky's frame paper: Minsky 1974.] Many years later object-oriented programming would adopt the essential idea of "inheritance" from AI research on frames.

Boom 1980–1987
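The frame idea described above, with its default assumptions and inheritance, can be sketched in a few lines of modern Python. The frame names and slots here are invented for illustration; they follow Minsky's bird example rather than any historical implementation.

```python
# An illustrative sketch of Minsky-style frames: slots hold default
# assumptions, and a frame inherits any slot it does not override.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        """Look up a slot, falling back to the parent frame's default."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        raise KeyError(slot)

bird = Frame("bird", flies=True, eats="worms")
penguin = Frame("penguin", parent=bird, flies=False)  # override a default

print(bird.get("flies"))     # True
print(penguin.get("flies"))  # False: defaults are assumptions, not "logical" truths
print(penguin.get("eats"))   # worms, inherited from bird
```

The override in the penguin frame is exactly the kind of non-monotonic exception that is awkward to express in classical logic, and the lookup-through-parent mechanism is the "inheritance" that object-oriented programming later adopted.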
In the 1980s a form of AI program called "expert systems" was adopted by corporations around the world, and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.

The rise of expert systems
An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach. [McCorduck 2004, pp. 327−335 (Dendral); Crevier 1993, pp. 148−159; Russell & Norvig 2003, pp. 22−23]

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be "useful": something that AI had not been able to achieve up to this point. [Crevier 1993, pp. 158−159 and Russell & Norvig 2003, pp. 23−24]

In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986. [Crevier 1993, p. 198] Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion. [McCorduck 2004, pp. 434−435; Crevier 1993, pp. 161−162, 197−203; and Russell & Norvig 2003, p. 24]

The knowledge revolution
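The rule-based architecture behind expert systems of this era can be illustrated with a toy forward-chaining engine: domain knowledge lives in if-then rules, and the engine repeatedly fires any rule whose conditions are satisfied until no new conclusions appear. The medical-flavored rules below are invented for illustration and are not taken from MYCIN or XCON.

```python
# A toy forward-chaining rule engine in the style of 1980s expert systems.
# Each rule pairs a set of conditions with a conclusion; the engine keeps
# firing applicable rules until the set of known facts stops growing.
# The rules themselves are hypothetical, for illustration only.

rules = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "order_lumbar_puncture"),
    ({"fever", "cough"}, "suspect_flu"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "stiff_neck"}, rules))
```

Because each rule only mentions a narrow domain, the engine sidesteps the commonsense knowledge problem; it also shows why such systems were "brittle": an input outside the rule set simply produces nothing.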
The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. "AI researchers were beginning to suspect—reluctantly, for it violated the scientific canon of parsimony—that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways," [McCorduck 2004, p. 299] writes Pamela McCorduck. "[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay". [McCorduck 2004, p. 421] Knowledge based systems and knowledge engineering became a major focus of AI research in the 1980s. [Knowledge revolution: McCorduck 2004, pp. 266−276, 298−300, 314, 421; Russell & Norvig 2003, pp. 22−23]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades. [Cyc: McCorduck 2004, p. 489; Crevier 1993, pp. 239−243; Russell & Norvig 2003, pp. 363−365; and Lenat & Guha 1989]

The money returns: the fifth generation project
In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth Generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings. [McCorduck 2004, pp. 436−441; Crevier 1993, p. 211; Russell & Norvig 2003, p. 24; and see also Feigenbaum & McCorduck 1983] Much to the chagrin of scruffies, they chose Prolog as the primary computer language for the project. [Crevier 1993, p. 195]

Other countries responded with new programs of their own: England began the £350 million Alvey project, and a consortium of American companies formed the Microelectronics and Computer Technology Corporation (or "MCC") to fund large scale projects in AI and information technology. [Crevier 1993, p. 240] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988. [McCorduck 2004, pp. 426−432; NRC 1999 under "Shift to Applied Research Increases Investment"]

The revival of connectionism
In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a "Hopfield net") could learn and process information in a completely new way. Around the same time, David Rumelhart popularized a new method for training neural networks called "backpropagation" (discovered years earlier by Paul Werbos). These two discoveries revived the field of connectionism, which had been largely abandoned since 1970. [Crevier 1993, pp. 214−215; Russell & Norvig 2003, p. 25]

The new field was unified and inspired by the appearance of "Parallel Distributed Processing" in 1986, a two-volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition. [Crevier 1993, pp. 215−216]

Bust: the second AI winter 1987−1993
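The Hopfield net described in the previous section can be sketched very compactly: patterns of +1/−1 values are stored as Hebbian weights, and a corrupted cue is cleaned up by repeatedly setting each unit to the sign of its weighted input. This is an illustrative toy, with a single stored pattern, not Hopfield's original presentation.

```python
# A minimal Hopfield network (illustrative sketch): store +/-1 patterns
# with Hebbian weights, then recover a stored pattern from a noisy cue.

def train_hopfield(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                      # no self-connections
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    """Repeatedly update each unit to the sign of its weighted input."""
    state = list(state)
    for _ in range(steps):
        for i in range(len(state)):
            total = sum(w[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if total >= 0 else -1
    return state

stored = [1, 1, -1, -1, 1, -1]
w = train_hopfield([stored])
noisy = [1, -1, -1, -1, 1, -1]      # one unit flipped
print(recall(w, noisy) == stored)   # True
```

The network behaves as a content-addressable memory: the stored pattern is an attractor, and nearby corrupted states fall back into it. This ability to "process information in a completely new way" is what made the 1982 result so striking.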
The business community's fascination with AI rose and fell in the 80s in the classic pattern of an economic bubble. The collapse was in the "perception" of AI by government agencies and investors; the field continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.

AI winter
The term "AI winter" was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow. [Crevier 1993, p. 203. "AI winter" was first used as the title of a seminar on the subject for the Association for the Advancement of Artificial Intelligence.]

The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power, and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight. [McCorduck 2004, p. 435; Crevier 1993, pp. 209−210]

Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts. [McCorduck 2004, p. 435 (who cites institutional reasons for their ultimate failure); Crevier 1993, pp. 204−208 (who cites the difficulty of truth maintenance, i.e., learning and updating); Lenat & Guha 1989, Introduction (who emphasize the brittleness and the inability to handle excessive qualification)]

In the late 80s, the new management of the Strategic Computing Initiative cut funding to AI "deeply and brutally" [McCorduck 2004, pp. 430−431] in favor of other projects that seemed more likely to produce immediate results.

By 1991, the impressive list of goals penned in 1981 for Japan's Fifth Generation Project had not been met. Indeed, some of them, like "carry on a casual conversation", had still not been met by 2008. [McCorduck 2004, p. 441; Crevier 1993, p. 212. McCorduck writes "Two and a half decades later, we can see that the Japanese didn't quite meet all of those ambitious goals."] As with other AI projects, expectations had run much higher than what was actually possible.
The importance of having a body: Nouvelle AI and embodied reason
In the late 80s, several researchers advocated a completely new approach to artificial intelligence, based on robotics. [McCorduck 2004, pp. 454−462] They believed that, to show real intelligence, a machine needs to have a "body": it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning, and that abstract reasoning was actually the "least" interesting or important human skill (see Moravec's paradox). They advocated building intelligence "from the bottom up." [Moravec (1988, p. 20) writes: "I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."]

The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 70s from a successful background in neurology to lead the group studying vision. He rejected all symbolic approaches ("both" McCarthy's logic and Minsky's frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. Marr's work would be cut short by leukemia in 1980. [Crevier 1993, pp. 183−190]

In a 1990 paper, [http://people.csail.mit.edu/brooks/papers/elephants.pdf Elephants Don't Play Chess], robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since "the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough." [Brooks 1990, p. 3] In the 80s and 90s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis. [See, for example, Lakoff & Turner 1999]

AI 1993−present
The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of "artificial intelligence." [McCorduck (2004, p. 424) discusses the fragmentation and the abandonment of AI's original goals.] AI was both more cautious and more successful than it had ever been.
Milestones and Moore's Law
On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. [McCorduck 2004, pp. 480−483] In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. After many years of effort, such milestones were finally achieved. These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of computers today. [Kurzweil (2005, p. 274) writes that the improvement in computer chess, "according to common wisdom, is governed only by the brute force expansion of computer hardware."] In fact, Deep Blue's computer was 10 million times faster than the Ferranti Mark I that Christopher Strachey taught to play chess in 1951. [The cycle time of the Ferranti Mark I was 1.2 milliseconds, which is arguably equivalent to about 833 flops. Deep Blue ran at 11.38 gigaflops (and this does not even take into account Deep Blue's special-purpose hardware for chess). "Very" approximately, these differ by a factor of 10^7.] Thanks to Moore's law, the fundamental problem of "raw computer power" was slowly being overcome.

Intelligent agents
A new paradigm called "intelligent agents" became widely accepted during the 90s. [McCorduck 2004, pp. 471−478; Russell & Norvig 2003, p. 55, where they write: "The whole-agent view is now widely accepted in the field". The intelligent agent paradigm is discussed in major AI textbooks, such as: Russell & Norvig 2003, pp. 32−58, 968−972; Poole, Mackworth & Goebel 1998, pp. 7−21; Luger & Stubblefield 2004, pp. 235−240] Although earlier researchers had proposed modular "divide and conquer" approaches to AI, [For example, both John Doyle (Doyle 1983) and Marvin Minsky's popular classic "The Society of Mind" (Minsky 1986) used the word "agent". Other "modular" proposals included Rodney Brooks' subsumption architecture, object-oriented programming and others.] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell and others brought concepts from decision theory and economics into the study of AI. [Russell & Norvig 2003, pp. 27, 55] When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete.

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. The most complicated intelligent agents would be rational, thinking human beings. The intelligent agent paradigm defines AI research as "the study of intelligent agents". This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence. [This is how the most widely accepted textbooks of the 21st century define artificial intelligence. See Russell & Norvig 2003, p. 32 and Poole, Mackworth & Goebel 1998, p. 1]

The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell's Soar) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents. [McCorduck 2004, p. 478]

Victory of the neats
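The agent abstraction described above, a system that maps each percept to the action with the best expected outcome, can be sketched in a few lines. The thermostat-like environment and the function names below are invented purely for illustration; real agent architectures are far richer.

```python
# A sketch of the rational-agent abstraction: given a percept, choose the
# action that maximizes expected utility. Names and the toy environment
# are hypothetical, for illustration only.

def rational_agent(percept, actions, expected_utility):
    """Choose the action that maximizes expected utility given the percept."""
    return max(actions, key=lambda a: expected_utility(percept, a))

# Toy environment: keep a room near 20 degrees.
def utility(temperature, action):
    effect = {"heat": 2, "cool": -2, "idle": 0}[action]
    return -abs((temperature + effect) - 20)

print(rational_agent(17, ["heat", "cool", "idle"], utility))  # heat
print(rational_agent(22, ["heat", "cool", "idle"], utility))  # cool
```

The appeal of the paradigm is visible even in this toy: the environment, the percepts, and the utility function are cleanly separated, so the same agent skeleton can be reused for problems from economics, control theory, or any other field that speaks the language of rational agents.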
AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past. [McCorduck 2004, pp. 486−487; Russell & Norvig 2003, pp. 25−26] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous "scientific" discipline. Russell & Norvig (2003) describe this as nothing less than a "revolution" and "the victory of the neats." [Russell & Norvig 2003, pp. 25−26] [McCorduck (2004, p. 487): "As I write, AI enjoys a Neat hegemony."]

Judea Pearl's highly influential 1988 book [Pearl 1988] brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for "computational intelligence" paradigms like neural networks and evolutionary algorithms.

AI behind the scenes
Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems, and the solutions proved to be useful throughout the technology industry, [NRC 1999 under "Artificial Intelligence in the 90s"; Kurzweil 2005, p. 264] in areas such as data mining, industrial robotics, logistics, [Russell & Norvig 2003, p. 28] speech recognition, [For the new state of the art in AI based speech recognition, see [http://www.economist.com/science/tq/displaystory.cfm?story_id=9249338 Are You Talking to Me?]] banking software, ["AI-inspired systems were already integral to many everyday technologies such as internet search engines, bank software for processing transactions and in medical diagnosis." Nick Bostrom, [http://www.cnn.com/2006/TECH/science/07/24/ai.bostrom/ AI set to exceed human brain power] CNN.com (July 26, 2006)] medical diagnosis and Google's search engine. [For the use of AI at Google, see [http://news.com.com/Googles+man+behind+the+curtain/2008-1024_3-5208228.html Google's man behind the curtain], [http://news.com.com/Google+backs+character-recognition+research/2100-1032_3-6175136.html Google backs character recognition] and [http://news.com.com/Spying+an+intelligent+search+engine/2100-1032_3-6107048.html Spying an intelligent search engine].]

The field of AI receives little or no credit for these successes. Many of AI's greatest innovations have been reduced to the status of just another item in the tool chest of computer science. [McCorduck 2004, p. 423; Kurzweil 2005, p. 265; Hofstadter 1979, p. 601]
Nick Bostrom explains: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." [[http://www.cnn.com/2006/TECH/science/07/24/ai.bostrom/ AI set to exceed human brain power] CNN.com (July 26, 2006)]

Many researchers in AI today deliberately call their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may be because they consider their field to be fundamentally different from AI, but also because the new names help to procure funding. In the commercial world at least, the failed promises of the AI Winter continue to haunt AI research, as the New York Times reported in 2005: "Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers." [John Markoff, [http://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ Behind Artificial Intelligence, a Squadron of Bright Real People], The New York Times (October 14, 2005)] [Alex Castro, [http://www.economist.com/science/tq/displaystory.cfm?story_id=9249338 Are you talking to me?] The Economist Technology Quarterly (June 7, 2007)] [Patty Tascarella, [http://www.bizjournals.com/pittsburgh/stories/2006/08/14/focus3.html?b=1155528000%5E1329573 Robotics firms find fundraising struggle, with venture capital shy]. Pittsburgh Business Times (August 11, 2006)]

Where is HAL 9000?
In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on hard science: many leading AI researchers also believed that such a machine would exist by the year 2001. [Crevier 1993, pp. 108−109]

Marvin Minsky asks "So the question is why didn't we get HAL in 2001?" [He goes on to say: "The answer is, I believe we could have ... I once went to an international conference on neural net[s]. There were 40 thousand registrants ... but ... if you had an international conference, for example, on using multiple representations for common sense reasoning, I've only been able to find 6 or 7 people in the whole world." Marvin Minsky, in [http://technetcast.ddj.com/tnc_play_stream.html?stream_id=526 It's 2001]] Minsky believes that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blames the qualification problem. [See [http://www.engagingexperience.com/2006/07/ai50_ai_past_pr.html McCarthy's presentation at AI@50]] For Ray Kurzweil, the issue is computer power and, using Moore's Law, he predicts that machines with human-level intelligence will appear by 2029. [Kurzweil 2005] Jeff Hawkins argues that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems. [Hawkins & Blakeslee 2004] There are many other explanations, and for each there is a corresponding research program underway. Alan Turing's quote from 1950 still applies in the 21st century: "We can only see a short distance ahead, but we can see that there is much to be done."

Notes
References
* Simon, H. A. and Newell, Allen (1958), "Heuristic Problem Solving: The Next Advance in Operations Research", Operations Research, vol. 6.
* Skillings, Jonathan (2006), "Newsmaker: Getting machines to think like us", http://news.cnet.com/Getting-machines-to-think-like-us---page-2/2008-11394_3-6090207-2.html?tag=st.next (retrieved October 8, 2008).
* Turing, Alan (1936−37), "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, 2nd series, vol. 42, pp. 230−265, http://www.abelard.org/turpap2/tp2-ie.asp (retrieved October 8, 2008).