Ethics of artificial intelligence
Treating AIs Ethically
There are many ethical problems associated with working to create intelligent creatures.
* AI rights: if an AI is comparable in intelligence to humans, then should it have comparable moral status?
* Would it be wrong to engineer robots that "want" to perform tasks unpleasant to humans?
* Could a computer simulate an animal or human brain in such a way that the simulation should receive the same animal rights or human rights as the actual creature?
* Under what preconditions could such a simulation be allowed to happen at all?
Robot rights
Robot rights are the equivalent of human rights applied to robots. By analogy with the dictionary definition of human rights, robot rights would be "the basic rights and freedoms to which all robots are entitled, often held to include the right to life and liberty, freedom of thought and expression, and equality before the law." [The American Heritage Dictionary of the English Language, Fourth Edition]
Legal rights of robots
With the emergence of advanced robots, there is a growing need for legal rights as well as legal responsibilities for robots. Specific and detailed laws will probably be necessary. [ [http://www.rfreitas.com/Astro/LegalRightsOfRobots.htm The Legal Rights of Robots] ]
Necessity
The time when the rights of robots must be considered may not be very far away. By 2020 there may already be a robot in every South Korean household, and very advanced robots besides. [ [http://www.the-scientist.com/2007/5/1/30/1/ The Scientist - A Robot Code of Ethics] By Glenn McGee ] Since technological progress is only accelerating, robot rights are probably a question that many people alive today will need to consider. However, even the most enthusiastic scientists admit that at least 50 years must pass before any real artificial intelligence can be spoken of. [ [http://www.yellowzeppelin.info/Robots/worried_9211.html New World Technologies - NWT - Should we be worried by the rise of robots?] ] Until then, robot rights are unlikely to be of practical significance.
Creating AIs that Behave Ethically
* Would a technological singularity be a good result or a bad one? If bad, what safeguards could be put in place, and how effective could any such safeguards be?
A major influence in the AI ethics dialogue was Isaac Asimov who, at the insistence of his editor John W. Campbell Jr., proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. Ultimately, a reading of his work suggests that no set of fixed laws can sufficiently anticipate all possible behavior of AI agents and human society. A criticism of Asimov's robot laws is that installing unalterable laws into a sentient consciousness would be a limitation of free will and therefore unethical. Consequently, Asimov's robot laws would be restricted to explicitly non-sentient machines, which could possibly not be made to understand them reliably under all circumstances.
Robot Ethics in Fiction
The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story The Planck Dive suggests a future where humanity has turned itself into software that can be duplicated and optimized, and where the relevant distinction between types of software is whether it is sentient or non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, with the best motives, created the system to give medical assistance in emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.
Over time, debates have tended to focus less and less on "possibility" and more on "desirability", as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, actually seeks to build more intelligent successors to the human species.
See also
* Friendly artificial intelligence
External links
* [http://www.shawnkilmer.com/?p=92 Research Paper: Philosophy of Consciousness and Ethics in Artificial Intelligence]
* [http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture]
* [http://news.bbc.co.uk/1/hi/sci/tech/1809769.stm BBC News: Games to take on a life of their own]
* [http://www.asimovlaws.com 3 Laws Unsafe Campaign - Asimov's Laws & I, Robot]
* [http://www.dasboot.org/thorisson.htm Who's Afraid of Robots?], an article on humanity's fear of artificial intelligence.