Artificial Intelligence.
Work in AI has primarily focused on two broad areas: developing logic-based systems that perform common-sense and expert reasoning, and using cognitive and biological models to simulate and explain the information-processing capabilities of the human brain.
In general, work in AI can be categorized within three research and development types: symbolic, connectionist, and evolutionary.
Each has characteristic strengths and weaknesses.
A Symbolic AI
Symbolic AI is based in logic.
It uses sequences of rules to tell the computer what to do next.
Expert systems consist of many so-called IF-THEN rules: IF this is the case, THEN do that.
Since both sides of the rule can be defined in complex ways, rule-based programs can be very powerful.
The performance of a logic-based program need not appear “logical,” as some rules may cause it to take apparently irrational actions.
“Illogical” AI programs are not used for practical problem-solving, but are useful in modeling how humans think.
Symbolic programs are good at dealing with set problems, and at representing hierarchies (in grammar, for example, or planning).
But they are inflexible: If part of the expected input data is missing or mistaken, they may give a bad answer, or no answer at all.
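As a rough illustration (not part of the original article), the Python sketch below shows how such IF-THEN rules might be expressed and applied by simple forward chaining; the facts about a car that will not start, the rule conditions, and the threshold values are all invented for the example.

# Minimal sketch of a rule-based (symbolic AI) engine.
# The facts and rules are hypothetical; a real expert system would hold many more.
facts = {"engine_cranks": False, "battery_voltage": 11.2}

# Each rule pairs an IF side (a predicate over the facts)
# with a THEN side (a conclusion to add to the facts).
rules = [
    (lambda f: f["battery_voltage"] < 11.8,
     ("battery_low", True)),
    (lambda f: f.get("battery_low") and not f["engine_cranks"],
     ("diagnosis", "charge or replace the battery")),
]

def forward_chain(facts, rules):
    """Keep firing rules whose IF side matches until no new conclusion is added."""
    changed = True
    while changed:
        changed = False
        for condition, (key, value) in rules:
            if condition(facts) and facts.get(key) != value:
                facts[key] = value
                changed = True
    return facts

print(forward_chain(facts, rules))
# The output includes the derived facts "battery_low" and "diagnosis".

Note how such a program fails in the brittle way described above: if an expected fact such as "battery_voltage" is missing from the input, the rule's IF side raises an error rather than degrading gracefully.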
B Connectionist AI
Connectionism is inspired by the human brain.
It is closely related to computational neuroscience, which models actual brain cells and neural circuits.
Connectionist AI uses artificial neural networks made of many units working in parallel.
Each unit is connected to its neighbors by links that can raise or lower the likelihood that the neighbor unit will “fire” (excitatory and inhibitory connections, respectively).
Neural networks that are able to learn do so by changing the strengths of these links, depending on past experience.
These simple units are much less complex than real neurons.
Each can do only one thing, such as report a tiny vertical line at a particular place in an image.
What matters is not what any individual unit is doing, but the overall activity pattern of the whole network.
Consequently, connectionist systems are more flexible than symbolic AI programs.
Even if the input data is faulty, the network may give the right answer.
They are therefore good at pattern recognition, where the input patterns within a certain class need not be identical.
But connectionism is weak at doing logic, following action sequences, or representing hierarchies of goals.
What symbolic AI does well, connectionism does badly, and vice versa.
Hybrid systems combine the two, switching between them as appropriate.
And work on recurrent neural networks, where the output of one layer of units is fed back as input to some previous layer, aims to enable connectionist systems to deal with sequential action and hierarchy.
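As a deliberately tiny illustration (not part of the original article), the Python sketch below implements a single artificial unit whose links can be excitatory (positive weights) or inhibitory (negative weights), and which learns by adjusting those link strengths with a simple perceptron-style update; the toy task of learning logical OR and all the numbers are invented.

# Minimal sketch of one connectionist "unit": weighted links that are
# excitatory (positive weight) or inhibitory (negative weight), with
# learning implemented as small changes to the link strengths.

def fire(inputs, weights, threshold=0.5):
    """The unit 'fires' (outputs 1) when the weighted sum of its inputs exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Invented toy training set: two inputs, target is their logical OR.
training = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]          # initial link strengths
learning_rate = 0.1

for _ in range(20):           # repeat over past experience
    for inputs, target in training:
        error = target - fire(inputs, weights)
        # Strengthen or weaken each link in proportion to its input and the error.
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]

print(weights)                                   # learned link strengths
print([fire(x, weights) for x, _ in training])   # -> [0, 1, 1, 1]

Real connectionist systems use many such units working in parallel, but the same principle of changing link strengths in the light of experience applies.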
The emerging field of connectomics could help researchers decode the brain’s approach to information processing.
See Neurophysiology; Nervous System.
C Evolutionary AI
Evolutionary AI draws on biology.
Its programs make random changes in their own rules, and select the best daughter programs to breed the next generation.
This method develops problem-solving programs, and can evolve the “brains” and “eyes” of robots.
A practical application of evolutionary AI would be a computer model of the long-term growth of a business in which the evolution of the business is set within a simulated marketplace.
Evolutionary AI is often used in modeling artificial life (commonly known as A-Life), a spin-off from AI.
One focus of study in artificial life is self-organization, namely how order arises from something that is ordered to a lesser degree.
Biological examples include the flocking patterns of birds and the development of embryos.
Technological examples include the flocking algorithms used for computer animation.
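To make the evolutionary idea concrete, here is a minimal Python sketch (not drawn from the article): a population of candidate solutions, represented as bit strings, is repeatedly mutated at random, and the fittest candidates are selected to breed the next generation. The target pattern, the fitness measure, and all parameters are invented for illustration.

import random

# Minimal sketch of evolutionary AI: random variation plus selection.
# Candidates are bit strings; the invented "fitness" rewards matching a target pattern.

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # hypothetical goal the population evolves toward

def fitness(candidate):
    """Number of positions where the candidate matches the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Randomly flip each bit with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in candidate]

# Start from a random population and evolve for a fixed number of generations.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    # Select the fittest half as parents ...
    parents = sorted(population, key=fitness, reverse=True)[:10]
    # ... and breed the next generation from mutated copies of them.
    population = [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
print(best, fitness(best))   # typically close or equal to the target after 50 generations

In evolutionary AI proper, the candidates would be program rules or robot controllers rather than bit strings, but the loop of random variation followed by selection of the best offspring is the same.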
V PHILOSOPHICAL DEBATES OVER ARTIFICIAL INTELLIGENCE
People often ask if artificial intelligence is possible, but the question is ambiguous.
Certainly, AI programs can produce results that resemble human behavior.
Some things that most people once assumed computers could never do are now possible due to AI research.
For example, AI programs can compose aesthetically appealing music, draw attractive pictures, and even play the piano “expressively.” Other things are more elusive, such as producing perfect translations of a wide range of texts; making fundamental, yet aesthetically acceptable, transformations of musical style; or producing robots that can interact meaningfully with their surroundings.
It is controversial whether these things are merely very difficult in practice, or impossible in principle.
The larger question of whether any program or robot could really be intelligent, no matter how humanlike its performance, involves highly controversial issues in the philosophy of mind, including the importance of embodiment and the nature of intentionality and consciousness.
Some philosophers and AI researchers argue that intelligence can arise only in bodily creatures sensing and acting in the real world.
If this is correct, then robotics is essential to the attempt to construct truly intelligent artifacts.
If not, then a mere AI program might be intelligent.
British mathematician and computer scientist Alan Turing proposed what is now called the Turing Test as a way of deciding whether a machine is intelligent.
He imagined a person and a computer hidden behind a screen, communicating by electronic means.
If we cannot tell which one is the human, we have no reason to deny that the machine is thinking.
That is, a purely behavioral test is adequate for identifying intelligence (and consciousness).
American philosopher John Searle has expressed a different view.
He admits that a program might produce replies identical to those of a person, and that a programmed robot might behave exactly like a human.
But he argues that a program cannot understand anything it says.
It is not actually saying or asserting anything at all, but merely outputting meaningless symbols that it has manipulated according to purely formal rules—in other words, all syntax and no semantics.
Searle asserts that human brains can ascribe meaning to symbols, thus deriving understanding, whereas metal and silicon cannot.
No consensus exists in either AI or philosophy as to whose theory, Turing’s or Searle’s, is right.
Whether an AI system could be conscious is an especially controversial topic.
The concept of consciousness itself is ill-understood, both scientifically and philosophically.
Some would argue that any robot, no matter how superficially humanlike, would never possess the consciousness or sentience of a living being.
But others would argue that a robot whose functions matched the relevant functions of the brain (whatever those may be) would inevitably be conscious.
The answer has moral implications: If an AI system were conscious, it would arguably be wrong to “kill” it, or even to use it as a “slave.” See also States of Consciousness.
VI THE FUTURE OF ARTIFICIAL INTELLIGENCE
Building intelligent systems—and ultimately, automating intelligence—remains a daunting task, and one that may take decades to fully realize.
AI research is currently focused on addressing existing shortcomings, such as the ability of AI systems to converse in natural language and to perceive and respond to their environment.
However, the search for AI has grown into a field with far-reaching applications, many of which are considered indispensable and are already taken for granted.
Nearly all industrial, governmental, and consumer applications are likely to utilize AI capabilities in the future.
Microsoft ® Encarta ® 2009. © 1993-2008 Microsoft Corporation. All rights reserved.