Dossier: artificial intelligence (AI), from weak AI to strong AI
By Jean-Paul Baquiast and Christophe Jacquemin, 9 July 2008
Artificial intelligence (referred to here as AI) developed rapidly, mainly in the United States, in the 1960s and 1970s, in step with the appearance of the first scientific computers. Today we also see the development of an AI that aims to reproduce as many as possible of the functions and performances of animal and human brains. In practice, these strong AIs are coupled with robots, on which they confer increasingly marked autonomy. Let us propose our own definition of AI: it aims to simulate on computers and electronic networks, through computer programs, a number of the cognitive behaviors, or ways of thinking, of animal and human brains. That, indeed, is what is now happening with AI.
1. Expert systems
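To make the idea of an expert system concrete, here is a minimal sketch of the naive forward-chaining inference that 1970s expert systems popularized. The rules and facts are invented for illustration; real systems such as MYCIN used far richer rule languages.

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all satisfied
    until no new facts can be derived (naive forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)  # rule fires: assert its conclusion
                changed = True
    return facts

# Hypothetical toy rule base (premises, conclusion)
RULES = [
    (("has_fever", "has_cough"), "possible_flu"),
    (("possible_flu", "fatigue"), "recommend_rest"),
]

derived = forward_chain({"has_fever", "has_cough", "fatigue"}, RULES)
print(sorted(derived))
```

Note that chaining is what distinguishes this from a flat lookup: the second rule can only fire because the first rule derived "possible_flu".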
Outline of artificial intelligence
The following outline is provided as an overview of and topical guide to artificial intelligence:
Artificial intelligence (AI): the branch of computer science that deals with intelligent behavior, learning, and adaptation in machines. Research in AI is concerned with producing machines that automate tasks requiring intelligent behavior.
Branches of artificial intelligence
Some applications of artificial intelligence
Philosophy of artificial intelligence
Artificial intelligence and the future
Strong AI: hypothetical artificial intelligence that matches or exceeds human intelligence; the intelligence of a machine that could successfully perform any intellectual task that a human being can.
History of artificial intelligence (main article: History of artificial intelligence)
Artificial intelligence in fiction (main article: Artificial intelligence in fiction)
Psychology and AI
Concepts in artificial intelligence
Knowledge engineering
Knowledge engineering (KE) was defined in 1983 by Edward Feigenbaum and Pamela McCorduck as follows: KE is an engineering discipline that involves integrating knowledge into computer systems in order to solve complex problems normally requiring a high level of human expertise. It is used in many computer science domains, including artificial intelligence, databases, data mining, bioinformatics, expert systems, decision support systems, and geographic information systems.
The development of a knowledge-based system involves several KE activities:
- Assessment of the problem
- Development of a knowledge-based system shell/structure
- Acquisition and structuring of the related information, knowledge and specific preferences (IPK model)
- Implementation of the structured knowledge into knowledge bases
- Testing and validation of the inserted knowledge
- Integration and maintenance of the system
- Revision and evaluation of the system
Knowledge engineering principles
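Two of the activities listed above, implementing structured knowledge into a knowledge base and then testing/validating the inserted knowledge, can be sketched in a few lines. The schema and entries here are illustrative assumptions, not a real KE toolchain.

```python
# Hypothetical attribute schema the knowledge engineer has agreed on
KNOWN_ATTRIBUTES = {"species", "habitat", "diet"}

# Structured knowledge implemented as a list of attribute-value entries
knowledge_base = [
    {"species": "emperor penguin", "habitat": "antarctic", "diet": "fish"},
    {"species": "fennec fox", "habitat": "desert", "diet": "omnivore"},
]

def validate(kb, schema):
    """Validation pass over inserted knowledge: every entry must use
    only attributes from the schema, and must supply all of them."""
    errors = []
    for i, entry in enumerate(kb):
        unknown = set(entry) - schema
        missing = schema - set(entry)
        if unknown:
            errors.append((i, "unknown attributes", sorted(unknown)))
        if missing:
            errors.append((i, "missing attributes", sorted(missing)))
    return errors

print(validate(knowledge_base, KNOWN_ATTRIBUTES))  # empty list when consistent
```

Running such a check after every acquisition round is one simple way to keep the "testing and validation" step from being an afterthought.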
Some Thoughts On The Future Of Siri [Opinion]
We've seen the first rash of iPhone 4S reviews coming in, and they all agree on one thing: Siri is very impressive. It works because it does several things at once. It understands what you're saying, irrespective of your accent and without a lot of initial training. And it understands what you mean, because it has the built-in smarts to know that if you say "Tell my wife I'm running late," you mean "Send a text message to this particular contact with text that says I'm running late." But this is just the start for Siri (which Apple has acknowledged by calling it a beta). First, let's think about new features it might support on the iPhone 4S. The first, most immediate improvements we can expect are a broadening of the databases Siri knows about and can query. Then there's the possibility of opening Siri up to other applications on the device. How about in games? Finally, expansion beyond the iPhone 4S? I suspect yes, but for different purposes. Over to you.
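The "Tell my wife I'm running late" example above can be sketched as a toy intent parser: map an utterance to a structured action plus resolved recipient. The pattern, contact aliases, and intent name are invented for illustration; Siri's actual natural-language understanding is far more sophisticated than a regular expression.

```python
import re

# Hypothetical contact aliases, standing in for the user's address book
CONTACTS = {"my wife": "Anna"}

# One illustrative pattern: "tell <who> <message starting with i'm>"
INTENT_PATTERNS = [
    (re.compile(r"tell (?P<who>.+?) (?P<message>i'm .+)", re.I), "send_text"),
]

def parse(utterance):
    """Return a structured action for a recognized utterance,
    or an 'unknown' intent otherwise."""
    for pattern, intent in INTENT_PATTERNS:
        m = pattern.match(utterance)
        if m:
            who = m.group("who").lower()
            return {
                "intent": intent,
                "recipient": CONTACTS.get(who, who),  # resolve alias
                "message": m.group("message"),
            }
    return {"intent": "unknown"}

print(parse("Tell my wife I'm running late"))
```

The interesting part, as the article notes, is the resolution step: "my wife" is not a literal recipient but an alias the assistant must ground in the user's own data.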
Artificial consciousness
Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics whose aim is to "define that which would have to be synthesized were consciousness to be found in an engineered artifact" (Aleksander 1995). Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC. Proponents of AC believe it is possible to construct machines (e.g., computer systems) that can emulate this NCC interoperation. Artificial consciousness can be viewed as an extension of artificial intelligence, assuming that the notion of intelligence in its commonly used sense is too narrow to include all aspects of consciousness.
Philosophical views of artificial consciousness
As there are many designations of consciousness, there are many potential types of AC.
Awareness
Learning
Redwood Center for Theoretical Neuroscience
Artificial intelligence
AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems. The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception, and the ability to move and manipulate objects. General intelligence is still among the field's long-term goals. Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), "can be so precisely described that a machine can be made to simulate it".
History
AI effect
The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence. Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." AI researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"
AI is whatever hasn't been done yet
As soon as AI successfully solves a problem, the problem is no longer a part of AI. Douglas Hofstadter expresses the AI effect concisely by quoting Tesler's Theorem: "AI is whatever hasn't been done yet." When problems have not yet been formalised, they can still be characterised by a model of computation that includes human computation.
AI applications become mainstream
Developing an Outline
Summary: This resource describes why outlines are useful, what types of outlines exist, suggestions for developing effective outlines, and how outlines can be used as an invention strategy for writing.
Contributors: Elyssa Tardiff, Allen Brizee
Last Edited: 2013-03-01 09:20:56
Ideally, you should follow the four suggestions presented here to create an effective outline. When creating a topic outline, follow these two rules for capitalization: for first-level heads, present the information using all upper-case letters; for secondary and tertiary items, use upper- and lower-case letters.
Parallelism—How do I accomplish this? Each heading and subheading should preserve parallel structure. ("Choose" and "Prepare" are both verbs.)
Coordination—How do I accomplish this? All the information contained in Heading 1 should have the same significance as the information contained in Heading 2. (Campus and web site visits are equally significant.)
Subordination—How do I accomplish this?
Noam Chomsky on Where Artificial Intelligence Went Wrong, by Yarden Katz
An extended conversation with the legendary linguist.
If one were to rank a list of civilization's greatest and most elusive intellectual challenges, the problem of "decoding" ourselves -- understanding the inner workings of our minds and our brains, and how the architecture of these elements is encoded in our genome -- would surely be at the top. In 1956, the computer scientist John McCarthy coined the term "artificial intelligence" (AI) to describe the study of intelligence by implementing its essential features on a computer. Some of McCarthy's colleagues in neighboring departments, however, were more interested in how intelligence is implemented in humans (and other animals) first. The behaviorist B. F. Skinner's approach stressed the historical associations between a stimulus and the animal's response -- an approach easily framed as a kind of empirical statistical analysis, predicting the future as a function of the past. Noam Chomsky, speaking in the symposium, wasn't so enthused.
On Intelligence - Welcome
[Figure: artificial intelligence diagram]