
Knowledge representation and reasoning

Knowledge representation and reasoning (KR) is the field of artificial intelligence (AI) devoted to representing information about the world in a form that a computer system can use to solve complex tasks, such as diagnosing a medical condition or holding a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge, in order to design formalisms that make complex systems easier to design and build. It also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets. Examples of knowledge representation formalisms include semantic nets, frames, rules, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers.
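To make the formalisms above concrete, here is a minimal sketch of one of them, a semantic net: concepts linked by labeled relations, with property lookup inheriting along "is-a" links. All concept and relation names are illustrative, not from any published system.

```python
# A semantic net as a mapping from (concept, relation) pairs to values.
net = {
    ("canary", "is-a"): "bird",
    ("bird", "is-a"): "animal",
    ("bird", "can"): "fly",
    ("canary", "color"): "yellow",
}

def lookup(concept, relation):
    """Find a property of a concept, inheriting up the is-a chain if needed."""
    while concept is not None:
        if (concept, relation) in net:
            return net[(concept, relation)]
        concept = net.get((concept, "is-a"))  # climb to the parent concept
    return None

print(lookup("canary", "can"))    # inherited from "bird"
print(lookup("canary", "color"))  # stored directly on "canary"
```

The inheritance step in `lookup` is a tiny example of the automated reasoning the text mentions: a fact never stated explicitly ("a canary can fly") is deduced from the net's structure.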

What is a Knowledge Representation? Randall Davis (MIT AI Lab), Howard Shrobe (MIT AI Lab and Symbolics, Inc.), Peter Szolovits (MIT Lab for Computer Science). This paper appeared as R. Davis, H. Shrobe, and P. Szolovits, "What is a Knowledge Representation?", AI Magazine, 14(1), 1993. Abstract: Although knowledge representation is one of the central and in some ways most familiar concepts in AI, the most fundamental question about it--What is it?--has rarely been answered directly. A knowledge representation (KR) is most fundamentally a surrogate, a substitute for the thing itself, used to enable an entity to determine consequences by thinking rather than acting, i.e., by reasoning about the world rather than taking action in it. Understanding the roles a representation plays, and acknowledging their diversity, has several useful consequences: we believe the roles provide a framework useful for characterizing a wide variety of representations, and that viewing representations in this way has consequences for both research and practice.

Mental image A mental image or mental picture is the representation in a person's mind of the physical world outside of that person.[1] It is an experience that, on most occasions, significantly resembles the experience of perceiving some object, event, or scene, but occurs when the relevant object, event, or scene is not actually present to the senses.[2][3][4][5] There are sometimes episodes, particularly on falling asleep (hypnagogic imagery) and waking up (hypnopompic imagery), when the mental imagery, being of a rapid, phantasmagoric and involuntary character, defies perception, presenting a kaleidoscopic field in which no distinct object can be discerned.[6] The nature of these experiences, what makes them possible, and their function (if any) have long been subjects of research and controversy in philosophy, psychology, cognitive science, and, more recently, neuroscience.

State of the art The term "state of the art" refers to the highest level of general development, as of a device, technique, or scientific field achieved at a particular time. It also refers to the level of development (as of a device, procedure, process, technique, or science) reached at any particular time as a result of the common methodologies employed. The term has been used since 1910, and has become both a common term in advertising and marketing, and a legally significant phrase with respect to both patent law and tort liability. In advertising, the phrase is often used to convey that a product is made with the best possible technology, but it has been noted that "the term 'state of the art' requires little proof on the part of advertisers", as it is considered mere puffery.[1] The use of the term in patent law, by contrast, "does not connote even superiority, let alone the superlative quality the ad writers would have us ascribe to the term".[2]

A picture is worth a thousand words The expression "Use a picture. It's worth a thousand words." appears in a 1911 newspaper article quoting newspaper editor Arthur Brisbane discussing journalism and publicity.[1] A similar phrase, "One Look Is Worth A Thousand Words", appears in a 1913 newspaper advertisement for the Piqua Auto Supply House of Piqua, Ohio.[2] An early use of the exact phrase appears in a 1918 newspaper advertisement for the San Antonio Light, which says: One of the Nation's Greatest Editors Says: One Picture is Worth a Thousand Words The San Antonio Light's Pictorial Magazine of the War Exemplifies the truth of the above statement--judging from the warm reception it has received at the hands of the Sunday Light readers.[3] It is believed by some that the modern use of the phrase stems from an article by Fred R. Barnard. Another ad by Barnard appears in the March 10, 1927 issue with the phrase "One Picture Worth Ten Thousand Words," where it is labeled a Chinese proverb (一圖勝萬言).

Semantic Web Stack The Semantic Web Stack, also known as the Semantic Web Cake or Semantic Web Layer Cake, illustrates the architecture of the Semantic Web. It depicts a hierarchy of languages in which each layer exploits and uses the capabilities of the layers below, and shows how the technologies standardized for the Semantic Web are organized to make the Semantic Web possible. The illustration was created by Tim Berners-Lee.[1] The stack is still evolving as the layers are concretized.[2][3] The bottom layers contain technologies that are well known from the hypertext web and that, without change, provide a basis for the semantic web. The Internationalized Resource Identifier (IRI), a generalization of the URI, provides a means of uniquely identifying semantic web resources.
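The role IRIs play at the bottom of the stack can be sketched without any Semantic Web library: data is stored as subject-predicate-object triples whose components are globally unique identifiers. The `example.org` IRIs below are illustrative placeholders, not published vocabularies; only the `rdf:type` IRI is a real standard identifier.

```python
# RDF-style triples using IRIs as global identifiers (minimal sketch).
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

triples = {
    ("http://example.org/people#alice", RDF_TYPE,
     "http://example.org/ontology#Person"),
    ("http://example.org/people#alice", "http://example.org/ontology#knows",
     "http://example.org/people#bob"),
}

def objects(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("http://example.org/people#alice", RDF_TYPE))
```

Because the identifiers are IRIs rather than local names, triples published by different parties can be merged into one graph without name clashes, which is the point of that layer of the stack.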

Artificial Intelligence Defining Artificial Intelligence The phrase "Artificial Intelligence" was first coined by John McCarthy four decades ago. One representative definition pivots on comparing intelligent machines with human beings. Yet none of these definitions has been universally accepted, probably because the word "intelligence" refers to a quality that cannot be measured directly. With all this, a common question arises: do rational thinking and acting include all characteristics of an intelligent system? If so, how do they account for behavioral intelligence such as learning, perception and planning? If we think a little, a system capable of reasoning would be a successful planner. From all this we may conclude that a machine that lacks perception cannot learn, and therefore cannot acquire knowledge. General Problem Solving Approaches in AI To understand the practical meaning of "artificial intelligence" we must illustrate some common problems, such as those formulated as state-space search. Algorithm for solving state-space problems
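The original listing for the state-space algorithm is not preserved in this excerpt, so here is a hypothetical reconstruction of the generic approach it names: states, a goal test, and a successor function, explored breadth-first.

```python
from collections import deque

def solve_state_space(start, is_goal, successors):
    """Breadth-first search over a state space.

    Returns the sequence of states from `start` to the first goal state
    found, or None if the reachable space contains no goal.
    """
    frontier = deque([[start]])   # queue of paths, each ending in a state
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:  # avoid revisiting states
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy problem: reach 5 from 0, moving by +1 or +2 at each step.
path = solve_state_space(0, lambda s: s == 5, lambda s: [s + 1, s + 2])
print(path)
```

Breadth-first exploration guarantees the returned path uses the fewest moves; the toy problem above needs three.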

Dual-coding theory Dual-coding theory, a theory of cognition, was hypothesized by Allan Paivio of the University of Western Ontario in 1971. In developing this theory, Paivio used the idea that the formation of mental images aids learning (Reed, 2010). According to Paivio, there are two ways a person can expand on learned material: verbal associations and visual imagery. Dual-coding theory postulates that both visual and verbal information are used to represent information (Sternberg, 2003). Visual and verbal information are processed differently and along distinct channels in the human mind, creating separate representations for the information processed in each channel. There are limitations to the dual-coding theory. Analogue codes are used to mentally represent images; symbolic codes are used for mental representations of words.

The Theory of Knowledge "It is the customary fate of new truths to begin as heresies and to end as superstitions." (T. H. Huxley) The basic assumption underlying all science and rational thought in general is that the physical world exists, and that it is possible to understand the laws governing objective reality. "Indeed, it's hard to imagine how science could exist if they didn't." The same is true of the human race in general. It is the illusion of every epoch that it represents the ultimate peak of all human achievements and wisdom. The history of science shows how economical the human mind is. The development of science proceeds through an infinite series of successive approximations. "The first discoveries were the realisation that each change of scale brought new phenomena and new kinds of behaviour." Should we therefore despair of ever achieving the whole truth?

Sociology of knowledge The sociology of knowledge is the study of the relationship between human thought and the social context within which it arises, and of the effects prevailing ideas have on societies. It is not a specialized area of sociology but instead deals with broad fundamental questions about the extent and limits of social influences on individuals' lives and the socio-cultural basis of our knowledge about the world.[1] Complementary to the sociology of knowledge is the sociology of ignorance,[2] including the study of nescience, ignorance, knowledge gaps or non-knowledge as inherent features of knowledge making.[3][4][5] The sociology of knowledge was pioneered primarily by the sociologists Émile Durkheim and Marcel Mauss at the end of the 19th and beginning of the 20th centuries. Their works deal directly with how conceptual thought, language, and logic could be influenced by the sociological milieu out of which they arise.

Ontology (information science) In computer science and information science, an ontology formally represents knowledge as a hierarchy of concepts within a domain, using a shared vocabulary to denote the types, properties and interrelationships of those concepts.[1][2] Ontologies are structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture as a form of knowledge representation about the world or some part of it. The creation of domain ontologies is also fundamental to the definition and use of an enterprise architecture framework. The term ontology has its origin in philosophy and has been applied in many different ways. The word element onto- comes from the Greek ὤν, ὄντος ("being", "that which is"), present participle of the verb εἰμί ("be"). According to Gruber (1993), an ontology is "an explicit specification of a conceptualization". Common components of ontologies include individuals, classes, attributes and relations.
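The "hierarchy of concepts" at the heart of this definition can be sketched as a subclass relation plus a subsumption (is-a) check. The class names below are an illustrative toy taxonomy, not a published ontology.

```python
# A concept hierarchy as a child -> parent mapping (single inheritance).
subclass_of = {
    "Dog": "Mammal",
    "Cat": "Mammal",
    "Mammal": "Animal",
    "Animal": "Thing",
}

def is_a(concept, ancestor):
    """True if `concept` equals `ancestor` or lies below it in the hierarchy."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = subclass_of.get(concept)  # walk up to the parent class
    return False

print(is_a("Dog", "Animal"))
```

Subsumption queries like this (is every Dog an Animal?) are the basic service that the classifiers mentioned earlier in this page provide over much richer ontology languages.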

Expert system An expert system is divided into two sub-systems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging capabilities.[10] Edward Feigenbaum in a 1977 paper said that the key insight of early expert systems was that "intelligent systems derive their power from the knowledge they possess rather than from the specific formalisms and inference schemes they use" (as paraphrased by Hayes-Roth et al.). Expert systems were introduced by the Stanford Heuristic Programming Project led by Feigenbaum, who is sometimes referred to as the "father of expert systems". In addition to Feigenbaum, key early contributors were Bruce Buchanan, Edward Shortliffe, Randall Davis, William vanMelle, and Carli Scott. In the 1980s, expert systems proliferated. A typical rule might be written as R1: Man(x) => Mortal(x).
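The knowledge-base/inference-engine split can be sketched with a minimal forward-chaining loop. The rule R1: Man(x) => Mortal(x) is taken from the text; the fact about Socrates is an illustrative addition.

```python
# Knowledge base: facts as (predicate, argument) pairs.
facts = {("Man", "Socrates")}

# Rules map an antecedent predicate to a consequent predicate.
rules = [("Man", "Mortal")]  # R1: Man(x) => Mortal(x)

def forward_chain(facts, rules):
    """Inference engine: apply rules to known facts until nothing new follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            for pred, arg in list(derived):
                if pred == antecedent and (consequent, arg) not in derived:
                    derived.add((consequent, arg))  # deduce a new fact
                    changed = True
    return derived

print(forward_chain(facts, rules))
```

Note that the engine is entirely generic: all domain knowledge lives in `facts` and `rules`, which is exactly the property Feigenbaum's observation attributes the power of expert systems to.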
