
IBM Watson

Read the Web :: Carnegie Mellon University. Can computers learn to read? We think so. "Read the Web" is a research project that attempts to create a computer system that learns over time to read the web. Since January 2010, our computer system called NELL (Never-Ending Language Learner) has been running continuously, attempting to perform two tasks each day: first, to "read," or extract facts from text found in hundreds of millions of web pages (e.g., playsInstrument(George_Harrison, guitar)); and second, to learn to read better than it did the day before. So far, NELL has accumulated over 50 million candidate beliefs by reading the web, and it holds these at differing levels of confidence.
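NELL's output, as described above, amounts to relational triples scored by confidence. The following minimal Python sketch shows that representation; the class, the scores, and the promotion threshold are illustrative assumptions, not NELL's actual code:

from dataclasses import dataclass

@dataclass
class CandidateBelief:
    # One extracted fact: a relational triple plus a confidence score.
    relation: str      # e.g. "playsInstrument"
    subject: str       # e.g. "George_Harrison"
    value: str         # e.g. "guitar"
    confidence: float  # beliefs are held at different levels of confidence

# A tiny knowledge base of candidate beliefs, in the spirit of NELL's output.
kb = [
    CandidateBelief("playsInstrument", "George_Harrison", "guitar", 0.93),
    CandidateBelief("playsInstrument", "George_Harrison", "sitar", 0.71),
]

# Promote only high-confidence candidates to accepted beliefs.
accepted = [b for b in kb if b.confidence >= 0.9]
print(accepted)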

Case-Based Reasoning. Case-based reasoning is one of the fastest-growing areas in the field of knowledge-based systems, and this book, authored by a leader in the field, is the first comprehensive text on the subject. Case-based reasoning systems store information about past situations in their memory. As new problems arise, similar situations are retrieved to help solve them. Problems are understood and inferences are made by finding the closest cases in memory, comparing and contrasting the problem with those cases, making inferences based on those comparisons, and asking questions when inferences cannot be made. The book presents the state of the art in case-based reasoning and is an excellent text for courses and tutorials on the subject.
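As a rough illustration of the retrieve-and-reuse cycle described above, here is a minimal Python sketch; the feature-overlap similarity measure and the case format are assumptions made for the example, not taken from the book:

def similarity(problem, case_features):
    # Crude similarity: fraction of shared features with equal values.
    shared = set(problem) & set(case_features)
    if not shared:
        return 0.0
    return sum(problem[f] == case_features[f] for f in shared) / len(shared)

def retrieve_closest(problem, case_memory, k=1):
    # Return the k stored cases most similar to the new problem.
    return sorted(case_memory,
                  key=lambda c: similarity(problem, c["features"]),
                  reverse=True)[:k]

case_memory = [
    {"features": {"device": "printer", "symptom": "paper_jam"},
     "solution": "clear tray 2"},
    {"features": {"device": "printer", "symptom": "no_power"},
     "solution": "check the fuse"},
]

problem = {"device": "printer", "symptom": "paper_jam"}
best = retrieve_closest(problem, case_memory)[0]
print(best["solution"])  # reuse (and, in a real system, adapt) the old solution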

Conceptual dependency theory. Conceptual dependency theory is a model of natural language understanding used in artificial intelligence systems. Roger Schank at Stanford University introduced the model in 1969, in the early days of artificial intelligence.[1] The model was used extensively by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner. Schank developed the model to represent knowledge for natural language input into computers. The model uses the following basic representational tokens:[3] real-world objects, each with some attributes; real-world actions, each with attributes; times; and locations. A set of conceptual transitions then acts on this representation: e.g., an ATRANS is used to represent a transfer such as "give" or "take", while a PTRANS is used to act on locations such as "move" or "go". A sentence such as "John gave a book to Mary" is then represented as the action of an ATRANS on two real-world objects, John and Mary.
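To make that last point concrete, here is a small sketch of how the ATRANS for "John gave a book to Mary" might be encoded as a data structure; the dictionary layout is an assumption for illustration, and Schank's actual notation is considerably richer:

# "John gave a book to Mary": ATRANS is a transfer of possession
# of the object from the donor to the recipient.
gave = {
    "primitive": "ATRANS",   # abstract transfer ("give", "take", ...)
    "actor": "John",
    "object": "book",
    "from": "John",
    "to": "Mary",
    "time": "past",
}

# "John went home" would instead use PTRANS, a change of location.
went_home = {
    "primitive": "PTRANS",
    "actor": "John",
    "object": "John",
    "to": "home",
    "time": "past",
}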

Direct Memory Access Parsing (DMAP). A Direct Memory Access Parser reads text and identifies the concepts in memory that the text refers to. It does this by matching phrasal patterns attached to those concepts (mops). For example, suppose we wanted to read texts about economic arguments, as given by people such as Milton Friedman and Lester Thurow. The first thing we have to do is define concepts for those arguments, those economists, and for the event of economists presenting arguments. Next we have to attach to these concepts the phrases that are used to refer to them. More complex concepts, such as a change in an economic variable, or a communication about an event, require phrasal patterns: for example, the concept m-change-event has the role :variable, which can be filled by any m-variable, such as m-interest-rates. From the Friedman example, we can see the kinds of recognition events we want to occur; output is obtained from DMAP using the with-monitors macro.
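DMAP itself is written in Lisp; the following Python sketch captures only the core recognition idea, matching phrasal patterns whose abstract slots are filled by recognized sub-phrases. The m-interest-rates / m-change-event names echo the example above; the pattern data and the matching strategy are simplifying assumptions:

# Concepts with attached phrasal patterns. Pattern items are either literal
# words or concept names; a concept name is a slot filled by any sub-phrase
# that refers to a specialization of that concept.
concepts = {
    "m-interest-rates": {"patterns": [["interest", "rates"]], "isa": "m-variable"},
    "m-change-event": {"patterns": [["m-variable", "rose"]], "isa": "m-event"},
}

def isa(concept, abstraction):
    # Walk the abstraction chain upward from concept.
    while concept:
        if concept == abstraction:
            return True
        concept = concepts.get(concept, {}).get("isa")
    return False

def match(pattern, tokens):
    if not pattern:
        return not tokens
    head, rest = pattern[0], pattern[1:]
    if any(isa(c, head) for c in concepts):  # head is an abstract slot
        return any(recognize(tokens[:i], head) and match(rest, tokens[i:])
                   for i in range(1, len(tokens) + 1))
    return bool(tokens) and tokens[0] == head and match(rest, tokens[1:])

def recognize(tokens, target):
    # tokens refer to target if some specialization of target has a
    # phrasal pattern that matches them.
    return any(isa(c, target) and any(match(p, tokens) for p in concepts[c]["patterns"])
               for c in concepts)

print(recognize(["interest", "rates", "rose"], "m-event"))  # True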

Universal Networking Language. Universal Networking Language (UNL) is a declarative formal language specifically designed to represent semantic data extracted from natural language texts. It can be used as a pivot language in interlingual machine translation systems or as a knowledge representation language in information retrieval applications. In UNL, the information conveyed by natural language is represented sentence by sentence as a hypergraph composed of a set of directed binary labeled links between nodes or hypernodes. As an example, the English sentence "The sky was blue?!" can be represented in UNL as follows:

aoj(blue(icl>color).@past.@exclamation.@interrogative.@entry, sky(icl>natural world).@def)

In the example above, sky(icl>natural world) and blue(icl>color), which represent individual concepts, are UWs (Universal Words); "aoj" (= attribute of an object) is a directed semantic relation linking the two UWs; and "@def", "@interrogative", "@past", "@exclamation" and "@entry" are attributes modifying UWs. UWs are expressed in natural language so as to be humanly readable.

Universal Networking Language (UNL). Universal Networking Language (UNL) is an interlingua developed by the UNDL Foundation. UNL takes the form of a semantic network used to represent and exchange information; concepts and relations encapsulate the meaning of sentences. In UNL, a sentence can be considered a hypergraph in which each node is a concept and the links or arcs represent the relations between the concepts. UNL consists of Universal Words (UWs), relations, attributes, and a knowledge base. Universal Words are UNL words that carry knowledge or concepts. Examples: bucket(icl>container), water(icl>liquid). Relations are labelled arcs that connect nodes (UWs) in the UNL graph. Example: agt(break(agt>thing,obj>thing), John(iof>person)). Attributes are annotations used to represent grammatical categories, mood, aspect, etc. Example: work(agt>human). The UNL Knowledge Base contains entries that define the possible binary relations between UWs.
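A small Python sketch may help fix the UW / relation / attribute vocabulary; the UW class and the sentence "John broke a bucket" are illustrative assumptions, reusing the UW spellings from the examples above:

class UW:
    # A Universal Word: a headword, an optional restriction such as
    # "icl>container", and a list of attributes such as "@past".
    def __init__(self, headword, restriction=None, attributes=()):
        self.headword = headword
        self.restriction = restriction
        self.attributes = list(attributes)

    def __repr__(self):
        r = f"({self.restriction})" if self.restriction else ""
        return self.headword + r + "".join("." + a for a in self.attributes)

# "John broke a bucket": agt links the action to its agent, obj to its object.
broke = UW("break", "agt>thing,obj>thing", ["@past", "@entry"])
john = UW("John", "iof>person")
bucket = UW("bucket", "icl>container")

graph = [("agt", broke, john), ("obj", broke, bucket)]
for relation, source, target in graph:
    print(f"{relation}({source}, {target})")
    # -> agt(break(agt>thing,obj>thing).@past.@entry, John(iof>person))
    # -> obj(break(agt>thing,obj>thing).@past.@entry, bucket(icl>container))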

In-Depth Understanding. This book describes a theory of memory representation, organization, and processing for understanding complex narrative texts. The theory is implemented as a computer program called BORIS, which reads and answers questions about divorce, legal disputes, personal favors, and the like. The system is unique in attempting to understand stories involving emotions and in being able to deduce adages and morals, in addition to answering fact- and event-based questions about the narratives it has read. In-Depth Understanding is included in The MIT Press Artificial Intelligence Series.

Great Books of the Western World. Great Books of the Western World is a series of books originally published in the United States in 1952 by Encyclopædia Britannica, Inc., to present the Great Books in a 54-volume set. The original editors had three criteria for including a book in the series: the book must be relevant to contemporary matters, and not only important in its historical context; it must be rewarding to re-read; and it must be a part of "the great conversation about the great ideas", relevant to at least 25 of the 102 great ideas identified by the editors. The books were not chosen on the basis of ethnic and cultural inclusiveness, historical influence, or the editors' agreement with the views expressed by the authors.[1] A second edition was published in 1990 in 60 volumes. Some translations were updated, some works were removed, and there were significant additions from the 20th century.

A Syntopicon: An Index to The Great Ideas. A Syntopicon: An Index to The Great Ideas (1952) is a two-volume index, published as volumes 2 and 3 of Encyclopædia Britannica's collection Great Books of the Western World. Compiled by Mortimer Adler, an American philosopher, under the guidance of Robert Hutchins, president of the University of Chicago, the volumes were billed as a collection of the 102 great ideas of the western canon. The term "syntopicon" was coined specifically for this undertaking, meaning "a collection of topics."[1] The volumes catalogued what Adler and his team deemed to be the fundamental ideas contained in the works of the Great Books of the Western World, which stretched chronologically from Homer to Freud. The Syntopicon lists, under each idea, where every occurrence of the concept can be located in the collection's famous works. The Syntopicon was created to set the Great Books collection apart from previously published sets such as the Harvard Classics.

Great Conversation. The Great Conversation is the ongoing process of writers and thinkers referencing, building on, and refining the work of their predecessors. This process is characterized by writers in the Western canon making comparisons and allusions to the works of earlier writers and thinkers. As such, it is a name used in the promotion of the Great Books of the Western World, published by Encyclopædia Britannica Inc. in 1952. It is also the title of (i) the first volume of the first edition of this set of books,[1][2] written by the educational theorist Robert Maynard Hutchins, and (ii) an accessory volume to the second edition (1990), written by the philosopher Mortimer J. Adler. According to Hutchins, "The tradition of the West is embodied in the Great Conversation that began in the dawn of history and that continues to the present day".[3]

Noet.com: The Great Conversation, by Robert Hutchins
