
Knowledge representation and reasoning

Knowledge representation and reasoning (KR) is the field of artificial intelligence (AI) devoted to representing information about the world in a form that a computer system can use to solve complex tasks, such as diagnosing a medical condition or holding a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge, in order to design formalisms that make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets. Examples of knowledge representation formalisms include semantic nets, frames, rules, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers.
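To make one of these formalisms concrete: a frame groups an entity's attributes into named slots and can inherit slot values from a parent frame. The sketch below is a minimal, hypothetical frame system written for illustration; it is not drawn from any particular KR toolkit, and the `Bird`/`Penguin` frames and slot names are invented examples.

```python
# Minimal frame-based knowledge representation: frames hold named slots
# and inherit slot values along a parent chain.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = slots

    def get(self, slot):
        # Look up a slot locally, then fall back to the parent chain.
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        raise KeyError(slot)

bird = Frame("Bird", can_fly=True, covering="feathers")
penguin = Frame("Penguin", parent=bird, can_fly=False)

print(penguin.get("can_fly"))   # local slot overrides the inherited default
print(penguin.get("covering"))  # inherited from the Bird frame
```

The default-plus-override pattern shown here is the core idea behind frame systems: general knowledge lives in parent frames, and exceptions are recorded locally.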

http://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning


What is a Knowledge Representation?
Randall Davis, MIT AI Lab; Howard Shrobe, MIT AI Lab and Symbolics, Inc.; Peter Szolovits, MIT Lab for Computer Science.

Mental image
A mental image or mental picture is the representation in a person's mind of the physical world outside of that person.[1] It is an experience that, on most occasions, significantly resembles the experience of perceiving some object, event, or scene, but occurs when the relevant object, event, or scene is not actually present to the senses.[2][3][4][5] There are sometimes episodes, particularly on falling asleep (hypnagogic imagery) and waking up (hypnopompic imagery), when the mental imagery, being of a rapid, phantasmagoric, and involuntary character, defies perception, presenting a kaleidoscopic field in which no distinct object can be discerned.[6] The nature of these experiences, what makes them possible, and their function (if any) have long been subjects of research and controversy in philosophy, psychology, cognitive science, and, more recently, neuroscience.

Plant Models
It can be challenging to derive a plant model of sufficient fidelity to be useful, one that captures the non-linear effects and system dynamics that matter for control system design. In addition to the powerful Enginuity tool for engine-controls development, SimuQuest has developed a number of other plant models (and corresponding algorithms). These plant and algorithm models, implemented in native Simulink, are commercially available and have saved clients significant development time. SimuQuest plant models have been developed and validated against dynamic measurement data to achieve sufficient fidelity for comprehensive model-based design. Scroll down to see examples of vehicle body and powertrain plant models available off the shelf.
Vehicle Body Plant Models
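To illustrate what a plant model is in the simplest possible terms (this is a generic textbook example, not a SimuQuest or Simulink model): a mass-spring-damper can be written as a two-state system and stepped forward in time, and a controller would then be designed against exactly this kind of simulated plant. The parameter values below are invented for illustration.

```python
# Minimal linear plant model: a mass-spring-damper integrated by forward Euler.
# State x = (position, velocity); input u is an external force.
m, k, c = 1.0, 4.0, 0.8   # mass, spring stiffness, damping (illustrative values)
dt = 0.001                # integration step in seconds

def step(x, u):
    pos, vel = x
    acc = (u - c * vel - k * pos) / m   # Newton's second law
    return (pos + dt * vel, vel + dt * acc)

x = (1.0, 0.0)            # released from 1 m with zero velocity
for _ in range(10_000):   # simulate 10 s with no input force
    x = step(x, 0.0)

# The damped oscillation decays toward the origin.
print(x)
```

Real plant models of the kind the excerpt describes add the non-linear effects (friction, saturation, combustion dynamics) that make validation against measurement data necessary; the structure, however, is the same: a state, an input, and a step function.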

Dependency Parsing: Recent Advances (Artificial Intelligence)
Annotated data have recently become more important, and thus more abundant, in computational linguistics. They are used as training material for machine learning systems in a wide variety of applications, from parsing to machine translation (Quirk et al., 2005). Dependency representation is preferred for many languages because linguistic and semantic information is easier to retrieve from the more direct dependency representation. Dependencies are relations defined on words or smaller units, where sentences are divided into their elements, called heads and their arguments, e.g. verbs and their objects.

Lexicon
Formally, in linguistics, a lexicon is a language's inventory of lexemes. The word "lexicon" derives from the Greek λεξικόν (lexicon), neuter of λεξικός (lexikos), meaning "of or for words".[1] Linguistic theories generally regard human languages as consisting of two parts: a lexicon, essentially a catalogue of a language's words (its wordstock), and a grammar, a system of rules which allow for the combination of those words into meaningful sentences. The lexicon is also thought to include bound morphemes, which cannot stand alone as words (such as most affixes).
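The head/argument relations described above can be made concrete with a tiny hand-built parse. The sentence, relation labels, and helper function below are invented for illustration and do not come from any parser in the excerpt:

```python
# A toy dependency parse of "She reads books": each arc records a token,
# the index of its head (-1 for the root), and the relation label.
sentence = ["She", "reads", "books"]
arcs = [
    (0, 1, "nsubj"),   # "She" is the subject of "reads"
    (1, -1, "root"),   # "reads" is the root verb
    (2, 1, "obj"),     # "books" is the object of "reads"
]

def dependents(head_index):
    """Return the words whose head is the given token."""
    return [sentence[i] for i, h, _ in arcs if h == head_index]

print(dependents(1))  # the arguments of the verb "reads"
```

This directness is why the excerpt says semantic information is easy to retrieve from a dependency representation: the verb's subject and object are one arc away, with no intervening phrase-structure nodes.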

Artificial Intelligence
Defining Artificial Intelligence: the phrase "Artificial Intelligence" was first coined by John McCarthy four decades ago. One representative definition is pivoted around comparing intelligent machines with human beings. Another definition is concerned with the performance of machines which historically have been judged to lie within the domain of intelligence.

Dual-coding theory
Dual-coding theory, a theory of cognition, was hypothesized by Allan Paivio of the University of Western Ontario in 1971. In developing this theory, Paivio used the idea that the formation of mental images aids in learning (Reed, 2010). According to Paivio, there are two ways a person could expand on learned material: verbal associations and visual imagery. Dual-coding theory postulates that both visual and verbal information is used to represent information (Sternberg, 2003). Visual and verbal information are processed differently and along distinct channels in the human mind, creating separate representations for information processed in each channel. The mental codes corresponding to these representations are used to organize incoming information that can be acted upon, stored, and retrieved for subsequent use.

Teknisk IT: Model-based systems engineering with SysML
SysML is a lightweight modelling language compared to UML. It supports the move from a document-oriented to a model-based approach; a model-based approach requires modelling concepts and tools (an alternative to the document-oriented approach). MBSE means producing and controlling a coherent system model, as opposed to a coherent set of documents. SysML was created to realize an MBSE approach based on a system model of the wanted system.

Ralph Debusmann - Extensible Dependency Grammar (XDG)
Extensible Dependency Grammar (XDG) is a general framework for dependency grammar with multiple levels of linguistic representation called dimensions, e.g. grammatical function, word order, predicate-argument structure, scope structure, information structure, and prosodic structure. It is articulated around a graph description language for multi-dimensional attributed labeled graphs. An XDG grammar is a constraint that describes the valid linguistic signs as n-dimensional attributed labeled graphs, i.e. n-tuples of graphs sharing the same set of attributed nodes but having different sets of labeled edges. All aspects of these signs are stipulated explicitly by principles: the class of models for each dimension, additional properties that they must satisfy, how one dimension must relate to another, and even lexicalization. XDG-related papers: My Papers and XDG papers by other researchers.
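The "n-tuple of graphs sharing the same set of nodes" idea can be sketched as a data structure. This is a rough illustration of the shape of an XDG-style sign, not Debusmann's actual formalism or graph description language; the sentence, dimension names, and edge labels are invented:

```python
# Sketch of a two-dimensional sign: both dimensions share one set of nodes
# (the words), but each dimension carries its own labeled edge set.
words = ["Mary", "sees", "John"]

sign = {
    # grammatical-function dimension: subject/object arcs from the verb
    "syntax": {(1, 0, "subj"), (1, 2, "obj")},
    # word-order dimension: simple precedence arcs
    "order":  {(0, 1, "before"), (1, 2, "before")},
}

def edge_labels(dimension, node):
    """Sorted labels of edges leaving `node` on one dimension."""
    return sorted(lbl for head, dep, lbl in sign[dimension] if head == node)

print(edge_labels("syntax", 1))  # relations headed by the verb "sees"
```

In full XDG, principles would constrain each edge set and the relations between dimensions; here the two dictionaries simply show how distinct labeled graphs can coexist over one shared node set.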

Expert system
An expert system is divided into two sub-systems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging capabilities.[10]

Media psychology
Media psychology seeks to understand how media, as a factor in the growing use of technology, impacts how people perceive, interpret, respond, and interact in a media-rich world. Media psychologists typically focus on identifying potential benefits and negative consequences of all forms of technology and work to promote and develop positive media use and applications.[1][2][3] The term 'media psychology' is often confusing because many people associate 'media' with mass media rather than technology. Many even have the idea that media psychology is more about appearing in the media than anything else. The 'media' in media psychology means 'mediated experience', not any single kind of media or technology. It applies to the development and use of everything from traditional media, such as print and radio, to the expanding landscape of new technologies such as virtual worlds, augmented reality, and mobile applications and interfaces.[4]
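The rule-application loop described above is called forward chaining. The following is a minimal sketch of that loop, with an invented knowledge base for illustration; it is not any particular expert-system shell:

```python
# Minimal forward-chaining inference engine: repeatedly apply any rule whose
# premises are all known facts, until no new fact can be deduced.
facts = {"has_feathers", "lays_eggs"}          # the known facts
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),  # premises -> conclusion
    ({"is_bird"}, "is_animal"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # a rule fired: record the new fact
            changed = True

print(sorted(facts))
```

Note that the second rule can only fire after the first has added "is_bird", which is why the engine loops until a full pass deduces nothing new; an explanation capability, as mentioned above, would additionally record which rule produced each fact.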
