March 3, 2011, 11:14 · Artificial intelligence, Watson, and the last checkmate · By Ricardo Murer · Webinsider
Transderivational search (often abbreviated to TDS) is a term from psychology and cybernetics for a search conducted for a fuzzy match across a broad field. In computing, the equivalent function can be performed using content-addressable memory. Unlike usual searches, which look for literal (i.e. exact, logical, or regular-expression) matches, a transderivational search looks for a possible meaning or possible match as part of communication, without which an incoming communication cannot be made sense of. It is thus an integral part of processing language and of attaching meaning to communication. A psychological example of TDS is Ericksonian hypnotherapy, where deliberately vague suggestions force the patient to search for their own meanings, ensuring that the practitioner does not intrude his own beliefs into the subject's inner world.
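As a loose computing analogy (not the mechanism the TDS literature itself describes), a fuzzy-match lookup over a small "memory" of phrases can be sketched with Python's standard-library difflib; the stored phrases here are illustrative:

```python
from difflib import get_close_matches

# A small "memory" of stored phrases; the query is only an approximate cue.
memory = [
    "content-addressable memory",
    "regular expression",
    "transderivational search",
]

def fuzzy_lookup(cue, store, cutoff=0.5):
    """Return stored items that loosely match the cue, best match first."""
    return get_close_matches(cue, store, n=3, cutoff=cutoff)

# A misspelled cue still retrieves the intended entry.
print(fuzzy_lookup("transderivationl serch", memory))
```

Unlike an exact or regular-expression match, the lookup succeeds even though the cue contains typos, which is the "fuzzy match across a broad field" behaviour described above.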
Autoassociative memory, also known as auto-association memory or an autoassociation network, is often misunderstood to be only a form of backpropagation or other neural networks. It is actually a more generic term that refers to any memory that enables one to retrieve a piece of data from only a small sample of itself. Traditional memory stores data at a unique address and can recall the data upon presentation of the complete unique address. Autoassociative memories are capable of retrieving a piece of data upon presentation of only partial information from that piece of data. Heteroassociative memories, on the other hand, can recall an associated piece of data from one category upon presentation of data from another category.
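The "retrieve from a partial sample" behaviour can be sketched in a few lines: store binary patterns and return the one nearest (in Hamming distance) to a corrupted probe. The patterns are invented for illustration; this is a minimal sketch, not a neural implementation:

```python
import numpy as np

# Stored binary patterns (the "memory"); contents are illustrative.
patterns = np.array([
    [1, 1, 1, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 1],
])

def recall(probe):
    """Return the stored pattern with the smallest Hamming distance to the probe."""
    distances = np.abs(patterns - probe).sum(axis=1)
    return patterns[int(np.argmin(distances))]

# A cue with one flipped bit still retrieves the complete first pattern.
cue = np.array([1, 0, 1, 0, 0, 0, 0, 0])
print(recall(cue))  # -> [1 1 1 0 0 0 0 0]
```

No address is involved: the partial data itself selects the full stored item, which is the defining property of content addressing.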
Hopfield network A Hopfield network is a form of recurrent artificial neural network invented by John Hopfield. Hopfield nets serve as content-addressable memory systems with binary threshold nodes. They are guaranteed to converge to a local minimum, but convergence to a false pattern (a wrong local minimum) rather than the stored pattern (the expected local minimum) can occur.
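A minimal Hopfield sketch, assuming bipolar (+1/-1) states, Hebbian outer-product training, and asynchronous threshold updates (the stored pattern here is invented for illustration):

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product rule; zero diagonal so units have no self-connection."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, steps=10):
    """Asynchronous binary-threshold updates until the state stops changing."""
    state = state.copy()
    for _ in range(steps):
        prev = state.copy()
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
        if (state == prev).all():
            break
    return state

stored = np.array([[1, 1, 1, -1, -1, -1]])
W = train(stored)
noisy = np.array([1, -1, 1, -1, -1, -1])   # one flipped bit
print(recall(W, noisy))                     # -> [ 1  1  1 -1 -1 -1]
```

Each update can only lower (or keep) the network's energy, which is why convergence to some local minimum is guaranteed; with several overlapping stored patterns, that minimum may be a false pattern rather than the intended one.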
Bidirectional associative memory (BAM) is a type of recurrent neural network. BAM was introduced by Bart Kosko in 1988. [ 1 ] There are two types of associative memory, auto-associative and hetero-associative. BAM is hetero-associative, meaning that given a pattern it can return another pattern which is potentially of a different size. It is similar to the Hopfield network in that both are forms of associative memory; Hopfield nets, however, return patterns of the same size.
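The bidirectional, different-size recall can be sketched with a single correlation matrix built from one associated pair; the vectors are illustrative and the zero case keeps the previous value, a common convention:

```python
import numpy as np

def threshold(v, previous):
    """Threshold to +/-1; on an exact zero, keep the previous unit value."""
    return np.where(v > 0, 1, np.where(v < 0, -1, previous))

# One associated pair of different sizes, in bipolar (+/-1) coding.
x = np.array([1, -1, 1, -1])   # input layer, 4 units
y = np.array([1, 1, -1])       # output layer, 3 units
W = np.outer(y, x)             # correlation (outer-product) matrix

# Forward recall x -> y, and backward recall y -> x, through the same W.
print(threshold(W @ x, y))     # -> [ 1  1 -1]
print(threshold(W.T @ y, x))   # -> [ 1 -1  1 -1]
```

The same weight matrix is used in both directions (transposed for the backward pass), which is what makes the memory bidirectional; a Hopfield net, by contrast, has a single layer and returns a pattern of the same size as its input.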
Spacing effect In the field of psychology, the spacing effect is the phenomenon whereby animals (including humans) more easily remember or learn items when they are studied a few times spaced over a long time span ("spaced presentation") rather than repeatedly studied in a short span of time ("massed presentation"). Practically, this effect suggests that "cramming" (intense, last-minute studying) the night before an exam is not likely to be as effective as studying at intervals over a longer time frame. Note, however, that the benefit of spaced presentation does not appear at short retention intervals, where massed presentation tends to lead to better memory performance. The phenomenon was first identified by Hermann Ebbinghaus, whose detailed study of it was published in the 1885 book Über das Gedächtnis. Untersuchungen zur experimentellen Psychologie (Memory: A Contribution to Experimental Psychology).
Interference theory Interference theory is a theory regarding human memory. Interference occurs in learning when there is an interaction between new material and transfer effects of previously learned behavior, memories, or thoughts that negatively influences comprehension of the new material. [ 1 ] Bringing old knowledge to mind impairs both the speed of learning and memory performance. There are three main kinds of interference: proactive interference, retroactive interference, and latent interference. The main assumption of interference theory is that the stored memory is intact but unable to be retrieved due to competition created by newly acquired information. [ 1 ]
Semantic reasoner A semantic reasoner, reasoning engine, rules engine, or simply a reasoner, is a piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining.
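Forward chaining can be sketched as a loop that fires any rule whose premises are already known facts, until no new fact can be derived; the rule base and facts below are invented for illustration:

```python
# Rules are (premises, conclusion) pairs over simple string-labelled facts.
rules = [
    ({"has_fur"}, "mammal"),
    ({"mammal", "aquatic"}, "aquatic_mammal"),
]
facts = {"has_fur", "aquatic"}

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all satisfied by known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

closure = forward_chain(facts, rules)
print(closure)  # the closure now also contains 'mammal' and 'aquatic_mammal'
```

Backward chaining works in the opposite direction, starting from a goal and recursively checking whether the premises of some rule concluding it can themselves be established.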
Intelligent agent In artificial intelligence, an intelligent agent (IA) is an autonomous entity which observes its environment through sensors and acts upon it using actuators (i.e. it is an agent), and directs its activity towards achieving goals (i.e. it is rational). [ 1 ] Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex: a reflex machine such as a thermostat is an intelligent agent, [ 2 ] as is a human being, as is a community of human beings working together towards a goal. Intelligent agents are often described schematically as an abstract functional system similar to a computer program.
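The thermostat mentioned above is the textbook simple reflex agent: it maps the current percept directly to an action through condition-action rules, with no internal state. A minimal sketch (the thresholds are illustrative assumptions):

```python
def thermostat_agent(temperature_c):
    """Simple reflex agent: condition-action rules over the current percept only."""
    if temperature_c < 18:
        return "heat_on"
    if temperature_c > 24:
        return "cool_on"
    return "idle"

# Sensor readings in, actuator commands out; no memory of past percepts.
for reading in (15, 21, 30):
    print(reading, "->", thermostat_agent(reading))
# 15 -> heat_on, 21 -> idle, 30 -> cool_on
```

More capable agents add internal state, goals, or utility functions on top of this percept-to-action mapping, but the abstract functional schema (sensors in, actuators out) is the same.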