Artificial intelligence, Watson and the final checkmate. Webinsider, March 3, 2011, 11:14. The meeting of genetics and bits, the essential fusion.
By Ricardo Murer. Artificial Intelligence (AI) is a research field at once fascinating and mysterious, contaminated by science fiction to the point that, for many people, it is impossible to tell what is real and what is the product of the imagination of Hollywood writers and directors. The subject returned to the media recently when IBM's "Watson" computer, a "deep question answering" machine (DeepQA), took part in the American TV show Jeopardy! and beat its two human opponents. Watson is in fact a cluster of 90 servers, with 16 terabytes of memory and a processing capacity of 180,000 gigabytes per second!
Fortunately, not yet. While in the early days of AI research scientists hoped to reproduce the laws of thought and build a machine that mirrored the human being, today the approach is different. Transderivational search. Transderivational search (often abbreviated to TDS) is a psychological and cybernetics term for a search conducted for a fuzzy match across a broad field.
In computing, the equivalent function can be performed using content-addressable memory. A psychological example of TDS is in Ericksonian hypnotherapy, where vague suggestions are used that the patient must process intensely in order to find their own meanings, thus ensuring that the practitioner does not intrude his own beliefs into the subject's inner world. TDS in human communication and processing. Because TDS is a compelling, automatic and unconscious state of internal focus and processing (i.e. a type of everyday trance state), and often a state of internal lack of certainty, or openness to finding an answer (since something is being checked out at that moment), it can be utilized or interrupted in order to create, or deepen, trance. Autoassociative memory. Autoassociative memory, also known as auto-association memory or an autoassociation network, is often misunderstood to be only a form of backpropagation or other neural networks.
It is actually a more generic term that refers to all memories that enable one to retrieve a piece of data from only a tiny sample of itself. Traditional memory stores data at a unique address and can recall the data upon presentation of the complete unique address. Autoassociative memories are capable of retrieving a piece of data upon presentation of only partial information from that piece of data. Heteroassociative memories, on the other hand, can recall an associated piece of data from one category upon presentation of data from another category. Hopfield net. A Hopfield network is a form of recurrent artificial neural network invented by John Hopfield in 1982.
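The contrast just drawn between autoassociative and heteroassociative recall can be illustrated with a toy nearest-pattern memory. This is hypothetical code, not from any specific library; the patterns and category labels are made up for the example.

```python
# Toy illustration of autoassociative vs. heteroassociative recall
# using Hamming distance as the (very crude) similarity measure.

def hamming(a, b):
    """Number of positions where two equal-length patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def autoassociative_recall(stored, probe):
    """Return the stored pattern closest to a partial/noisy probe of itself."""
    return min(stored, key=lambda p: hamming(p, probe))

def heteroassociative_recall(pairs, probe):
    """Return the associated datum from another category for the closest stored input."""
    key = min(pairs, key=lambda p: hamming(p, probe))
    return pairs[key]

patterns = [(1, 1, 1, 0, 0), (0, 0, 1, 1, 1)]
print(autoassociative_recall(patterns, (1, 1, 0, 0, 0)))  # -> (1, 1, 1, 0, 0)

pairs = {(1, 1, 0): "cat", (0, 0, 1): "dog"}  # pattern category -> label category
print(heteroassociative_recall(pairs, (1, 0, 0)))  # -> "cat"
```

The autoassociative version completes a corrupted pattern to itself; the heteroassociative version maps it to data from a different category.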
Hopfield nets serve as content-addressable memory systems with binary threshold nodes. They are guaranteed to converge to a local minimum, but convergence to a false pattern (a wrong local minimum) rather than the stored pattern (the expected local minimum) can occur. Bidirectional associative memory. Topology. A BAM contains two layers of neurons, which we shall denote X and Y.
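The Hopfield storage and recall described above can be sketched as follows. This is a minimal illustration with made-up bipolar patterns and my own function names, assuming the standard Hebbian outer-product rule, not code from any particular library.

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product rule over bipolar (+1/-1) patterns, zero diagonal."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p)
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, state, steps=100):
    """Asynchronous binary-threshold updates until a fixed point (a local minimum)."""
    s = np.asarray(state, dtype=float).copy()
    for _ in range(steps):
        prev = s.copy()
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
        if np.array_equal(s, prev):  # converged (possibly to a spurious minimum)
            break
    return s

stored = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
W = train(stored)
noisy = [1, -1, 1, -1, 1, 1]  # first stored pattern with the last bit flipped
print(recall(W, noisy))       # recovers the first stored pattern
```

The recall step shows the content-addressable behavior: a corrupted probe settles into the nearest stored pattern, though with many or correlated patterns it can settle into a false minimum instead.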
Layers X and Y are fully connected to each other. Once the weights have been established, input into layer X presents the pattern in layer Y, and vice versa. Procedure. Learning. Imagine we wish to store two associations, A1:B1 and A2:B2. These are then transformed into the bipolar forms X1:Y1 and X2:Y2. From there, we calculate the weight matrix M = X1^T Y1 + X2^T Y2, where ^T denotes the transpose. Spacing effect. Researchers have offered several possible explanations of the spacing effect, and much research has been conducted that supports its impact on recall.
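The BAM learning and recall steps described above can be sketched as follows. The two associations A1:B1 and A2:B2 are illustrative assumptions (the original text does not give concrete patterns), and the function names are my own.

```python
import numpy as np

def to_bipolar(binary):
    """Convert a {0,1} pattern to bipolar {-1,+1} form."""
    return np.where(np.asarray(binary) == 1, 1, -1)

# Two associations A1:B1 and A2:B2 (made-up example patterns).
A1, B1 = [1, 0, 1, 0], [1, 1, 0]
A2, B2 = [1, 1, 1, 0], [0, 1, 1]

X1, Y1 = to_bipolar(A1), to_bipolar(B1)
X2, Y2 = to_bipolar(A2), to_bipolar(B2)

# Weight matrix M = X1^T Y1 + X2^T Y2, with X_i and Y_i as row vectors.
M = np.outer(X1, Y1) + np.outer(X2, Y2)

def recall_Y(M, x):
    """Present a bipolar pattern on layer X and threshold to get layer Y."""
    return np.where(x @ M >= 0, 1, -1)

def recall_X(M, y):
    """Present a bipolar pattern on layer Y and threshold to get layer X."""
    return np.where(M @ y >= 0, 1, -1)

print(recall_Y(M, X1))  # recovers Y1
print(recall_X(M, Y1))  # recovers X1
```

Both directions use the same matrix M, which is what makes the memory bidirectional.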
In spite of these findings, the robustness of this phenomenon and its resistance to experimental manipulation have made empirical testing of its parameters difficult. Causes for the spacing effect. Decades of research on memory and recall have produced many different theories and findings on the spacing effect. In a study conducted by Cepeda et al. (2006), participants who used spaced practice on memory tasks outperformed those using massed practice in 259 out of 271 cases.
As different studies support different aspects of this effect, some now believe that an appropriate account should be multifactorial; at present, different mechanisms are invoked to account for the spacing effect in free recall and in explicit cued-memory tasks. Not much attention has been given to the study of the spacing effect in long-term retention tests. Interference theory. Interference theory is a theory regarding human memory.
Semantic reasoner. A semantic reasoner, reasoning engine, rules engine, or simply a reasoner, is a piece of software able to infer logical consequences from a set of asserted facts or axioms.
The notion of a semantic reasoner generalizes that of an inference engine by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining. There are also examples of probabilistic reasoners, including Pei Wang's non-axiomatic reasoning system and Novamente's probabilistic logic network. Intelligent agent. Simple reflex agent. Intelligent agents are often described schematically as an abstract functional system similar to a computer program.
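Forward chaining, mentioned above as a common inference strategy, can be sketched with a tiny loop over propositional rules. The rule format and facts here are my own toy assumptions; real reasoners work over ontology or description logic languages, not plain strings.

```python
# Minimal forward-chaining sketch: repeatedly fire rules of the form
# (premises -> conclusion) until no new facts can be derived.

def forward_chain(facts, rules):
    """Derive the closure of a fact set under simple Horn-style rules."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and set(premises) <= known:
                known.add(conclusion)  # rule fires: assert its consequence
                changed = True
    return known

rules = [
    (["mammal"], "animal"),
    (["has_fur", "gives_milk"], "mammal"),
]
print(forward_chain({"has_fur", "gives_milk"}, rules))
# derives "mammal", then "animal", from the asserted facts
```

Backward chaining would instead start from a goal (e.g. "animal") and work back through the rules to the asserted facts.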
For this reason, intelligent agents are sometimes called abstract intelligent agents (AIA) to distinguish them from their real world implementations as computer systems, biological systems, or organizations. Some definitions of intelligent agents emphasize their autonomy, and so prefer the term autonomous intelligent agents.
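The simple reflex agent named above can be sketched as a pure function from the current percept to an action via condition-action rules, with no internal state. The two-location vacuum-world setting and all names are illustrative assumptions, not from the original text.

```python
# Sketch of a simple reflex agent: the action depends only on the
# current percept (location, dirty-or-not), never on percept history.

def simple_reflex_agent(percept):
    """Map the current percept directly to an action via condition-action rules."""
    location, dirty = percept
    if dirty:
        return "suck"
    return "move_right" if location == "A" else "move_left"

print(simple_reflex_agent(("A", True)))   # dirty square: clean it
print(simple_reflex_agent(("A", False)))  # clean square: move on
```

An abstract intelligent agent in the sense above is exactly such a mapping from percepts to actions; concrete implementations add state, goals, or learning on top of it.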