
Artificial Intelligence


Artificial intelligence, Watson, and the final checkmate. Webinsider, March 3, 2011, 11:14. The meeting of genetics and bits, the essential fusion. By Ricardo Murer. Artificial Intelligence (AI) is a research area that is at once fascinating and mysterious, so contaminated by science fiction that for many people it is impossible to tell what is true and what springs from the imagination of Hollywood writers and directors. The subject recently returned to the media when IBM's "Watson" computer, a "deep question answering" (DeepQA) machine, competed on the American TV show Jeopardy!

Watson is in fact a cluster of 90 servers, with 16 terabytes of memory and a stated processing capacity of 180,000 gigabytes per second! Fortunately, not yet. If at the beginning of AI research scientists wanted to reproduce the laws of thought and build a machine that mirrored the human being, today the approach is different. Transderivational search. Transderivational search (often abbreviated to TDS) is a psychological and cybernetics term for a search conducted for a fuzzy match across a broad field. In computing, the equivalent function can be performed using content-addressable memory. A psychological example of TDS is Ericksonian hypnotherapy, where vague suggestions are used that the patient must process intensely in order to find their own meanings, thus ensuring that the practitioner does not intrude his own beliefs into the subject's inner world.
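The fuzzy-match retrieval described above, a content-addressable lookup driven by a partial or distorted cue, can be sketched with Python's standard difflib. The stored phrases below are hypothetical, chosen only for illustration:

```python
import difflib

# a hypothetical "memory" of stored phrases
memory = [
    "a day that will live in infamy",
    "to be or not to be",
    "i came, i saw, i conquered",
]

# fuzzy match: retrieve the best stored item for a partial, garbled cue
cue = "i came i saw"
match = difflib.get_close_matches(cue, memory, n=1, cutoff=0.3)
print(match)  # ['i came, i saw, i conquered']
```

Unlike an exact dictionary lookup, the cue here does not need to equal any stored key; the closest stored item above the similarity cutoff is returned.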

TDS in human communication and processing. Because TDS is a compelling, automatic, and unconscious state of internal focus and processing (i.e. a type of everyday trance state), and often a state of internal uncertainty or openness to finding an answer (since something is being checked at that moment), it can be utilized or interrupted in order to create or deepen trance. Examples include leading statements and textual ambiguity. Autoassociative memory.

Autoassociative memory, also known as auto-association memory or an autoassociation network, is often misunderstood to be only a form of backpropagation or other neural networks. It is actually a more generic term that refers to all memories that enable one to retrieve a piece of data from only a tiny sample of itself. Traditional memory stores data at a unique address and can recall the data upon presentation of the complete unique address. Autoassociative memories are capable of retrieving a piece of data upon presentation of only partial information from that piece of data.

Heteroassociative memories, on the other hand, can recall an associated piece of data from one category upon presentation of data from another category. Hopfield networks [1] have been shown [2] to act as autoassociative memory, since they are capable of remembering data by observing a portion of that data. Familiar cues that invite this kind of completion from a fragment include:

"A day that will live in ______"
"To be or not to be"
"I came, I saw, I conquered"

Hopfield net. A Hopfield network is a form of recurrent artificial neural network invented by John Hopfield in 1982. Hopfield nets serve as content-addressable memory systems with binary threshold nodes. They are guaranteed to converge to a local minimum, but convergence to a false pattern (a wrong local minimum) rather than the stored pattern (the expected local minimum) can occur. Hopfield networks also provide a model for understanding human memory. Structure. (Figure: a Hopfield net with four nodes.) The units in Hopfield nets are binary threshold units, i.e. the units take on only two different values for their states, and the value is determined by whether or not the unit's input exceeds its threshold.
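A minimal sketch of this behavior, assuming a single stored bipolar pattern and the standard Hebbian weight rule (the pattern values here are hypothetical):

```python
import numpy as np

# hypothetical stored pattern of 8 bipolar (+1/-1) units
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
n = len(pattern)

# Hebbian weight matrix: symmetric, zero diagonal
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

# start from a corrupted copy of the pattern (two bits flipped)
state = pattern.copy()
state[0] *= -1
state[3] *= -1

# binary-threshold updates, one unit at a time, until the state is stable
for _ in range(10):
    prev = state.copy()
    for i in range(n):
        state[i] = 1 if W[i] @ state >= 0 else -1
    if np.array_equal(state, prev):
        break

print(state.tolist())  # converges back to the stored pattern
```

This is exactly the content-addressable behavior described above: presenting a corrupted portion of the stored data is enough to recover the whole pattern.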

Every pair of units i and j in a Hopfield network has a connection described by a connectivity weight w_ij. In this sense, the network can be described as a set of McCulloch-Pitts neurons together with a function that links pairs of units to a real value, the connectivity weight. The weights typically satisfy two restrictions: w_ii = 0 (no unit has a connection with itself) and w_ij = w_ji (connections are symmetric). Bidirectional associative memory. Topology. A BAM contains two layers of neurons, which we shall denote X and Y. Layers X and Y are fully connected to each other. Once the weights have been established, input into layer X presents the pattern in layer Y, and vice versa. Procedure. Learning. Imagine we wish to store two associations, A1:B1 and A2:B2. These pairs are first transformed into bipolar form (each 0 becomes -1), giving vectors X1, Y1 and X2, Y2. From there, we calculate the weight matrix M = X1^T Y1 + X2^T Y2, where ^T denotes the transpose. Recall. To retrieve the association A1, we multiply it by M to get (4, 2, -2, -4), which, when run through a threshold, yields (1, 1, 0, 0), which is B1.
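The learning and recall steps above can be sketched as follows. The association pairs here are hypothetical (the text's own example vectors are not given, so the intermediate numbers differ from its (4, 2, -2, -4)), but the procedure is the same:

```python
import numpy as np

def bipolar(v):
    # map binary {0,1} vectors to bipolar {-1,+1}
    return 2 * np.asarray(v) - 1

# hypothetical association pairs (not the article's own example)
A1, B1 = [1, 0, 1, 0], [1, 1, 0, 0]
A2, B2 = [1, 1, 0, 0], [1, 0, 1, 0]

# learning: M is the sum of outer products of the bipolar pairs
M = np.outer(bipolar(A1), bipolar(B1)) + np.outer(bipolar(A2), bipolar(B2))

def recall(a):
    raw = bipolar(a) @ M           # multiply the input pattern by M
    return (raw > 0).astype(int)   # threshold back to binary

print(recall(A1).tolist())  # [1, 1, 0, 0] == B1
print(recall(A2).tolist())  # [1, 0, 1, 0] == B2
```

Recall in the other direction (Y back to X) works the same way using the transpose of M, which is what makes the memory bidirectional.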

Spacing effect. Researchers have offered several possible explanations of the spacing effect, and much research has been conducted that supports its impact on recall. In spite of these findings, the robustness of this phenomenon and its resistance to experimental manipulation have made empirical testing of its parameters difficult. Causes of the spacing effect. Decades of research on memory and recall have produced many different theories and findings on the spacing effect. In a study by Cepeda et al. (2006), participants who used spaced practice on memory tasks outperformed those using massed practice in 259 out of 271 cases.

As different studies support different aspects of this effect, some now believe that an appropriate account should be multifactorial, and at present different mechanisms are invoked to account for the spacing effect in free recall and in explicit cued-memory tasks. Little attention has been given to the study of the spacing effect in long-term retention tests. Interference theory. Interference theory is a theory of human memory. Interference occurs in learning when there is an interaction between new material and transfer effects of previously learned behavior, memories, or thoughts that negatively influence comprehension of the new material.[1] Bringing old knowledge to mind impairs both the speed of learning and memory performance.

There are two main kinds of interference: proactive interference (see proactive learning) and retroactive interference (see retroactive learning). The main assumption of interference theory is that the stored memory is intact but unable to be retrieved due to competition created by newly acquired information.[1] History. John A. The next major advancement came from American psychologist Benton J. In 1924, James J. Proactive interference. Proactive interference is the "forgetting [of information] due to interference from the traces of events or learning that occurred prior to the materials to be remembered."

Semantic reasoner. A semantic reasoner, reasoning engine, rules engine, or simply a reasoner, is a piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine, by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining. There are also examples of probabilistic reasoners, including Pei Wang's non-axiomatic reasoning system[citation needed], and Novamente's probabilistic logic network[citation needed].
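The forward-chaining inference mentioned above can be sketched in a few lines: repeatedly fire any rule whose premises are all known facts, until no new fact can be derived. The rule base and fact names below are hypothetical:

```python
# hypothetical rule base: each rule is (set of premises, conclusion)
rules = [
    ({"human(socrates)"}, "mortal(socrates)"),
    ({"mortal(socrates)", "greek(socrates)"}, "famous(socrates)"),
]

def forward_chain(facts, rules):
    """Derive all consequences of the asserted facts by forward chaining."""
    facts = set(facts)
    changed = True
    while changed:  # repeat until a full pass adds no new fact (fixpoint)
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"human(socrates)", "greek(socrates)"}, rules)
print(sorted(derived))
```

Backward chaining works in the opposite direction, starting from a goal and searching for rules whose conclusions could establish it; production-quality reasoners also index rules (e.g. the Rete algorithm) rather than scanning them on every pass.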

List of semantic reasoners. Existing semantic reasoners and related software are commonly grouped into commercial software, free-to-use (closed source) software, and free software (open source). Intelligent agent. (Figure: a simple reflex agent.) Intelligent agents are often described schematically as an abstract functional system similar to a computer program. For this reason, intelligent agents are sometimes called abstract intelligent agents (AIA)[citation needed] to distinguish them from their real-world implementations as computer systems, biological systems, or organizations.

Some definitions of intelligent agents emphasize their autonomy, and so prefer the term autonomous intelligent agents. Still others (notably Russell & Norvig (2003)) consider goal-directed behavior the essence of intelligence, and so prefer a term borrowed from economics, "rational agent". Intelligent agents are also closely related to software agents (autonomous computer programs that carry out tasks on behalf of users). A variety of definitions. Intelligent agents have been defined in many different ways.[3] According to Nikola Kasabov,[4] IA systems should exhibit the following characteristics: Structure of agents.
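The simple reflex agent pictured earlier is the least sophisticated of these designs: it maps the current percept directly to an action through condition-action rules, with no internal state. A minimal sketch, using a hypothetical two-location vacuum world (the locations, statuses, and action names are illustrative, not from the text):

```python
# condition-action rules: (location, status) -> action
RULES = {
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "clean"): "move_left",
}

def reflex_agent(percept):
    """Choose an action from the current percept alone (no internal state)."""
    location, status = percept
    return RULES[(location, status)]

print(reflex_agent(("A", "dirty")))   # suck
print(reflex_agent(("A", "clean")))   # move_right
```

More capable designs add internal state (model-based agents), explicit goals, or utility functions on top of this percept-to-action loop.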