
Cognitive architecture

ACT-R
Most of ACT-R's basic assumptions are also inspired by progress in cognitive neuroscience, and ACT-R can be seen and described as a way of specifying how the brain itself is organized so that individual processing modules can produce cognition. The theory is implemented as Lisp code, which means that any researcher may download the ACT-R code from the ACT-R website, load it into a Lisp distribution, and gain full access to the theory in the form of the ACT-R interpreter. This also enables researchers to specify models of human cognition in the form of scripts in the ACT-R language. Like a programming language, ACT-R is a framework: for different tasks (e.g., Tower of Hanoi, memory for text or for lists of words, language comprehension, communication, aircraft control), researchers create "models" (i.e., programs) in ACT-R. There are two types of modules: perceptual-motor modules, which handle the interface with the (simulated) world, and memory modules, which hold declarative and procedural knowledge. All modules can only be accessed through their buffers.
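As a rough illustration of the buffer idea (a toy sketch, not the actual ACT-R Lisp implementation; the class and rule names here are hypothetical), a module's internal state can be hidden behind a single buffer that production rules match against:

class Module:
    """A processing module whose internal state is visible only via a buffer."""
    def __init__(self, name):
        self.name = name
        self.buffer = None                 # the single externally visible slot

    def request(self, chunk):
        """Internal processing; only the result is placed in the buffer."""
        self.buffer = chunk

def production_cycle(modules, rules):
    """Fire the first rule whose condition matches the current buffer contents."""
    buffers = {m.name: m.buffer for m in modules}
    for condition, action in rules:
        if condition(buffers):
            action(buffers)
            return True
    return False

# A retrieval module answers a request; a production rule reads its buffer.
retrieval = Module("retrieval")
retrieval.request({"fact": "7 + 3 = 10"})
rules = [(lambda b: b["retrieval"] is not None,
          lambda b: print("retrieved:", b["retrieval"]["fact"]))]
production_cycle([retrieval], rules)       # prints: retrieved: 7 + 3 = 10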

Cognitive model
A cognitive model is an approximation of animal cognitive processes (predominantly human) for the purposes of comprehension and prediction. Cognitive models can be developed within or outside of a cognitive architecture, though the two are not always easily distinguishable. Cognitive modeling historically developed within cognitive psychology and cognitive science (including human factors), and has received contributions from the fields of machine learning and artificial intelligence, among others. There are many types of cognitive models, ranging from box-and-arrow diagrams to sets of equations to software programs that interact with the same tools that humans use to complete tasks (e.g., computer mouse and keyboard). Box-and-arrow models of speech, for example, use a number of key terms to describe the processes involved in the perception, storage, and production of speech. Computational models may be symbolic, subsymbolic, or hybrid, or may be based on dynamical systems.

Natural language processing
Natural language processing (NLP) is a field of computer science, artificial intelligence, and linguistics concerned with the interactions between computers and human (natural) languages. As such, NLP is related to the area of human–computer interaction. Many challenges in NLP involve natural language understanding, that is, enabling computers to derive meaning from human or natural language input; others involve natural language generation. The history of NLP generally starts in the 1950s, although work can be found from earlier periods. The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. Some notably successful NLP systems developed in the 1960s were SHRDLU, a natural language system working in restricted "blocks worlds" with restricted vocabularies, and ELIZA, a simulation of a Rogerian psychotherapist, written by Joseph Weizenbaum between 1964 and 1966. Modern NLP is largely based on machine learning, applied to tasks such as parsing.
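As a toy illustration of parsing (the grammar and sentence below are invented for this sketch; real parsers are learned from data or hand-built at far larger scale), a recursive-descent parser for a tiny context-free grammar might look like this:

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"]],
    "N":   [["dog"], ["cat"]],
    "V":   [["chased"]],
}

def parse(symbol, words, i):
    """Expand `symbol` at position i; return (tree, next_position) or None."""
    if symbol not in GRAMMAR:                  # terminal: must match the word
        return (symbol, i + 1) if i < len(words) and words[i] == symbol else None
    for production in GRAMMAR[symbol]:         # nonterminal: try each rule
        children, j = [], i
        for part in production:
            result = parse(part, words, j)
            if result is None:
                break
            subtree, j = result
            children.append(subtree)
        else:                                  # every part matched
            return (symbol, children), j
    return None

tree, end = parse("S", "the dog chased the cat".split(), 0)
print(tree)   # ('S', [('NP', ...), ('VP', ...)])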

Stochastic process
In probability theory, a stochastic process /stoʊˈkæstɪk/, or sometimes random process, is a collection of random variables; it is often used to represent the evolution of some random value, or system, over time. Stock market fluctuations, for example, have been modeled by stochastic processes. A stochastic process is the probabilistic counterpart of a deterministic process (or deterministic system). Instead of describing a process that can evolve in only one way (as in the case, for example, of solutions of an ordinary differential equation), in a stochastic or random process there is some indeterminacy: even if the initial condition (or starting point) is known, there are several (often infinitely many) directions in which the process may evolve. Formal definition: given a probability space $(\Omega, \mathcal{F}, P)$ and a measurable space $(S, \Sigma)$, an $S$-valued stochastic process is a collection of $S$-valued random variables on $\Omega$, indexed by a totally ordered set $T$ ("time"). That is, a stochastic process $X$ is a collection $\{X_t : t \in T\}$, where each $X_t$ is an $S$-valued random variable on $\Omega$.
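A minimal sketch of this indeterminacy, assuming a symmetric random walk as the stochastic process: two sample paths share the initial condition $X_0 = 0$ yet evolve differently.

import random

def random_walk(steps, seed=None):
    """Return one sample path of a symmetric random walk started at 0."""
    rng = random.Random(seed)
    path, x = [0], 0
    for _ in range(steps):
        x += rng.choice((-1, 1))   # each step is +1 or -1 with probability 1/2
        path.append(x)
    return path

# Same initial condition, different evolutions:
print(random_walk(10, seed=1))
print(random_walk(10, seed=2))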

Probability matching
Probability matching is a suboptimal decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time and negative examples are observed 40% of the time, then an observer using a probability-matching strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances and a class label of "negative" on 40% of instances. The optimal Bayesian decision strategy (to maximize the number of correct predictions; see Duda, Hart & Stork (2001)) in such a case is to always predict "positive" (i.e., predict the majority category in the absence of other information), which has a 60% chance of being correct, rather than matching, which is correct only 52% of the time: where $p$ is the probability of a positive realization, the expected accuracy of matching is $p^2 + (1-p)^2$; here $0.6^2 + 0.4^2 = 0.52$.
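A quick simulation under the 60/40 base rates of the example makes the gap concrete (the variable names are illustrative):

import random

rng = random.Random(0)
p, n = 0.6, 100_000                      # base rate of "positive", trial count
labels = [rng.random() < p for _ in range(n)]

# Matching: predict "positive" with probability p on each trial.
match_correct = sum((rng.random() < p) == y for y in labels)
# Maximizing: always predict the majority class, "positive".
max_correct = sum(labels)

print(f"matching   accuracy = {match_correct / n:.3f}")   # about 0.52
print(f"maximizing accuracy = {max_correct / n:.3f}")     # about 0.60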

Hebbian theory
Hebbian theory is a theory in neuroscience which proposes an explanation for the adaptation of neurons in the brain during the learning process. It describes a basic mechanism for synaptic plasticity, where an increase in synaptic efficacy arises from the presynaptic cell's repeated and persistent stimulation of the postsynaptic cell. Introduced by Donald Hebb in his 1949 book The Organization of Behavior,[1] the theory is also called Hebb's rule, Hebb's postulate, and cell assembly theory. "Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability.… When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." From the point of view of artificial neural networks, Hebb's principle is often summarized by the weight rule $w_{ij} = x_i x_j$, where $w_{ij}$ is the weight of the connection from neuron $j$ to neuron $i$ and $x_i$ is the input for neuron $i$.
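A minimal sketch of this update, assuming a discrete-time version with a learning rate (the rate and the data below are illustrative):

def hebbian_update(w, x, eta=0.1):
    """One Hebbian step: strengthen w[i][j] in proportion to x[i] * x[j]."""
    n = len(x)
    for i in range(n):
        for j in range(n):
            if i != j:                     # no self-connections
                w[i][j] += eta * x[i] * x[j]
    return w

# Units 0 and 1 fire together repeatedly; unit 2 stays silent.
w = [[0.0] * 3 for _ in range(3)]
for _ in range(5):
    w = hebbian_update(w, [1.0, 1.0, 0.0])
print(w[0][1], w[2][0])   # 0.5 and 0.0: co-active units have wired together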

Cell assemblies
[Figure 1: The activity in a cell assembly according to Hebb; figure and legend from (Hebb 1949). It is not clear from Hebb's writing whether each node is a single neuron, a group of neurons, or a small network of neurons.] The concept of the cell assembly was coined by the Canadian neuropsychologist D. O. Hebb. Nowadays the concept is used loosely to describe a group of neurons that perform a given action or represent a given percept or concept in the brain. A motoneuron pool (i.e., all neurons whose axons connect to the same muscle) shares a clear common action, yet one would hesitate to call such a pool a cell assembly. In the examples above, excitatory and inhibitory interactions seem to be treated differently: one gets the impression that we expect to see strong mutual excitatory connections among the members of a cell assembly.
References: Hebb, D. O. (1949). The Organization of Behavior. New York: Wiley.
See also: Neuron, Synfire chain

Stroop effect
The Stroop effect is an effect of psychological interference on reaction time. Naming the font color of a printed word is an easier and quicker task if word meaning and font color are congruent: if two words are both printed in red, the average time to say "red" in response to the written word "green" is greater than the time to say "red" in response to the written word "mouse". In psychology, the Stroop effect is the delay in reaction time between congruent and incongruent stimuli. The effect has been used to create a psychological test (the Stroop test) that is widely used in clinical practice and investigation. A basic task that demonstrates this effect occurs when there is a mismatch between the name of a color (e.g., "blue", "green", or "red") and the color in which it is printed (i.e., the word "red" printed in blue ink instead of red ink). [Figure: example stimuli from Stroop's original experiment.]
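A small sketch of how congruent and incongruent trials might be generated for such a task (the word set and the 50/50 condition split are assumptions for illustration, not Stroop's original design):

import random

COLORS = ["red", "green", "blue", "purple"]

def make_trials(n, seed=0):
    """Return n trials as (word, ink_color, condition) tuples."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        word = rng.choice(COLORS)
        if rng.random() < 0.5:
            ink, cond = word, "congruent"        # word matches ink color
        else:
            ink = rng.choice([c for c in COLORS if c != word])
            cond = "incongruent"                 # word conflicts with ink color
        trials.append((word, ink, cond))
    return trials

for word, ink, cond in make_trials(4):
    print(f'say "{ink}" for the word "{word}" ({cond})')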

Cognitive therapy
Cognitive therapy (CT) is a type of psychotherapy developed by American psychiatrist Aaron T. Beck. CT is one of the therapeutic approaches within the larger group of cognitive behavioral therapies (CBT) and was first expounded by Beck in the 1960s. Cognitive therapy is based on the cognitive model, which states that thoughts, feelings, and behavior are all connected, and that individuals can move toward overcoming difficulties and meeting their goals by identifying and changing unhelpful or inaccurate thinking, problematic behavior, and distressing emotional responses. As an example of how CT might work: having made a mistake at work, a man may believe, "I'm useless and can't do anything right at work." People who work with a cognitive therapist often practice more flexible ways to think and respond, learning to ask themselves whether their thoughts are completely true and whether those thoughts are helping them to meet their goals.
