Cognitive

Cognitive therapy. Cognitive therapy (CT) is a type of psychotherapy developed by American psychiatrist Aaron T. Beck. CT is one of the therapeutic approaches within the larger group of cognitive behavioral therapies (CBT) and was first expounded by Beck in the 1960s.

Cognitive therapy is based on the cognitive model, which states that thoughts, feelings, and behavior are all connected, and that individuals can move toward overcoming difficulties and meeting their goals by identifying and changing unhelpful or inaccurate thinking, problematic behavior, and distressing emotional responses. As an example of how CT might work: having made a mistake at work, a man may believe, "I'm useless and can't do anything right at work." People working with a cognitive therapist often practice more flexible ways of thinking and responding, learning to ask themselves whether their thoughts are completely true and whether those thoughts are helping them meet their goals.

Cognitive architecture.

ACT-R. Most of ACT-R's basic assumptions are also inspired by the progress of cognitive neuroscience, and ACT-R can be seen as a way of specifying how the brain itself is organized so that individual processing modules are able to produce cognition.

What ACT-R looks like: ACT-R's code is distributed openly, which means that any researcher may download it from the ACT-R website, load it into a Lisp distribution, and gain full access to the theory in the form of the ACT-R interpreter. This also enables researchers to specify models of human cognition in the form of scripts in the ACT-R language, whose primitives and data types are designed to reflect the theoretical assumptions about human cognition.
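ACT-R models themselves are Lisp scripts, and the sketch below is not ACT-R's actual API; it is only a toy Python illustration of the production-rule (condition-action) style in which such models are written, with invented buffer and rule formats.

```python
# A minimal production-rule loop in the spirit of ACT-R's
# condition-action cycle. This is NOT ACT-R's real API (ACT-R models
# are Lisp scripts); the dict-based "buffers" and rule format here
# are invented purely for illustration.

def run(buffers, productions, max_cycles=20):
    """Fire the first production whose condition matches the current
    buffer contents, apply its action, and repeat."""
    for _ in range(max_cycles):
        for condition, action in productions:
            if condition(buffers):
                action(buffers)
                break
        else:
            return buffers  # no production matched: the model halts
    return buffers

# Toy model: count from 'current' up to 'target'.
productions = [
    (lambda b: b["current"] < b["target"],
     lambda b: b.update(current=b["current"] + 1)),
]
print(run({"current": 2, "target": 5}, productions))  # {'current': 5, 'target': 5}
```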

These theoretical assumptions are based on numerous facts derived from experiments in cognitive psychology and brain imaging. In recent years, ACT-R has also been extended to make quantitative predictions of patterns of activation in the brain, as detected in experiments with fMRI.

Cognitive model. A cognitive model is an approximation of animal cognitive processes (predominantly human ones) for the purposes of comprehension and prediction. Cognitive models can be developed within or outside of a cognitive architecture, though the two are not always easily distinguishable. History: cognitive modeling historically developed within cognitive psychology and cognitive science (including human factors), and has received contributions from fields such as machine learning and artificial intelligence. There are many types of cognitive models; they range from box-and-arrow diagrams to sets of equations to software programs that interact with the same tools that humans use to complete tasks (e.g., computer mouse and keyboard).
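As a minimal example of an equation-style cognitive model, consider the power law of practice, under which reaction time falls off as a power function of the number of practice trials; the parameter values below are made up for illustration, not fitted to any data set.

```python
# Equation-style cognitive model: the power law of practice,
# RT(N) = a + b * N**(-c). The parameter values are invented
# for illustration, not fitted to data.

def reaction_time(n_trials, a=0.4, b=1.2, c=0.5):
    """Predicted reaction time (seconds) after n_trials practice trials."""
    return a + b * n_trials ** (-c)

for n in (1, 10, 100):
    print(n, round(reaction_time(n), 3))
# Practice speeds responding: 1 -> 1.6 s, 10 -> ~0.78 s, 100 -> 0.52 s
```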

Box-and-arrow models: a number of key terms are used to describe the processes involved in the perception, storage, and production of speech.

Natural language processing. Natural language processing (NLP) is a field of computer science, artificial intelligence, and linguistics concerned with the interactions between computers and human (natural) languages. As such, NLP is related to the area of human–computer interaction. Many challenges in NLP involve natural language understanding, that is, enabling computers to derive meaning from human or natural language input; others involve natural language generation.
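For a taste of the hand-written-rule style that, as the history below notes, dominated NLP into the 1980s, here is a deliberately naive sketch; the three suffix rules are invented for illustration and overgeneralize badly.

```python
# Flavor of hand-written-rule NLP: a few hard-coded suffix rules for
# English plurals. Real rule systems had thousands of such rules;
# these three are illustrative only.

def singularize(word):
    if word.endswith("ies"):
        return word[:-3] + "y"   # "parties" -> "party"
    if word.endswith("es"):
        return word[:-2]         # "boxes" -> "box"
    if word.endswith("s"):
        return word[:-1]         # "cats" -> "cat"
    return word

print([singularize(w) for w in ["cats", "boxes", "parties", "bus"]])
# ['cat', 'box', 'party', 'bu']  <- the last one shows why rule sets grew complex
```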

History: the history of NLP generally starts in the 1950s, although work can be found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence. The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. Up to the 1980s, most NLP systems were based on complex sets of hand-written rules; since then, the field has increasingly relied on machine learning. Major tasks in NLP include parsing.

Stochastic process. Stock market fluctuations have been modeled by stochastic processes. In probability theory, a stochastic process /stoʊˈkæstɪk/, or sometimes random process, is a collection of random variables, often used to represent the evolution of some random value, or system, over time. It is the probabilistic counterpart to a deterministic process (or deterministic system).

Instead of describing a process which can only evolve in one way (as in the case, for example, of solutions of an ordinary differential equation), in a stochastic or random process there is some indeterminacy: even if the initial condition (or starting point) is known, there are several (often infinitely many) directions in which the process may evolve. Formal definition: Given a probability space $(\Omega, \mathcal{F}, P)$ and a measurable space $(S, \Sigma)$, an S-valued stochastic process is a collection of S-valued random variables on $\Omega$ indexed by a totally ordered set $T$ ("time"); that is, a collection $\{ X_t : t \in T \}$, where each $X_t$ is an S-valued random variable on $\Omega$.
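As a concrete illustration of this indeterminacy, here is a minimal sketch of a symmetric random walk, one of the simplest discrete-time stochastic processes: the initial condition is fixed, yet each run produces a different path.

```python
import random

# A symmetric random walk: the same initial condition every run,
# yet the path differs from run to run, which is exactly the
# indeterminacy described above.

def random_walk(n_steps, start=0):
    x, path = start, [start]
    for _ in range(n_steps):
        x += random.choice((-1, +1))  # each X_t is a random variable
        path.append(x)
    return path

print(random_walk(10))  # e.g. [0, 1, 0, -1, 0, 1, 2, 1, 2, 3, 2]
print(random_walk(10))  # a different path, same starting point
```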

Probability matching. Probability matching is a suboptimal decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time and negative examples 40% of the time, then an observer using a probability-matching strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances and a class label of "negative" on 40% of instances. The optimal Bayesian decision strategy (to maximize the number of correct predictions; see Duda, Hart & Stork (2001)) in such a case is to always predict "positive" (i.e., predict the majority category in the absence of other information), which is correct 60% of the time, whereas matching is correct only 52% of the time: where $p$ is the probability of a positive realization, the matcher predicts "positive" with probability $p$ and is right with probability $p$, and likewise for "negative", so the expected accuracy of matching is $p^2 + (1-p)^2$; here $0.6^2 + 0.4^2 = 0.52$.
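A quick Monte Carlo sketch of this comparison, using the same 60/40 base rate as the example above; the estimated accuracies should land near 0.60 for the majority strategy and 0.52 for matching.

```python
import random

# Monte Carlo check of the accuracies derived above.

def accuracy(strategy, p=0.6, trials=100_000):
    hits = 0
    for _ in range(trials):
        label = "pos" if random.random() < p else "neg"
        hits += strategy(p) == label
    return hits / trials

def always_majority(p):
    return "pos"                  # always predict the majority class

def matching(p):
    # predict "pos" with probability p, mirroring the base rate
    return "pos" if random.random() < p else "neg"

print(accuracy(always_majority))  # ~0.60
print(accuracy(matching))         # ~0.52
```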

fLIF Neurons.

Hebbian theory. Hebbian theory is a theory in neuroscience that proposes an explanation for the adaptation of neurons in the brain during the learning process. It describes a basic mechanism for synaptic plasticity, in which an increase in synaptic efficacy arises from the presynaptic cell's repeated and persistent stimulation of the postsynaptic cell. Introduced by Donald Hebb in his 1949 book The Organization of Behavior,[1] the theory is also called Hebb's rule, Hebb's postulate, and cell assembly theory. Hebb states it as follows: "Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability. … When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."[1]

Hebbian engrams and cell assembly theory. A common formulaic description of Hebbian learning is $w_{ij} = x_i x_j$, where $w_{ij}$ is the weight of the connection from neuron $j$ to neuron $i$, and $x_i$ and $x_j$ are the activities of the two neurons.

Cell assemblies. Figure 1: The activity in a cell assembly according to Hebb (figure and legend copied from Hebb 1949). It is not clear from Hebb's writing whether each node is a single neuron, a group of neurons, or a small network of neurons. The concept of the cell assembly was coined by the Canadian neuropsychologist D. O. Hebb (Hebb 1949) to describe a network of neurons that is activated repeatedly during a certain mental process, such that the excitatory synaptic connections among its members are strengthened. In Hebb's thinking the synaptic strengthening depends on the order of activation, so there is a time structure to the activation of a cell assembly; the activity of a cell assembly is thus characterized by the spatiotemporal structure of the activity of its members. Present-day evolution of the concept: nowadays the concept of cell assembly is used loosely to describe a group of neurons that performs a given action or represents a given percept or concept in the brain.
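Returning to Hebb's rule stated above, here is a minimal sketch of an incremental form, $\Delta w_{ij} = \eta\, x_i x_j$: repeated co-activation strengthens the connection, which is the synaptic strengthening that binds a cell assembly together. The learning rate and activity values are made up for the example.

```python
# Minimal sketch of an incremental Hebbian update,
# delta_w = eta * x_pre * x_post (learning rate eta is made up).
# Correlated pre/post activity strengthens the synapse; the pure
# rule never weakens it, which is why Hebbian learning is usually
# paired with extra constraints (e.g., normalization) in practice.

def hebb_update(w, x_pre, x_post, eta=0.1):
    """Return the weight after one pairing of pre-/postsynaptic activity."""
    return w + eta * x_pre * x_post

w = 0.0
for _ in range(5):          # five repeated co-activations...
    w = hebb_update(w, x_pre=1.0, x_post=1.0)
print(round(w, 2))          # ...steadily grow the weight: 0.5
```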

Compensatory Learning Rule.

Stroop effect. Figure: effect of psychological interference on reaction time (color words and neutral words such as "mouse" and "monkey" printed in congruent or incongruent font colors). Naming the font color of a printed word is an easier and quicker task if word meaning and font color are congruent. If two words are both printed in red, the average time to say "red" in response to the written word "green" is greater than the time to say "red" in response to the written word "mouse". In psychology, the Stroop effect is the delay in reaction time between congruent and incongruent stimuli. The effect has been used to create a psychological test (the Stroop test) that is widely used in clinical practice and research. A basic task that demonstrates this effect involves a mismatch between the name of a color (e.g., "blue", "green", or "red") and the ink color in which it is printed (i.e., the word "red" printed in blue ink instead of red ink).
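As a sketch of how such a task can be set up, the following generates congruent and incongruent word/ink trials; the color list and trial format are illustrative choices, and stimulus presentation and response timing are omitted.

```python
import random

# Sketch of a Stroop trial generator: word/ink pairs that are either
# congruent (the word names its own ink color) or incongruent. The
# delay in naming the ink color on incongruent trials is the Stroop
# effect; this only builds the trial list.

COLORS = ["red", "green", "blue", "purple"]

def make_trial(congruent):
    word = random.choice(COLORS)
    ink = word if congruent else random.choice([c for c in COLORS if c != word])
    return {"word": word, "ink": ink, "congruent": congruent}

trials = [make_trial(congruent=random.random() < 0.5) for _ in range(6)]
for t in trials:
    print(t)  # the participant must name t["ink"], ignoring t["word"]
```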

Original experiment. Figure: examples of Stroop's stimuli (Stimulus 1: the color words Purple, Brown, Red, Blue, Green; Stimulus 2: color words such as Brown, Green, Blue printed in incongruent font colors).