
Computational Neuroscience


A Working Brain Model. An ambitious project to create an accurate computer model of the brain has reached an impressive milestone. Scientists in Switzerland working with IBM researchers have shown that their computer simulation of the neocortical column, arguably the most complex part of a mammal’s brain, appears to behave like its biological counterpart. By demonstrating that their simulation is realistic, the researchers say, these results suggest that an entire mammal brain could be completely modeled within three years, and a human brain within the next decade.

“What we’re doing is reverse-engineering the brain,” says Henry Markram, codirector of the Brain Mind Institute at the Ecole Polytechnique Fédérale de Lausanne, in Switzerland, who led the work, called the Blue Brain project, which began in 2005. (See “IBM: The Computer Brain.”) The model of part of the brain was completed last year, says Markram. “It’s amazing work,” says Thomas Serre, a computational-neuroscience researcher at MIT.

Computational neuroscience. From Wikipedia, the free encyclopedia. Computational neuroscience is a field of neuroscience research that seeks to discover the computational principles underlying brain function and neuronal activity, that is, the generic algorithms that let us understand how our cognitive functions are implemented in the central nervous system. This goal was first articulated by David Marr in a series of foundational papers. The field aims to understand the brain using models drawn from computer science, combining experimentation with theoretical work and numerical simulation[1]. Historically, one of the first models introduced in computational neuroscience was the "integrate-and-fire" model proposed by Louis Lapicque in 1907[2].
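To make the integrate-and-fire idea concrete, here is a minimal leaky integrate-and-fire simulation in Python; the parameter values are illustrative placeholders, not Lapicque's original constants.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, forward-Euler integration.
# Parameter values are illustrative, not Lapicque's original constants.
dt, t_max  = 1e-4, 0.2        # time step and simulation length (s)
tau_m      = 20e-3            # membrane time constant (s)
v_rest     = -70e-3           # resting potential (V)
v_thresh   = -50e-3           # spike threshold (V)
v_reset    = -70e-3           # post-spike reset potential (V)
r_m, i_ext = 10e6, 2.5e-9     # membrane resistance (ohm) and injected current (A)

v = v_rest
spike_times = []
for step in range(int(round(t_max / dt))):
    # dV/dt = (-(V - V_rest) + R_m * I_ext) / tau_m
    v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
    if v >= v_thresh:         # threshold crossed: record a spike and reset the membrane
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {t_max * 1e3:.0f} ms of simulated time")
```

With these numbers the steady-state membrane potential (v_rest + r_m * i_ext = -45 mV) sits above the -50 mV threshold, so the neuron fires repeatedly; dropping i_ext below 2 nA would keep it silent.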

Computational neuroscience is not part of bioinformatics, whose scope covers computing applications in biochemistry, genetics, and phylogenetics.

Stanford Cognitive & Systems Neuroscience Lab - Stanford University School of Medicine.

CCNBook/Sims/Language/Dyslexia - Computational Cognitive Neuroscience Wiki. The project file: dyslex.proj (click and Save As to download, then open in Emergent). IMPORTANT: this project requires at least version 5.3.0, which fixes the unit lesioning functionality. Additional file for pretrained weights (required): dyslex_trained.wts.gz. A toy illustration of what unit lesioning amounts to is sketched below.
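The simulation depends on Emergent's ability to lesion (silence) individual units. Purely as an illustration of the idea, and not Emergent's actual API or data structures, here is a toy NumPy sketch of lesioning units in a small layered network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy three-layer network: orthography -> semantics -> phonology (sizes are arbitrary).
n_orth, n_sem, n_phon = 28, 50, 14
w_orth_sem = rng.normal(scale=0.1, size=(n_orth, n_sem))
w_sem_phon = rng.normal(scale=0.1, size=(n_sem, n_phon))

def read_word(orth_pattern, lesion_mask=None):
    """Propagate an orthographic input to phonology.

    lesion_mask is a boolean vector over semantic units; True means the unit
    is lesioned, i.e. its activation is forced to zero.
    """
    sem = np.tanh(orth_pattern @ w_orth_sem)
    if lesion_mask is not None:
        sem = np.where(lesion_mask, 0.0, sem)   # silence the lesioned units
    return np.tanh(sem @ w_sem_phon)

word = rng.integers(0, 2, size=n_orth).astype(float)   # a made-up orthographic pattern
healthy = read_word(word)
lesion = rng.random(n_sem) < 0.5                        # lesion roughly half the semantic units
damaged = read_word(word, lesion_mask=lesion)
print("mean output change after lesion:", np.abs(healthy - damaged).mean())
```

The real model trains its pathways on word sets and compares reading behavior before and after damage; the sketch only shows the mechanical step of zeroing out units.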

This model simulates normal and disordered (dyslexic) reading performance in terms of a distributed representation of word-level knowledge across Orthography, Semantics, and Phonology. It is based on a model by Plaut and Shallice (1993). Note that this form of dyslexia is acquired (via brain lesions such as stroke), not the more prevalent developmental variety. Because the network takes some time to train (250 epochs), we will just load a pre-trained network to begin with. Normal Reading Performance. For our initial exploration, we will simply observe the behavior of the network as it "reads" the words presented to the orthographic input layer. Reading with Complete Pathway Lesions.

Cognitive model. A cognitive model is an approximation of animal cognitive processes (predominantly human) for the purposes of comprehension and prediction. Cognitive models can be developed within or without a cognitive architecture, though the two are not always easily distinguishable.

History. Cognitive modeling historically developed within cognitive psychology/cognitive science (including human factors), and has received contributions from fields including machine learning and artificial intelligence. There are many types of cognitive models; they range from box-and-arrow diagrams to sets of equations to software programs that interact with the same tools humans use to complete tasks (e.g., computer mouse and keyboard).

Box-and-arrow models. A number of key terms are used to describe the processes involved in the perception, storage, and production of speech. Computational models may be symbolic, subsymbolic, or hybrid; dynamical-systems models, including models of locomotion, form another family.

Cognitive architecture.

Computational-representational understanding of mind. Computational-representational understanding of mind (abbreviated CRUM) is a hypothesis in cognitive science which proposes that thinking is performed by computations operating on representations. This hypothesis assumes that the mind has mental representations analogous to data structures, and computational procedures analogous to algorithms, such that computer programs applying algorithms to data structures can model the mind and its processes. CRUM draws on several theoretical approaches to understanding human cognition, including logic-, rule-, concept-, analogy-, image-, and connection-based systems.

These serve as the representational aspects of CRUM, which are then acted upon to simulate certain aspects of human cognition, such as the use of rule-based systems in neuroeconomics. There is much disagreement about this hypothesis, but Thagard (2005) argues that CRUM has been the most theoretically and experimentally successful approach to mind yet developed.

Cognitive Modeling (course, Tu/Th 14:00-15:15, ECOT 831). Cognitive modeling involves the design of computer simulations and mathematical models of human cognition and perception. The goals of cognitive modeling include:

- understanding mechanisms of information processing in the human brain
- interpreting behavioral, neuropsychological, and neuroscientific data
- suggesting techniques for remediation of cognitive deficits due to brain injury and developmental disorders
- suggesting techniques for facilitating learning in normal cognition
- constructing computer architectures to mimic human-like intelligence

The range of modeling tools in cognitive science is vast, and includes production systems (sequential rule firing), neural networks, Bayesian probabilistic models, and purely mathematical theories. All of these tools share the following virtues:

- Models force you to be explicit about your hypotheses and assumptions.
- Models provide a framework for integrating knowledge from various fields.
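As a deliberately tiny example of one of these tools, here is a sketch of a production system with sequential rule firing in Python; the rules and working-memory facts are invented for illustration and do not come from the course.

```python
# A tiny production system: rules fire one at a time against working memory.
# Each rule: (name, condition facts that must all be present, fact to add when it fires).
rules = [
    ("recognize-word",   {"letters-seen"},                        "word-recognized"),
    ("retrieve-meaning", {"word-recognized"},                     "meaning-retrieved"),
    ("produce-speech",   {"meaning-retrieved", "goal-read-aloud"}, "word-spoken"),
]

working_memory = {"letters-seen", "goal-read-aloud"}

fired = True
while fired:                        # sequential rule firing: at most one rule per cycle
    fired = False
    for name, conditions, action in rules:
        if conditions <= working_memory and action not in working_memory:
            print(f"firing {name!r} -> adds {action!r}")
            working_memory.add(action)
            fired = True
            break                   # restart the match-select-fire cycle

print("final working memory:", sorted(working_memory))
```

Full cognitive architectures such as ACT-R and Soar are built around far richer versions of this match-select-fire cycle, but the basic control structure is the same.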