Kwabena Boahen on a computer that works like the brain

http://www.ted.com/talks/kwabena_boahen_on_a_computer_that_works_like_the_brain.html

Related: Artificial Intelligence

Bionics Bionics (also known as bionical creativity engineering) is the application of biological methods and systems found in nature to the study and design of engineering systems and modern technology. The transfer of technology between lifeforms and manufactured systems is, according to proponents of bionic technology, desirable because evolutionary pressure typically forces living organisms, including fauna and flora, to become highly optimized and efficient. A classic example is the development of dirt- and water-repellent paints and coatings from the observation that practically nothing sticks to the surface of the lotus plant (the lotus effect).

Neuron All neurons are electrically excitable, maintaining voltage gradients across their membranes by means of metabolically driven ion pumps, which combine with ion channels embedded in the membrane to generate intracellular-versus-extracellular concentration differences of ions such as sodium, potassium, chloride, and calcium. Changes in the cross-membrane voltage can alter the function of voltage-dependent ion channels. If the voltage changes by a large enough amount, an all-or-none electrochemical pulse called an action potential is generated, which travels rapidly along the cell's axon, and activates synaptic connections with other cells when it arrives.
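The all-or-none dynamics described above are often captured by the leaky integrate-and-fire model; a minimal Python sketch (the parameter values are illustrative, not taken from this excerpt):

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: the membrane voltage integrates
# input current, leaks back toward rest, and fires an all-or-none spike
# when it crosses threshold. Parameter values are illustrative only.
dt = 0.1          # time step (ms)
tau = 10.0        # membrane time constant (ms)
v_rest = -70.0    # resting potential (mV)
v_thresh = -55.0  # spike threshold (mV)
v_reset = -75.0   # post-spike reset (mV)
r_m = 10.0        # membrane resistance (MOhm)

v = v_rest
spikes = []
for step in range(5000):          # 500 ms of simulated time
    i_in = 2.0                    # constant input current (nA), assumed
    # Leak toward rest plus input drive (Euler integration)
    v += dt / tau * (v_rest - v + r_m * i_in)
    if v >= v_thresh:             # all-or-none action potential
        spikes.append(step * dt)
        v = v_reset               # membrane resets after the spike

print(f"{len(spikes)} spikes in 500 ms")
```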

What Part of Our Brain Makes Us Human? Brian Christian's book The Most Human Human, newly out in paperback, tells the story of how the author, "a young poet with degrees in computer science and philosophy," set out to win the "Most Human Human" prize in a Turing test weighing natural against artificial intelligence. Along the way, as he prepares to prove to a panel of judges (via an anonymous teletype interface) that he is not a machine, the book provides a sharply reasoned investigation into the nature of thinking. Are we setting ourselves up for failure by competing with machines in their analytical, logical areas of prowess rather than nurturing our own human strengths?

Hierarchical temporal memory Hierarchical temporal memory (HTM) is an online machine learning model developed by Jeff Hawkins and Dileep George of Numenta, Inc. that models some of the structural and algorithmic properties of the neocortex. HTM is a biomimetic model based on the memory-prediction theory of brain function described by Jeff Hawkins in his book On Intelligence. HTM is a method for discovering and inferring the high-level causes of observed input patterns and sequences, thus building an increasingly complex model of the world. Jeff Hawkins states that HTM does not present any new idea or theory, but combines existing ideas to mimic the neocortex with a simple design that provides a large range of capabilities.
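HTM itself relies on sparse distributed representations and a hierarchy of cortical regions, but the underlying idea of learning to predict the next element of an input sequence can be illustrated with a deliberately simple first-order model. This sketch is not Numenta's algorithm; the training sequence is made up:

```python
from collections import Counter, defaultdict

# Toy first-order sequence predictor: learn transition counts from an
# observed stream, then predict the most likely next symbol. A deliberate
# simplification; real HTM uses sparse distributed representations and a
# hierarchy of regions, not a flat transition table.
transitions = defaultdict(Counter)

def learn(sequence):
    for prev, nxt in zip(sequence, sequence[1:]):
        transitions[prev][nxt] += 1

def predict(symbol):
    if not transitions[symbol]:
        return None
    return transitions[symbol].most_common(1)[0][0]

learn("the quick brown fox the quick brown cat")
print(predict("q"))  # 'u' -- the model expects the familiar continuation
```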

Functional magnetic resonance imaging Functional magnetic resonance imaging or functional MRI (fMRI) is a functional neuroimaging procedure using MRI technology that measures brain activity by detecting associated changes in blood flow.[1] This technique relies on the fact that cerebral blood flow and neuronal activation are coupled. When an area of the brain is in use, blood flow to that region also increases. The primary form of fMRI uses the blood-oxygen-level-dependent (BOLD) contrast,[2] discovered by Seiji Ogawa. The procedure is similar to MRI but uses the change in magnetization between oxygen-rich and oxygen-poor blood as its basic measure.
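The BOLD signal evoked by brief neural activity is conventionally modelled by convolving an event train with a canonical haemodynamic response function. A rough sketch, assuming the standard double-gamma HRF parameters (peak near 5 s, later undershoot), which are not given in the excerpt above:

```python
import numpy as np
from math import gamma

# Sketch of a BOLD time course: convolve a neural event train with a
# canonical double-gamma haemodynamic response function (HRF). The HRF
# parameters below are the conventional defaults, assumed for illustration.
dt = 0.5                      # sampling interval (s)
t = np.arange(0, 30, dt)      # 30 s of HRF support

def gamma_pdf(t, shape, scale):
    return t ** (shape - 1) * np.exp(-t / scale) / (gamma(shape) * scale ** shape)

hrf = gamma_pdf(t, 6, 1) - gamma_pdf(t, 16, 1) / 6   # peak minus undershoot
hrf /= hrf.max()

# Neural events: brief stimuli at 10 s and 40 s within a 60 s run
neural = np.zeros(int(60 / dt))
neural[int(10 / dt)] = 1
neural[int(40 / dt)] = 1

bold = np.convolve(neural, hrf)[: len(neural)]        # predicted BOLD signal
print(f"BOLD peaks ~{t[np.argmax(hrf)]:.1f} s after each event")
```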

Hierarchical Temporal Memory We've completed a functional (and much better) version of our .NET-based Hierarchical Temporal Memory (HTM) engines (great job, Rob). We're also still working on an HTM-based robotic behavioral framework (and our first-quarter goal -- yikes -- we're late). Also, we are NOT using Numenta's recently released run-time and/or code: since we're professional .NET consultants/developers, we decided to author our own implementation from initial prototypes written over the summer of 2006 during an infamous sabbatical -- please don't ask about the "Hammer" stories. I've been feeling that the team has not been in sync in terms of HTM concepts, theory, and implementation, so we decided to spend the last couple of meetings purely focused on discussions concerning HTMs.

Diffusion MRI Diffusion MRI (or dMRI) is a magnetic resonance imaging (MRI) method which came into existence in the mid-1980s.[1][2][3] It allows the mapping of the diffusion process of molecules, mainly water, in biological tissues, in vivo and non-invasively. Molecular diffusion in tissues is not free, but reflects interactions with many obstacles, such as macromolecules, fibers, and membranes. Water molecule diffusion patterns can therefore reveal microscopic details about tissue architecture, either normal or in a diseased state. The first diffusion MRI images of the normal and diseased brain were made public in 1985.[4][5] Since then, diffusion MRI, also referred to as diffusion tensor imaging (DTI), has been extraordinarily successful. Its main clinical application has been in the study and treatment of neurological disorders, especially for the management of patients with acute stroke.
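DTI summarises each voxel's water diffusion as a 3x3 tensor, and a standard scalar derived from it is fractional anisotropy (FA), computed from the tensor's eigenvalues. A minimal sketch; the example tensor is fabricated:

```python
import numpy as np

# Fractional anisotropy (FA) from a diffusion tensor: FA is 0 for
# perfectly isotropic diffusion and approaches 1 when diffusion is
# confined to a single direction (e.g. along a white-matter fibre).
# The example tensor below is fabricated for illustration.
D = np.array([[1.7, 0.0, 0.0],
              [0.0, 0.3, 0.0],
              [0.0, 0.0, 0.2]]) * 1e-3    # units: mm^2/s

evals = np.linalg.eigvalsh(D)
md = evals.mean()                          # mean diffusivity
fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))
print(f"FA = {fa:.2f}")   # high FA suggests strongly oriented tissue
```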

Hugo de Garis Hugo de Garis (born 1947, Sydney, Australia) was a researcher in the sub-field of artificial intelligence (AI) known as evolvable hardware. He became known in the 1990s for his research on the use of genetic algorithms to evolve neural networks using three-dimensional cellular automata inside field programmable gate arrays. He claimed that this approach would enable the creation of what he terms "artificial brains" which would quickly surpass human levels of intelligence.[1] He has more recently been noted for his belief that a major war between the supporters and opponents of intelligent machines, resulting in billions of deaths, is almost inevitable before the end of the 21st century.[2]:234 He suggests AIs may simply eliminate the human race, and humans would be powerless to stop them because of the technological singularity. De Garis originally studied theoretical physics, but he abandoned this field in favour of artificial intelligence.
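The evolutionary loop behind this line of work (mutate a population of candidates, keep the fittest) can be sketched in software. This toy example evolves the weights of a tiny fixed-topology network on XOR; de Garis's systems evolved cellular-automata circuits in FPGAs, which is far more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy genetic algorithm: evolve the 9 weights of a 2-2-1 feedforward
# network to compute XOR. Only an illustration of the evolutionary loop,
# not de Garis's cellular-automata approach.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(w, x):
    w1, b1, w2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
    h = np.tanh(x @ w1 + b1)
    return np.tanh(h @ w2 + b2)

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)   # higher is better

pop = rng.normal(size=(50, 9))                  # 50 random genomes
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]       # keep the 10 fittest
    # Children are mutated copies of the elite
    children = elite[rng.integers(0, 10, size=40)]
    children = children + rng.normal(scale=0.3, size=(40, 9))
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print(np.round(forward(best, X), 2))   # should approach [0, 1, 1, 0]
```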

Brain Atlas - Introduction The central nervous system (CNS) consists of the brain and the spinal cord, immersed in the cerebrospinal fluid (CSF). Weighing about 3 pounds (1.4 kilograms), the brain consists of three main structures: the cerebrum, the cerebellum and the brainstem. Cerebrum - divided into two hemispheres (left and right), each consisting of four lobes (frontal, parietal, occipital and temporal). The outer layer of the brain is known as the cerebral cortex or the ‘grey matter’.

Evolvable hardware Evolvable hardware (EH) is a field concerned with the use of evolutionary algorithms (EA) to create specialized electronics without manual engineering. It brings together reconfigurable hardware, artificial intelligence, fault tolerance and autonomous systems. Evolvable hardware refers to hardware that can change its architecture and behavior dynamically and autonomously by interacting with its environment. Each candidate circuit can either be simulated or physically implemented in a reconfigurable device.

Electroencephalography Electroencephalography (EEG) is the recording of electrical activity along the scalp. EEG measures voltage fluctuations resulting from ionic current flows within the neurons of the brain.[1] In clinical contexts, EEG refers to the recording of the brain's spontaneous electrical activity over a short period of time, usually 20–40 minutes, as recorded from multiple electrodes placed on the scalp. Diagnostic applications generally focus on the spectral content of EEG, that is, the type of neural oscillations that can be observed in EEG signals. EEG is most often used to diagnose epilepsy, which causes obvious abnormalities in EEG readings.[2] It is also used to diagnose sleep disorders, coma, encephalopathies, and brain death.
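Since diagnostic EEG work centres on spectral content, a common first step is estimating power in the classical frequency bands. A sketch on a synthetic signal (a 10 Hz alpha-band oscillation plus noise; the band boundaries are the conventional ones):

```python
import numpy as np

# Estimate EEG band power via the FFT. The signal here is synthetic
# (a 10 Hz "alpha" oscillation plus noise); real recordings would come
# from scalp electrodes sampled at a few hundred Hz.
fs = 250                                    # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)                # 20 s of data
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
psd = np.abs(np.fft.rfft(eeg)) ** 2 / eeg.size

bands = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    print(f"{name:5s} power: {psd[mask].sum():.1f}")
# The alpha band should dominate, matching the 10 Hz component.
```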

Kwabena Boahen argues that the miniaturisation of transistors makes them behave more like synapses and less like ideal electrical components: sometimes they fail to output a current when they should, and sometimes they leak a current when they shouldn't. Chip design needs a paradigm shift: the bottleneck is no longer accurate central processing, but many connections with fuzzy results. Compare this to TED talks about the brain by Henry Markram and others. by kaspervandenberg Dec 22
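One way to see why many fuzzy connections can beat one precise processor: a bit computed by unreliable units becomes reliable when enough of them vote. A Monte Carlo sketch with made-up failure rates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sketch of Boahen's point: each "transistor-as-synapse" is
# unreliable (sometimes silent when it should fire, sometimes leaking
# when it shouldn't), yet a majority vote over many such units recovers
# the intended bit. Failure rates are made up for illustration.
p_drop = 0.2   # fails to output current when it should
p_leak = 0.1   # leaks current when it shouldn't

def noisy_unit(bit):
    if bit == 1:
        return 0 if rng.random() < p_drop else 1
    return 1 if rng.random() < p_leak else 0

def vote(bit, n_units):
    return int(sum(noisy_unit(bit) for _ in range(n_units)) > n_units / 2)

trials = 10_000
for n in (1, 9, 101):
    errors = sum(vote(1, n) != 1 for _ in range(trials))
    print(f"{n:3d} units: error rate {errors / trials:.4f}")
# Error drops from ~20% for a single unit to near zero for 101 units.
```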
