IBM simulates 530 billion neurons, 100 trillion synapses on supercomputer
A network of neurosynaptic cores derived from long-distance wiring in the monkey brain: neurosynaptic cores are locally clustered into brain-inspired regions, and each core is represented as an individual point along the ring. Arcs are drawn from a source core to a destination core, with edge color defined by the color assigned to the source core. (Credit: IBM) Announced in 2008, DARPA's SyNAPSE program calls for "developing electronic neuromorphic (brain-simulation) machine technology that scales to biological levels," using a cognitive computing architecture with 10^10 (10 billion) neurons and 10^14 (100 trillion) synapses, the latter based on estimates of the number of synapses in the human brain. Simulating 10 billion neurons and 100 trillion synapses on the most powerful supercomputer. Neurosynaptic core (credit: IBM). Two billion neurosynaptic cores.

DARPA SyNAPSE Program. Last updated: Jan 11, 2013. SyNAPSE is a DARPA-funded program to develop electronic neuromorphic machine technology that scales to biological levels. More simply stated, it is an attempt to build a new kind of computer with similar form and function to the mammalian brain. SyNAPSE is a backronym standing for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. The ultimate aim is to build an electronic microprocessor system that matches a mammalian brain in function, size, and power consumption. Latest news: as of January 2013, the program is progressing through Phase 2, the third of five phases. Background: the following text is taken from the Broad Agency Announcement (BAA) published by DARPA in April 2008 (see the original document): Over six decades, modern electronics has evolved through a series of major developments (e.g., transistors, integrated circuits, memories, microprocessors) leading to the programmable electronic machines that are ubiquitous today.

Q-learning. Q-learning is a model-free reinforcement learning technique. Specifically, Q-learning can be used to find an optimal action-selection policy for any given (finite) Markov decision process (MDP). It works by learning an action-value function that ultimately gives the expected utility of taking a given action in a given state and following the optimal policy thereafter. When such an action-value function has been learned, the optimal policy can be constructed by simply selecting the action with the highest value in each state. One of the strengths of Q-learning is that it can compare the expected utility of the available actions without requiring a model of the environment. Additionally, Q-learning can handle problems with stochastic transitions and rewards without requiring any adaptations. Algorithm: the problem model, the MDP, consists of an agent, a set of states S, and a set of actions per state A. By performing an action a in A, the agent can move from state to state, and receives a reward r, where r is the reward observed after performing action a in state s.
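The algorithm sketched above can be illustrated on a toy problem. This is a minimal sketch, not from the excerpt: the three-state chain MDP, the reward scheme, and all parameter values are assumed for illustration; only the Q-update rule itself is the standard one.

```python
import random

# Toy deterministic MDP (hypothetical): states 0, 1, 2; action 0 = stay,
# action 1 = move right; reaching state 2 (the goal) yields reward 1.
N_STATES, N_ACTIONS, GOAL = 3, 2, 2

def step(state, action):
    """Deterministic transition: action 1 moves right, action 0 stays put."""
    next_state = min(state + action, GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

def q_learn(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # action-value table
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy: mostly exploit the current Q, sometimes explore
            if rng.random() < epsilon:
                a = rng.randrange(N_ACTIONS)
            else:
                a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
            s2, r = step(s, a)
            # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learn()
# The learned policy: pick the highest-valued action in each state.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

After training, the greedy policy moves right in states 0 and 1, exactly as the excerpt describes: once the action-value function is learned, the optimal policy falls out of a simple arg-max per state.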

A Non-Mathematical Introduction to Using Neural Networks. The goal of this article is to help you understand what a neural network is and how it is used. Most people, even non-programmers, have heard of neural networks, and there are many science-fiction overtones associated with them. Like many things, sci-fi writers have created a vast, but somewhat inaccurate, public idea of what a neural network is: most laypeople think of neural networks as a sort of artificial brain. In reality, neural networks are one small part of AI; the human brain really should be called a biological neural network (BNN). There are some basic similarities between biological neural networks and artificial neural networks. As noted, neural networks are designed to accomplish one small task, and the task they accomplish very well is pattern recognition. Figure 1: A Typical Neural Network. As you can see, the neural network above accepts a pattern and returns a pattern. Neural Network Structure: neural networks are made of layers of similar neurons.

Collective Intelligence in Neural Networks and Social Networks « 100 Trillion Connections. Context for this post: I'm currently working on a social network application that demonstrates the value of connection strength and context for making networks more useful and intelligent. Connection strength and context are currently implemented only rudimentarily and mushily in social network apps. This post describes some of the underlying theory for why connection strength and context are key to next-generation social network applications. A recent study of how behavioral decisions are made in the brain makes clear how important the strengths of connections are to the intelligence of networks. "Scientists at the University of Rochester, Washington University in St. Louis, and Baylor College of Medicine have unraveled how the brain manages to process the complex, rapidly changing, and often conflicting sensory signals to make sense of our world. The answer lies in a simple computation performed by single nerve cells: a weighted average."
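The weighted-average computation the study describes is simple enough to sketch directly. This is a hypothetical illustration, not the study's actual data: the signal values and reliability weights below are made up.

```python
def weighted_average(signals, reliabilities):
    """Combine conflicting estimates, weighting each by its reliability."""
    total = sum(reliabilities)
    return sum(s * w for s, w in zip(signals, reliabilities)) / total

# Hypothetical example: a visual and a vestibular estimate of heading
# (in degrees), with vision assumed twice as reliable.
estimate = weighted_average([10.0, 16.0], [2.0, 1.0])  # → 12.0
```

The combined estimate lands closer to the more reliable signal, which is the intuition behind connection strength mattering: a stronger connection simply contributes a larger weight to the average.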

Model Suggests Link between Intelligence and Entropy. (Image credit: A. Wissner-Gross/Harvard Univ. & MIT) A pendulum that is free to swing through all angles in a plane can be stabilized in the inverted position by sliding the pivot horizontally, in the same way that you can balance a meter stick on your finger. The smallest disks, subjected to causal entropic forces, tend to work in a synchronized fashion to pull down the largest disk, in what the authors present as a primitive example of social cooperation. The second law of thermodynamics, the one that says entropy can only increase, dictates that a complex system always evolves toward greater disorderliness in the way its internal components arrange themselves. Entropy measures the number of internal arrangements of a system that result in the same outward appearance. Hoping to firm up such notions, Wissner-Gross teamed up with Cameron Freer of the University of Hawaii at Manoa to propose a "causal path entropy." –Don Monroe. Don Monroe is a freelance science writer in Murray Hill, New Jersey.

Introduction to Feed-Forward Artificial Neural Networks. Let's dive into the world of pattern recognition, and in particular the recognition of digits (0, 1, ..., 9). Imagine a program that has to recognize a digit from an image: we show it an image of a handwritten "1", for example, and it should be able to tell us "this is a 1". Suppose the images shown to the program are all 200x300 pixels. More generally, a neural network approximates a function. In the rest of the article, we will use a vector whose components are the n pieces of information describing a given example. Now let's see where the theory of artificial neural networks comes from. How do humans manage to reason, speak, calculate, learn...? One approach adopted in Artificial Intelligence research: first carry out a logical analysis of the tasks involved in human cognition, then attempt to reconstruct them in a program.
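The input representation mentioned above can be made concrete. This is a minimal sketch under the article's assumptions: a 200x300-pixel image is turned into a flat vector of n components, one per pixel, which is what a feed-forward network would take as input; the `flatten` helper is hypothetical.

```python
# Article's stated image format: 200 pixels wide, 300 pixels tall.
WIDTH, HEIGHT, N_CLASSES = 200, 300, 10

def flatten(image):
    """Turn a HEIGHT x WIDTH grid of pixel intensities into one input vector."""
    assert len(image) == HEIGHT and all(len(row) == WIDTH for row in image)
    return [pixel for row in image for pixel in row]

# A blank placeholder image, just to show the shape of the input.
image = [[0.0] * WIDTH for _ in range(HEIGHT)]
x = flatten(image)
# len(x) == 60000: the network approximates a function from these 60000
# components to N_CLASSES scores, one per digit.
```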

Learning and neural networks. Artificial Intelligence: History of AI | Intelligent Agents | Search techniques | Constraint Satisfaction | Knowledge Representation and Reasoning | Logical Inference | Reasoning under Uncertainty | Decision Making | Learning and Neural Networks | Bots. An Overview of Neural Networks. The Perceptron and Backpropagation Neural Network Learning. Single-Layer Perceptrons: a perceptron is a type of feedforward neural network commonly used in Artificial Intelligence for a wide range of classification and prediction problems. We can classify people in this problem using a single-layer perceptron. A perceptron learns by a trial-and-error-like method. To summarize: the neural network starts out kind of dumb, but we can tell how wrong it is, and based on how far off its answers are, we adjust the weights a little to make it more correct the next time. Note: the difference between t and o is that t is what you want the network to produce, while o is what the network actually outputs.
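The trial-and-error adjustment summarized above is the classic perceptron error-correction rule, w_i ← w_i + η·(t − o)·x_i. Here is a minimal sketch; the logical-AND training set and the learning-rate value are assumptions chosen for illustration, not taken from the article.

```python
def predict(w, b, x):
    """Threshold unit: fire (1) if the weighted sum plus bias is positive."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(samples, eta=0.1, epochs=20):
    """Error-correction learning: w_i <- w_i + eta * (t - o) * x_i."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in samples:      # t: desired output, o: actual output
            o = predict(w, b, x)
            for i in range(len(w)):
                w[i] += eta * (t - o) * x[i]
            b += eta * (t - o)    # bias is updated the same way
    return w, b

# Hypothetical task: learn logical AND, which is linearly separable.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
outputs = [predict(w, b, x) for x, _ in data]  # → [0, 0, 0, 1]
```

When o matches t the update is zero, so the weights stop moving once the network answers correctly; when it is wrong, (t − o) pushes each weight in the direction that reduces the error, which is exactly the "adjust the weights a little" behavior the text describes.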

Mathematicians help to unlock brain function. Mathematicians from Queen Mary, University of London will bring researchers one step closer to understanding how the structure of the brain relates to its function, in two recently published studies. Publishing in Physical Review Letters, the researchers from the Complex Networks group at Queen Mary's School of Mathematics describe how different areas in the brain can have an association despite a lack of direct interaction. The team, in collaboration with researchers in Barcelona, Pamplona and Paris, combined two different human brain networks: one that maps all the physical connections among brain areas, known as the backbone network, and another that reports the activity of different regions as blood flow changes, known as the functional network. Lead author Vincenzo Nicosia said: "We don't fully understand how the human brain works." "The research is important as it's the first time that a sharp transition in the growth of a neural network has ever been observed," added Dr Nicosia.

Neuromorphic architectures. Computers are far from the only systems capable of processing information. Mechanical automata were the first: the ancestors of calculators really were automata built from mechanical parts, and they were not programmable. These automata were later replaced by non-programmable analog and digital electronic circuits. The logical next step was the introduction of programming: computing was born. Nowadays, new kinds of information-processing circuits have appeared; among them are electronic circuits that are often reconfigurable but not programmable. In what follows, we will look at: Hardware neural networks. These architectures draw heavily on the human brain and the workings of the nervous system. Hardware simulation of nervous systems. At first, the simulated neurons were simple, and that was enough. Hardware accelerators.

neuralview [OProj - Open Source Software] NeuralView is a graphical interface for FANN, making it possible to graphically design, train, and test artificial neural networks. Later Terminator: We're Nowhere Near Artificial Brains | The Crux. I can feel it in the air, so thick I can taste it. Can you? It's the we're-going-to-build-an-artificial-brain-at-any-moment feeling. It's exuded into the atmosphere from news media plumes ("IBM Aims to Build Artificial Human Brain Within 10 Years") and science-fiction movie fountains, and also from science research itself, including projects like Blue Brain and IBM's SyNAPSE. Today, IBM (NYSE: IBM) researchers unveiled a new generation of experimental computer chips designed to emulate the brain's abilities for perception, action and cognition. Now, I'm as romantic as the next scientist (as evidence, see my earlier post on science monk Carl Sagan), but even I carry around a jug of cold water for cases like this. The Worm in the Pass: in the story about the Spartans at the Battle of Thermopylae, 300 soldiers prevent a million-man army from making their way through a narrow mountain pass. As they say, 300 is a tragedy; 300 billion is a statistic.

NeuroSolutions: What is a Neural Network? What is a Neural Network? A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. The motivation for the development of neural network technology stemmed from the desire to develop an artificial system that could perform "intelligent" tasks similar to those performed by the human brain. Neural networks resemble the human brain in the following two ways: a neural network acquires knowledge through learning, and a neural network's knowledge is stored within inter-neuron connection strengths known as synaptic weights. The true power and advantage of neural networks lies in their ability to represent both linear and non-linear relationships, and in their ability to learn these relationships directly from the data being modeled. The most common neural network model is the multilayer perceptron (MLP). Block diagram of a two-hidden-layer multilayer perceptron (MLP).
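The two-hidden-layer MLP block diagram can be sketched as code. This is a minimal forward-pass sketch only, with assumed layer sizes and random placeholder weights; in practice the weights (the "synaptic weights" mentioned above) would be learned from data, typically by backpropagation.

```python
import math
import random

def layer(inputs, weights, biases):
    """One fully connected layer with a logistic (sigmoid) activation."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(wrow, inputs)) + b)))
            for wrow, b in zip(weights, biases)]

def mlp(x, params):
    """Chain the layers: input -> hidden 1 -> hidden 2 -> output."""
    for weights, biases in params:
        x = layer(x, weights, biases)
    return x

rng = random.Random(0)

def init(n_in, n_out):
    """Random placeholder weights; a real MLP would learn these."""
    return ([[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

# Assumed sizes for illustration: 3 inputs -> 4 hidden -> 4 hidden -> 2 outputs
params = [init(3, 4), init(4, 4), init(4, 2)]
y = mlp([0.5, -0.2, 0.8], params)
# y is a list of 2 sigmoid outputs, each strictly between 0 and 1
```

Because the sigmoid is non-linear, stacking layers like this is what lets the MLP capture the non-linear input/output relationships the paragraph describes; a purely linear stack would collapse to a single linear map.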