Scientists Create Artificial Brain With 2.3 Million Simulated Neurons

Another computer is setting its wits to performing human tasks, but this computer is different. Instead of the tour-de-force processing of Deep Blue or Watson's four terabytes of facts of questionable utility, Spaun attempts to play by the same rules as the human brain to figure things out. Spaun stands for Semantic Pointer Architecture: Unified Network, and its performance was similar to that of a human brain. The important thing here is not how well Spaun performed on the tasks; your average computer could find ways to perform much better than Spaun. One thing Spaun can't do is perform its tasks in real time. Chris Eliasmith, of the University of Waterloo in Ontario, Canada, and lead author of the study, is happy with his cognitive creation. Will AI brains of the future look more like Watson or Spaun? Watch Spaun work through its tasks in the following video.

Spaun, the new human brain simulator, can carry out tasks (w/ video) One of the challenges of understanding the complex behavior of animals is relating that behavior to the complex processes occurring within the brain. So far, neural models have not been able to bridge this gap, but a new software model, Spaun, goes some way toward addressing the problem. The Semantic Pointer Architecture Unified Network (Spaun) is a computer model of the human brain built by Professor Chris Eliasmith and colleagues at the University of Waterloo in Canada. It comprises around two and a half million virtual neurons organized into functional groups, rather like real neurons in regions of the human brain associated with vision, short-term memory, and so on. Spaun is presented with sequences of visual images across eight separate tasks. The tasks are simple, but they capture many features of neuroanatomy and physiology, including the abilities to perceive, recognize, and carry out required behaviors.

How to build a brain | Nengo Introduction The book 'How to Build a Brain' from Oxford University Press came out in May 2013. It exploits the Neural Engineering Framework (NEF) to develop the Semantic Pointer Architecture (SPA) for cognitive modelling, and it uses Nengo to explain and demonstrate many of the central concepts of these frameworks. This section of the website supports the book by providing links to the models, demos, videos, and tutorials mentioned in it. Submenus at left will take you to content related to specific chapters. If you're looking for information specifically on Spaun (covered in chapter 7 of the book), please see our Science paper and our videos of the model in action. Briefly, the semantic pointer hypothesis states: higher-level cognitive functions in biological systems are made possible by semantic pointers.
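
The semantic pointers mentioned above are high-dimensional vectors, and the SPA combines them with a binding operation, circular convolution, borrowed from Plate's holographic reduced representations. The sketch below illustrates that operation in isolation; the dimensionality, the role/filler names, and the helper functions are illustrative choices of mine, not code from Nengo.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256  # illustrative; real SPA models use anywhere from ~64 to 512+

def unit(v):
    return v / np.linalg.norm(v)

def cconv(x, y):
    # circular convolution, computed efficiently via the FFT
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def involution(x):
    # approximate inverse used for unbinding: first element, then reversed tail
    return np.concatenate(([x[0]], x[1:][::-1]))

role = unit(rng.standard_normal(d))    # e.g. a "SUBJECT" pointer
filler = unit(rng.standard_normal(d))  # e.g. a "DOG" pointer
bound = cconv(role, filler)            # a single vector encoding the pair

# Unbinding with the role's approximate inverse recovers a noisy
# copy of the filler, which a clean-up memory would then identify.
recovered = cconv(bound, involution(role))
similarity = float(unit(recovered) @ filler)
```

The recovered vector is only approximately the filler, which is why SPA models pair binding with an associative clean-up step.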

Canadian scientists create a functioning, virtual brain Chris Eliasmith has spent years contemplating how to build a brain. He is about to publish a book of instructions, describing the grey matter's architecture and how the different components interact. So Eliasmith's team built Spaun, which was billed Thursday as "the world's largest simulation of a functioning brain." Spaun can recognize numbers, remember lists and write them down. It even passes some basic aspects of an IQ test, the team reports in the journal Science. Several labs are working on large models of the brain, including the multi-million-dollar Blue Brain Project in Europe, but these can't see, remember or control limbs, says Eliasmith. "Right now very large-scale models of the brain don't do anything," he said in an interview. His Waterloo team took a different approach, using computers to simulate what goes on inside the brain, similar to the way aircraft simulators mimic flight. The clever creation is the first to bridge what Eliasmith calls the "brain-behaviour gap."

A Worm's Mind In A Lego Body Take the connectome of a worm and transplant it as software into a Lego Mindstorms EV3 robot - what happens next? It is a deep and long-standing philosophical question: are we just the sum of our neural networks? Of course, if you work in AI you take the answer mostly for granted, but until someone builds a human brain and switches it on we really don't have a concrete example of the principle in action. (Image credit: KDS444, modified by Nnemo) The nematode worm Caenorhabditis elegans (C. elegans) is tiny and has only 302 neurons, and its connectome, the complete wiring diagram between those neurons, has been fully mapped. The software model is accurate in its connections and makes use of UDP packets to fire neurons, working with the sensors and effectors provided by a simple LEGO robot. The same idea is applied to the 95 motor neurons, which are mapped from the two rows of muscles on the worm's left and right to the left and right motors on the robot. And the result? It is claimed that the robot behaved in ways similar to observed C. elegans behaviour. Watch the video to see it in action. Is it alive?
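
The "connectome as software" idea boils down to a simple loop: each neuron accumulates weighted input from neurons that fired on the previous tick and fires once a threshold is crossed. The sketch below shows that loop under loudly stated assumptions: the wiring, neuron names, weights, and threshold are all invented for illustration and are NOT the real C. elegans connectome, and the real project delivered these updates as UDP packets between processes rather than function calls.

```python
# Hypothetical three-layer wiring; connectome[pre] = list of (post, weight)
# synapses. Real C. elegans wiring has 302 neurons and thousands of synapses.
THRESHOLD = 30

connectome = {
    "SENSOR": [("INTER", 35)],
    "INTER": [("MOTOR_L", 20), ("MOTOR_R", 35)],
}

def step(fired, accumulators):
    # Deliver weight from every neuron that fired last tick, then fire
    # any neuron whose accumulator reaches THRESHOLD (and reset it).
    for pre in fired:
        for post, weight in connectome.get(pre, []):
            accumulators[post] = accumulators.get(post, 0) + weight
    newly_fired = [n for n, v in accumulators.items() if v >= THRESHOLD]
    for n in newly_fired:
        accumulators[n] = 0
    return newly_fired

acc = {}
tick1 = step(["SENSOR"], acc)  # the sensor drives the interneuron over threshold
tick2 = step(tick1, acc)       # which in turn drives the right motor neuron
```

In the robot, firings of motor neurons like the hypothetical MOTOR_R would be summed and translated into drive commands for the corresponding Lego motor.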

Is it time to move past the idea that our brain is like a computer? Is it time to move past the idea that analogies are perfect? No, it's not, because nobody except insufferable pedants treats analogies as either perfect or irredeemably broken. The brain is like a computer in many ways, and there are ways that computers and brains are different.

'Rain Man'-like brains mapped with network analysis The 3D structural connectome of seven adults without a corpus callosum, whose absence is a top genetic cause of autism. The larger, redder circles represent the hubs of the whole-brain network, the regions with the greatest number of connections to other regions. (Credit: Julia P. Owen et al.) Researchers at UC San Francisco and UC Berkeley have mapped the three-dimensional global connections within the brains of seven adults who have genetic malformations that leave them without the corpus callosum, which connects the left and right sides of the brain. These "structural connectome" maps, which combine hospital MRIs with the mathematical tool known as network analysis, reveal new details about the condition known as agenesis of the corpus callosum, one of the top genetic causes of autism. Understanding how brain connectivity varies from person to person may help researchers identify imaging biomarkers for autism, to help diagnose it and manage care for individuals.
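
The network-analysis step behind those hub maps is, at its simplest, a degree count: a hub is a region with connections to many other regions. Here is a minimal sketch of that idea on a toy adjacency matrix; the region names and connections are invented for illustration, not taken from the study's imaging data, and the real analysis uses richer weighted graph metrics than raw degree.

```python
import numpy as np

# Toy structural connectome: 5 regions, symmetric unweighted adjacency
# matrix, A[i, j] = 1 if regions i and j are connected by a tract.
regions = ["precuneus", "insula", "thalamus", "cuneus", "amygdala"]
A = np.array([
    [0, 1, 1, 1, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 0, 0, 0, 0],
])

degree = A.sum(axis=0)                 # number of connections per region
hub = regions[int(np.argmax(degree))]  # the best-connected region
```

In the published maps this is what the "larger, redder circles" encode: regions whose degree (or a weighted analogue of it) is highest across the whole-brain network.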

Nanowire-memristor networks emulate brain functions Image of a nanowire network (credit: CRANN) A Trinity College Dublin chemistry professor has been awarded a €2.5 million ($3.2 million) research grant by the European Research Council (ERC) to continue research into nanowire networks. Professor John Boland, Director of CRANN, a nanoscience institute, and a Professor in the School of Chemistry, said the research could result in computer networks that mimic the functions of the human brain and vastly improve on current computer capabilities such as facial recognition. Nanowires, made of materials such as copper or silicon, are just a few atoms thick and can be readily engineered into networks. Boland has discovered that exposing a random network of nanowires to stimuli like electricity, light and chemicals generates a chemical reaction at the junctions of the nanowires, corresponding to synapses in the brain. The project combines work in nanowires and memristors, which can “remember” a charge.
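
Why a memristor can "remember" a charge is easiest to see in the classic HP linear ion-drift model: the device's resistance depends on an internal state variable that integrates the current driven through it, so past stimulation changes future resistance, much as activity changes a synapse. The sketch below is that textbook model only; the parameter values are illustrative and have nothing to do with Boland's specific nanowire devices.

```python
# HP linear ion-drift memristor model (simplified, illustrative values).
R_ON, R_OFF = 100.0, 16_000.0  # fully-doped / undoped resistances, ohms
K = 1e4                        # lumped ion-mobility term, 1/(A*s) -- illustrative
DT = 1e-4                      # integration timestep, s

def simulate(voltages, x=0.1):
    """Apply a voltage waveform; return the resistance seen at each step.

    x in [0, 1] is the doped fraction of the device; it drifts in
    proportion to the current, which is how the device "remembers"
    the charge that has passed through it.
    """
    history = []
    for v in voltages:
        r = R_ON * x + R_OFF * (1.0 - x)         # memristance
        i = v / r
        x = min(max(x + K * i * DT, 0.0), 1.0)   # state integrates current
        history.append(r)
    return history

rs = simulate([1.0] * 1000)  # a sustained positive bias drives resistance down
```

Run with a negative bias instead, and the state drifts the other way and resistance rises, which is the synapse-like, history-dependent behaviour the article describes at nanowire junctions.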

Frequently Asked Questions NeuroSolutions Frequently Asked Questions (FAQ) Questions about the Technology: What is a neural network? What kind of real-world problems can neural networks solve? What are genetic algorithms? How can genetic algorithms be used to improve neural networks? Questions about the Technology in NeuroSolutions: What are some of the types of neural networks that I can build with NeuroSolutions? Q. What is a neural network? A. A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. A neural network acquires knowledge through learning. The true power and advantage of neural networks lies in their ability to represent both linear and non-linear relationships and to learn these relationships directly from the data being modeled. The most common neural network model is the multilayer perceptron (MLP). Block diagram of a two-hidden-layer multilayer perceptron (MLP). Demonstration of a neural network learning to model the exclusive-or (XOR) data.
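
The XOR demonstration the FAQ mentions is the standard first example of why hidden layers matter: XOR is not linearly separable, so a single-layer network cannot model it, but an MLP can. Below is a minimal sketch of such a demo in plain NumPy, not NeuroSolutions code; the layer sizes, learning rate, and iteration count are illustrative choices.

```python
import numpy as np

# XOR truth table: the target is 1 exactly when the inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1 = rng.standard_normal((2, 4))  # 2 inputs -> 4 hidden sigmoid units
b1 = np.zeros(4)
W2 = rng.standard_normal((4, 1))  # 4 hidden units -> 1 output
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)      # forward pass
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # backward pass: gradients of the squared error through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

The hidden layer is what lets the network carve the input space into the two diagonal regions XOR requires; with no hidden units the same training loop would stall at chance-level error.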

Network of brain cells models smart power grid A network of hundreds or thousands of dissociated mammalian cortical cells (neurons and glia) is cultured on a transparent multi-electrode array. Activity is recorded extracellularly to control the behavior of an artificial animal (the Animat) within a simulated environment, and sensory input to the Animat is translated into patterns of electrical stimuli sent back into the network. (Credit: Thomas B. Demarse et al./Autonomous Robots) A team of neuroscientists and engineers at Clemson University is using neurons grown in a dish to control simulated power grids. The researchers hope that studying how neural networks integrate and respond to complex information will inspire new methods for managing the country's ever-changing power supply and demand. "The brain is one of the most robust computational platforms that exists," says Ganesh Kumar Venayagamoorthy, Ph.D., director of the Real-Time Power and Intelligent Systems Laboratory.

The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI | Wired Enterprise Meanwhile, engineers in Japan are building artificial neural nets to control robots. And together with scientists from the European Union and Israel, neuroscientist Henry Markram is hoping to recreate a human brain inside a supercomputer, using data from thousands of real experiments. The rub is that we still don't completely understand how the brain works, but scientists are pushing forward on this as well. The Chinese are working on what they call the Brainnetome, described as a new atlas of the brain, and in the U.S., the Era of Big Neuroscience is unfolding with ambitious, multidisciplinary projects like President Obama's newly announced (and much criticized) Brain Research Through Advancing Innovative Neurotechnologies Initiative, BRAIN for short. The BRAIN planning committee had its first meeting this past Sunday, with more meetings scheduled for this week. "That's where we're going to start to learn about the tricks that biology uses."

'Chinese Google' Opens Artificial-Intelligence Lab in Silicon Valley | Wired Enterprise Kai Yu, of the Chinese search giant Baidu, discusses “deep learning” inside the company’s new Silicon Valley outpost. Photo: Alex Washburn / Wired It doesn’t look like much. The brick office building sits next to a strip mall in Cupertino, California, about an hour south of San Francisco, and if you walk inside, you’ll find a California state flag and a cardboard cutout of R2-D2 and plenty of Christmas decorations — even though we’re well into April. But there are big plans for this building. In late January, word arrived that the Chinese search giant was setting up a research lab dedicated to “deep learning” — an emerging computer science field that seeks to mimic the human brain with hardware and software — and as it turns out, this lab includes an operation here in Silicon Valley, not far from Apple headquarters, in addition to a facility back in China. Baidu calls its lab The Institute of Deep Learning, or IDL. In the ’90s and onto the 2000s, deep learning research was at a low ebb.

Google Hires Brains that Helped Supercharge Machine Learning | Wired Enterprise Geoffrey Hinton (right), Alex Krizhevsky, and Ilya Sutskever (left) will do machine learning work at Google. Photo: U of T Google has hired the man who showed how to make computers learn much like the human brain. His name is Geoffrey Hinton, and on Tuesday, Google said that it had hired him along with two of his University of Toronto graduate students, Alex Krizhevsky and Ilya Sutskever. Their job: to help Google make sense of the growing mountains of data it is indexing and to improve products that already use machine learning, such as Android voice search. Google paid an undisclosed sum to buy Hinton's company, DNNresearch. Back in the 1980s, Hinton kicked off research into neural networks, a field of machine learning in which programmers build models that sift through vast quantities of data and put together patterns, much like the human brain.

Using large-scale brain simulations for machine learning and A.I. You probably use machine learning technology dozens of times a day without knowing it—it’s a way of training computers on real-world data, and it enables high-quality speech recognition, practical computer vision, email spam blocking and even self-driving cars. But it’s far from perfect—you’ve probably chuckled at poorly transcribed text, a bad translation or a misidentified image. We believe machine learning could be far more accurate, and that smarter computers could make everyday tasks much easier. So our research team has been working on some new approaches to large-scale machine learning. Today’s machine learning technology takes significant work to adapt to new uses. Fortunately, recent research on self-taught learning (PDF) and deep learning suggests we might be able to rely instead on unlabeled data—such as random images fetched off the web or out of YouTube videos. We’re reporting on these experiments, led by Quoc Le, at ICML this week.
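
The core of the self-taught-learning idea described above is that a network can learn useful features from unlabeled data by being asked only to reconstruct its own input through a narrow bottleneck, i.e. an autoencoder. The sketch below shows that mechanism at toy scale; the data is random noise and all sizes are illustrative, whereas the actual research trained vastly larger deep networks on frames pulled from YouTube videos.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))  # 200 unlabeled examples; no labels used anywhere

# Encoder compresses 16 inputs to 4 features; decoder reconstructs the input.
W_enc = 0.1 * rng.standard_normal((16, 4))
W_dec = 0.1 * rng.standard_normal((4, 16))
lr = 0.01

errors = []
for _ in range(300):
    H = np.tanh(X @ W_enc)   # compressed feature representation
    X_hat = H @ W_dec        # reconstruction of the input
    err = X_hat - X
    errors.append(float(np.mean(err ** 2)))
    # gradient descent on the mean squared reconstruction error
    g_dec = H.T @ err / len(X)
    g_enc = X.T @ ((err @ W_dec.T) * (1 - H ** 2)) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
```

Because the training signal is the input itself, no human labeling is needed, which is exactly what makes "random images fetched off the web" usable as training data.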

The man behind the Google brain: Andrew Ng and the quest for the new AI Artificial intelligence (credit: Alejandro Zorrilal Cruz/Wikimedia Commons) There’s a theory that human intelligence stems from a single algorithm. The idea arises from experiments suggesting that the portion of your brain dedicated to processing sound from your ears could also handle sight for your eyes. This is possible only while your brain is in the earliest stages of development, but it implies that the brain is — at its core — a general-purpose machine that can be tuned to specific tasks. About seven years ago, Stanford computer science professor Andrew Ng stumbled across this theory, and it changed the course of his career, reigniting a passion for artificial intelligence, or AI, Wired reports. “For the first time in my life,” Ng says, “it made me feel like it might be possible to make some progress on a small part of the AI dream within our lifetime.” [...]