
Using large-scale brain simulations for machine learning and A.I.
You probably use machine learning technology dozens of times a day without knowing it—it’s a way of training computers on real-world data, and it enables high-quality speech recognition, practical computer vision, email spam blocking and even self-driving cars. But it’s far from perfect—you’ve probably chuckled at poorly transcribed text, a bad translation or a misidentified image. We believe machine learning could be far more accurate, and that smarter computers could make everyday tasks much easier. So our research team has been working on some new approaches to large-scale machine learning. Today’s machine learning technology takes significant work to adapt to new uses, largely because it depends on large amounts of labeled training data. Fortunately, recent research on self-taught learning (PDF) and deep learning suggests we might be able to rely instead on unlabeled data—such as random images fetched off the web or out of YouTube videos. We’re reporting on these experiments, led by Quoc Le, at ICML this week.
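The approach the post alludes to can be sketched compactly: train a network to reconstruct its own input, and its hidden layer learns useful features from unlabeled data alone. Below is a minimal NumPy autoencoder in that spirit (an illustrative toy on random vectors, not Google's actual system, which trained far larger networks on frames from YouTube videos):

```python
import numpy as np

# Toy autoencoder: learn features from unlabeled vectors by training
# the network to reconstruct its own input (no labels involved).
rng = np.random.default_rng(0)
X = rng.random((500, 64))            # stand-in for unlabeled image patches

n_hidden = 16
W_enc = rng.normal(0, 0.1, (64, n_hidden))
W_dec = rng.normal(0, 0.1, (n_hidden, 64))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for epoch in range(200):
    H = sigmoid(X @ W_enc)           # encode: hidden feature activations
    X_hat = H @ W_dec                # decode: reconstruction of the input
    err = X_hat - X                  # reconstruction error
    # gradients of the mean squared reconstruction error
    g_dec = H.T @ err / len(X)
    g_enc = X.T @ ((err @ W_dec.T) * H * (1 - H)) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

print("reconstruction MSE:", float((err ** 2).mean()))
# The columns of W_enc are the learned features; in self-taught learning
# they would now be reused as inputs to a supervised classifier.
```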

Google Hires Brains that Helped Supercharge Machine Learning | Wired Enterprise Geoffrey Hinton, Alex Krizhevsky and Ilya Sutskever will do machine learning work at Google. Photo: U of T Google has hired the man who showed how to make computers learn much like the human brain. His name is Geoffrey Hinton, and on Tuesday, Google said that it had hired him along with two of his University of Toronto graduate students — Alex Krizhevsky and Ilya Sutskever. Google paid an undisclosed sum to buy Hinton’s company, DNNresearch. Back in the 1980s, Hinton kicked off research into neural networks, machine learning models that sift through vast quantities of data and piece together patterns, much like the human brain. “Deep learning, pioneered by Hinton, has revolutionized language understanding and language translation,” said Ed Lazowska, a computer science professor at the University of Washington. You can watch Rick Rashid’s demo of the technology online.

'Chinese Google' Opens Artificial-Intelligence Lab in Silicon Valley | Wired Enterprise Kai Yu, of the Chinese search giant Baidu, discusses “deep learning” inside the company’s new Silicon Valley outpost. Photo: Alex Washburn / Wired It doesn’t look like much. The brick office building sits next to a strip mall in Cupertino, California, about an hour south of San Francisco, and if you walk inside, you’ll find a California state flag, a cardboard cutout of R2-D2 and plenty of Christmas decorations, even though we’re well into April. But there are big plans for this building. It’s where Baidu — “the Google of China” — hopes to create the future. In late January, word arrived that the Chinese search giant was setting up a research lab dedicated to “deep learning,” an emerging computer science field that seeks to mimic the human brain with hardware and software. As it turns out, this lab includes an operation here in Silicon Valley, not far from Apple headquarters, in addition to a facility back in China. Baidu calls its lab The Institute of Deep Learning, or IDL.

The man behind the Google brain: Andrew Ng and the quest for the new AI Artificial intelligence (credit: Alejandro Zorrilal Cruz/Wikimedia Commons) There’s a theory that human intelligence stems from a single algorithm. The idea arises from experiments suggesting that the portion of your brain dedicated to processing sound from your ears could also handle sight for your eyes. This is possible only while your brain is in the earliest stages of development, but it implies that the brain is — at its core — a general-purpose machine that can be tuned to specific tasks. About seven years ago, Stanford computer science professor Andrew Ng stumbled across this theory, and it changed the course of his career, reigniting a passion for artificial intelligence, or AI, Wired reports. “For the first time in my life,” Ng says, “it made me feel like it might be possible to make some progress on a small part of the AI dream within our lifetime.” [...]

The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI | Wired Enterprise Meanwhile, engineers in Japan are building artificial neural nets to control robots. And together with scientists from the European Union and Israel, neuroscientist Henry Markram is hoping to recreate a human brain inside a supercomputer, using data from thousands of real experiments. The rub is that we still don't completely understand how the brain works, but scientists are pushing forward on that front as well. The Chinese are working on what they call the Brainnetome, described as a new atlas of the brain, and in the U.S., the Era of Big Neuroscience is unfolding with ambitious, multidisciplinary projects like President Obama’s newly announced (and much criticized) Brain Research Through Advancing Innovative Neurotechnologies Initiative — BRAIN for short. The BRAIN planning committee had its first meeting this past Sunday, with more meetings scheduled for this week. "That’s where we’re going to start to learn about the tricks that biology uses."

Network of brain cells models smart power grid A network of hundreds or thousands of dissociated mammalian cortical cells (neurons and glia) is cultured on a transparent multi-electrode array. Activity is recorded extracellularly to control the behavior of an artificial animal (the Animat) within a simulated environment, and sensory input to the Animat is translated into patterns of electrical stimuli sent back into the network. A team of neuroscientists and engineers at Clemson University is using neurons grown in a dish to control simulated power grids. The researchers hope that studying how neural networks integrate and respond to complex information will inspire new methods for managing the country’s ever-changing power supply and demand. “The brain is one of the most robust computational platforms that exists,” says Ganesh Kumar Venayagamoorthy, Ph.D., director of the Real-Time Power and Intelligent Systems Laboratory. Unfortunately, the grid’s aging infrastructure wasn’t built to handle today’s ever-increasing demand.
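The closed loop described above (record activity from the culture, decode it into a control signal, step the simulated environment, encode the outcome as stimulation) can be sketched as follows. This is a hypothetical sketch: record_spikes, apply_stimulus, decode, encode and GridSimulation are invented placeholders, since the excerpt does not describe the lab's actual recording or simulation software.

```python
import numpy as np

class GridSimulation:
    """Toy stand-in for the simulated power-grid environment."""
    def __init__(self):
        self.load_error = 1.0                          # mismatch to correct

    def step(self, control):
        self.load_error += 0.1 * np.random.randn() - 0.2 * control
        return self.load_error                         # sensory feedback

def record_spikes():
    """Placeholder: per-electrode firing rates from the cultured network."""
    return np.random.poisson(5.0, size=60)             # 60-electrode array

def apply_stimulus(pattern):
    """Placeholder: deliver an electrical stimulus pattern to the culture."""
    pass

def decode(rates):
    """Map recorded activity to a control signal for the Animat."""
    return (rates.mean() - 5.0) / 5.0

def encode(feedback):
    """Translate sensory feedback into a stimulation pattern."""
    return np.full(60, np.clip(feedback, -1.0, 1.0))

env = GridSimulation()
for _ in range(100):                  # closed loop: culture <-> environment
    control = decode(record_spikes())
    feedback = env.step(control)
    apply_stimulus(encode(feedback))
```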

NeuroSolutions Frequently Asked Questions (FAQ) Q. What is a neural network? A. A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. A neural network acquires knowledge through learning. The true power and advantage of neural networks lies in their ability to represent both linear and non-linear relationships and to learn these relationships directly from the data being modeled. The most common neural network model is the multilayer perceptron (MLP); the FAQ shows a block diagram of a two-hidden-layer MLP. The MLP and many other neural networks learn using an algorithm called backpropagation, demonstrated with a network learning to model the exclusive-or (XOR) data.
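Since the FAQ's own demonstration is an MLP learning XOR by backpropagation, here is a minimal NumPy sketch of that exact exercise (illustrative only, not NeuroSolutions code):

```python
import numpy as np

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0, 1, (2, 4))        # input -> hidden weights (4 hidden units)
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))        # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)                 # hidden activations
    out = sigmoid(h @ W2 + b2)               # network output

    # backward pass: gradients of the mean squared error
    err = out - y
    d_out = err * out * (1 - out)            # delta at output layer
    d_h = (d_out @ W2.T) * h * (1 - h)       # delta at hidden layer

    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print(np.round(out, 2))   # approaches [[0], [1], [1], [0]]
```

Running this drives the outputs toward the XOR truth table, the same behavior the FAQ's animated demonstration shows.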

Nanowire-memristor networks emulate brain functions Image of a nanowire network (credit: CRANN) A Trinity College Dublin chemistry professor has been awarded a €2.5 million ($3.2 million) research grant by the European Research Council (ERC) to continue research into nanowire networks. John Boland, director of the CRANN nanoscience institute and a professor in the School of Chemistry, said the research could result in computer networks that mimic the functions of the human brain and vastly improve on current computer capabilities such as facial recognition. Nanowires, made of materials such as copper or silicon, are just a few atoms thick and can be readily engineered into networks. Boland has discovered that exposing a random network of nanowires to stimuli like electricity, light and chemicals generates a chemical reaction at the junctions of the nanowires, corresponding to synapses in the brain. The project combines work on nanowires and memristors, circuit elements that can “remember” the charge that has passed through them.
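To see what "remembering" a charge means, here is a toy simulation of the textbook linear ion-drift memristor model (an idealization after Strukov et al., 2008, with arbitrary constants; it is not a model of Boland's nanowire chemistry):

```python
# Toy linear ion-drift memristor model. The internal state w (0..1)
# integrates the current through the device, so its resistance depends
# on past charge -- the "memory" in memristor.
R_ON, R_OFF = 100.0, 16000.0   # limiting resistances in ohms
K = 1e4                        # drift rate constant (arbitrary for this toy)
dt = 1e-4                      # time step in seconds

w = 0.1                        # initial internal state
for step in range(300):
    v = 1.0 if step < 150 else 0.0               # 15 ms voltage pulse, then rest
    R = R_ON * w + R_OFF * (1.0 - w)             # state-dependent resistance
    i = v / R                                    # Ohm's law
    w = min(max(w + K * R_ON * i * dt, 0.0), 1.0)  # state drifts with charge

# After the pulse ends, no current flows, so w -- and hence the
# resistance -- stays where it was driven: the junction "remembers".
print(f"final w = {w:.3f}, resistance = {R_ON * w + R_OFF * (1.0 - w):.0f} ohms")
```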

Canadian scientists create a functioning, virtual brain Chris Eliasmith has spent years contemplating how to build a brain. He is about to publish a book of instructions, describing the grey matter’s architecture and how its different components interact. So Eliasmith’s team built Spaun, which was billed Thursday as “the world’s largest simulation of a functioning brain.” Spaun can recognize numbers, remember lists and write them down. It even passes some basic aspects of an IQ test, the team reports in the journal Science. Several labs are working on large models of the brain — including the multi-million-dollar Blue Brain Project in Europe — but these can’t see, remember or control limbs, says Eliasmith. “Right now very large-scale models of the brain don’t do anything,” he said in an interview. His Waterloo team took a different approach, using computers to simulate what goes on inside the brain, similar to the way aircraft simulators mimic flight. The clever creation is the first to bridge what Eliasmith calls the “brain-behaviour gap.”

Spaun, the new human brain simulator, can carry out tasks (w/ video) (Phys.org)—One of the challenges of understanding the complex behavior of animals is to relate the behavior to the complex processes occurring within the brain. So far, neural models have not been able to bridge this gap, but a new software model, Spaun, goes some way to addressing this problem. The Semantic Pointer Architecture Unified Network (Spaun) is a computer model of the human brain built by Professor Chris Eliasmith and colleagues of the University of Waterloo in Canada. It comprises around two and a half million virtual neurons organized into functional groups rather like real neurons in regions of the human brain associated with vision, short-term memory, and so on. (The human brain has roughly 100 billion neurons.) Spaun is presented with a sequence of visual images in eight separate tasks. These tasks are simple, but they capture many features of neuroanatomy and physiology, including abilities to perceive, recognize and carry out required behaviors.
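Spaun is built with Nengo, the open-source neural simulator from Eliasmith's group. For a sense of the modeling style, here is a minimal Nengo sketch (a toy two-ensemble model, nowhere near Spaun's two and a half million neurons) in which populations of spiking neurons represent a time-varying signal and compute a function of it:

```python
import numpy as np
import nengo

with nengo.Network(label="toy NEF model") as model:
    # A scripted input signal standing in for sensory input
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))

    # Two populations of spiking neurons, each representing a 1-D value
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)

    # Feed the signal into 'a'; the connection from 'a' to 'b' computes
    # a nonlinear function (here x^2) in the connection weights
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)

    # Record the decoded (filtered) value represented by 'b'
    probe = nengo.Probe(b, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)

print(sim.data[probe][-5:])   # decoded estimate of sin(2*pi*t)**2
```

Spaun wires many such ensembles into functional groups for vision, working memory and motor control, which is what lets it perceive a digit, hold it in mind and write it back down.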
