
Using large-scale brain simulations for machine learning and A.I.

You probably use machine learning technology dozens of times a day without knowing it. It's a way of training computers on real-world data, and it enables high-quality speech recognition, practical computer vision, email spam blocking and even self-driving cars. But it's far from perfect: you've probably chuckled at poorly transcribed text, a bad translation or a misidentified image. We believe machine learning could be far more accurate, and that smarter computers could make everyday tasks much easier, so our research team has been working on some new approaches to large-scale machine learning.

Today's machine learning technology takes significant work to adapt to new uses. For example, say we're trying to build a system that can distinguish between pictures of cars and motorcycles. The standard supervised approach requires a large collection of example images, each one labeled "car" or "motorcycle", and gathering those labels by hand is expensive. Fortunately, recent research on self-taught learning (PDF) and deep learning suggests we might be able to rely instead on unlabeled data, such as random images fetched off the web or out of YouTube videos.
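The self-taught learning idea described above can be sketched in miniature: learn features from unlabeled data by training a small autoencoder to reconstruct its input, then reuse the learned encoder as a feature extractor for a later labeled task. This is an illustrative toy, not the method from the post; the data, layer sizes and learning rate here are all made up, and real systems use far larger networks and datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled "images": 200 random 16-dimensional vectors (stand-ins for
# unlabeled web images; purely synthetic data for illustration).
X = rng.normal(size=(200, 16))

# A tiny one-hidden-layer autoencoder: encode to 4 features, decode back.
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))

def forward(X, W_enc, W_dec):
    H = np.tanh(X @ W_enc)   # learned features (the "code")
    return H, H @ W_dec      # reconstruction of the input

# Reconstruction error before training, for comparison.
_, X_hat0 = forward(X, W_enc, W_dec)
loss0 = np.mean((X_hat0 - X) ** 2)

# Plain gradient descent on squared reconstruction error; no labels used.
lr = 0.01
for _ in range(500):
    H, X_hat = forward(X, W_enc, W_dec)
    err = X_hat - X
    grad_dec = H.T @ err                     # gradient w.r.t. decoder weights
    grad_H = (err @ W_dec.T) * (1 - H ** 2)  # backprop through tanh
    grad_enc = X.T @ grad_H                  # gradient w.r.t. encoder weights
    W_dec -= lr * grad_dec / len(X)
    W_enc -= lr * grad_enc / len(X)

H, X_hat = forward(X, W_enc, W_dec)
final_loss = np.mean((X_hat - X) ** 2)
# After training, reconstruction error has dropped below its initial value;
# the rows of H could now serve as features for a labeled classifier.
```

The point of the sketch is the workflow, not the numbers: the expensive labeled step shrinks because the encoder was trained entirely on unlabeled data.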

http://googleblog.blogspot.com/2012/06/using-large-scale-brain-simulations-for.html


Google scientists find evidence of machine learning Google scientists working in the company's secretive X Labs have made great strides in using computers to simulate the human brain. Best known for inventing self-driving cars and augmented-reality eyewear, the lab created a neural network for machine learning by connecting 16,000 computer processors and then unleashed it on the Internet. Along the way, the network taught itself to recognize cats. While the act of finding cats on the Internet doesn't sound all that challenging, the network's performance exceeded researchers' expectations, doubling its accuracy rate in identifying objects from a list of 20,000 items, according to a New York Times report. To find the cats, the team fed the network thumbnail images chosen at random from more than 10 billion YouTube videos. The results appeared to support biologists' theories that suggest that neurons in the brain are trained to identify specific objects.

Humanoid Robot Learns Language Like a Baby With the help of human instructors, a robot has learned to talk like a human infant, picking up the names of simple shapes and colors. "Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms," wrote computer scientists led by Caroline Lyon of the University of Hertfordshire in a June 13 PLoS ONE study. Named DeeChee, the robot is an iCub, a three-foot-tall open-source humanoid machine designed to resemble a baby.

One Per Cent: Bot with boyish personality wins biggest Turing test Celeste Biever, deputy news editor Eugene Goostman, a chatbot with the personality of a 13-year-old boy, won the biggest Turing test ever staged, on 23 June, the 100th anniversary of the birth of Alan Turing. Held at Bletchley Park near Milton Keynes, UK, where Turing cracked the Nazi Enigma code during the second world war, the test involved over 150 separate conversations, 30 judges (including myself), 25 hidden humans and five elite, chattering software programs. By contrast, the most famous Turing test - the annual Loebner prize, also held at Bletchley Park this year to honour Turing - typically involves just four human judges and four machines. "With 150 Turing tests conducted, this is the biggest Turing test contest ever," says Huma Shah, a researcher at the University of Reading, UK, who organised the mammoth test. That makes the result more statistically significant than any previous Turing test, says Eugene's creator, Vladimir Veselov, who is based in Raritan, New Jersey.

Developing artificial intelligence systems that can interpret images Like many kids, Antonio Torralba began playing around with computers when he was 13 years old. Unlike many of his friends, though, he was not playing video games, but writing his own artificial intelligence (AI) programs. Growing up on the island of Majorca, off the coast of Spain, Torralba spent his teenage years designing simple algorithms to recognize handwritten numbers, or to spot the verb and noun in a sentence. But he was perhaps most proud of a program that could show people how the night sky would look from a particular direction. “Or you could move to another planet, and it would tell you how the stars would look from there,” he says.

AI designs its own video game - tech - 07 March 2012 Video games designed almost entirely by a computer program herald a new wave of AI creativity. Have a go at the game designed especially for New Scientist by the AI Angelina: "Space Station Invaders". It is never going to compete with the latest iteration of Call of Duty, but then Space Station Invaders is not your typical blockbuster video game.

How the Cleverbot Computer Chats Like a Human Last week, an artificial intelligence computer named Cleverbot stunned the world with a stellar performance on the Turing test, an IQ test of sorts for "chatbots," or conversational robots. Cleverbot, it seems, can carry on a conversation as well as any human can. In the Turing test, conceived by British computer scientist Alan Turing in the 1950s, chatbots engage in typed conversations with humans and try to fool them into thinking they're humans, too. (As a control, some users unknowingly chat with humans pretending to be chatbots.)

Virtual robot links body to numbers just like humans - tech - 11 November 2011 Video: See a virtual robot mimic a human baby. A virtual robot has acquired a cognitive wrinkle common in people, further evidence that computers need bodies if they're ever going to think like us. One of the many curious habits of the human brain is that we tend to associate small numbers with the left side of our body and large numbers with our right. Now a virtual robot embedded in a synthetic world has acquired the quirk. This is helping untangle the puzzle of how even highly abstract concepts such as numbers might be rooted in our physical interactions with the world.

Noam Chomsky on Where Artificial Intelligence Went Wrong An extended conversation with the legendary linguist (Graham Gordon Ramsay). If one were to rank a list of civilization's greatest and most elusive intellectual challenges, the problem of "decoding" ourselves -- understanding the inner workings of our minds and our brains, and how the architecture of these elements is encoded in our genome -- would surely be at the top.

The Little Thoughts of Thinking Machines Next: About this document John McCarthy Computer Science Department Stanford University Stanford, CA 94305 jmc@cs.stanford.edu When we interact with computers and other machines, we often use language ordinarily used for talking about people.
