
Using large-scale brain simulations for machine learning and A.I.

You probably use machine learning technology dozens of times a day without knowing it—it’s a way of training computers on real-world data, and it enables high-quality speech recognition, practical computer vision, email spam blocking and even self-driving cars. But it’s far from perfect—you’ve probably chuckled at poorly transcribed text, a bad translation or a misidentified image. We believe machine learning could be far more accurate, and that smarter computers could make everyday tasks much easier. So our research team has been working on some new approaches to large-scale machine learning. Today’s machine learning technology takes significant work to adapt to new uses. Fortunately, recent research on self-taught learning (PDF) and deep learning suggests we might be able to rely instead on unlabeled data—such as random images fetched off the web or out of YouTube videos. We’re reporting on these experiments, led by Quoc Le, at ICML this week.
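The "rely instead on unlabeled data" idea can be sketched compactly: train an autoencoder on unlabeled examples, then reuse the learned encoder as a feature extractor for a supervised task. Below is a minimal numpy sketch, assuming random synthetic data in place of real images; it is illustrative only, not the team's system, which the post describes at far larger scale:

```python
import numpy as np

# Toy sketch of the self-taught learning recipe: learn features from
# unlabeled data with an autoencoder, then reuse the encoder downstream.
# Purely illustrative; the synthetic data stands in for real images.

rng = np.random.default_rng(0)
d, h = 20, 8                                  # input and feature dimensions
X_unlabeled = rng.normal(size=(1000, d))      # stand-in for unlabeled images

W1 = rng.normal(scale=0.1, size=(h, d)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=(d, h)); b2 = np.zeros(d)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.1

for _ in range(200):                          # plain batch gradient descent
    A = sigmoid(X_unlabeled @ W1.T + b1)      # encoder: learned features
    X_hat = A @ W2.T + b2                     # linear decoder: reconstruction
    D2 = (X_hat - X_unlabeled) / len(X_unlabeled)
    D1 = (D2 @ W2) * A * (1 - A)              # backprop through the encoder
    W2 -= lr * D2.T @ A;  b2 -= lr * D2.sum(axis=0)
    W1 -= lr * D1.T @ X_unlabeled;  b1 -= lr * D1.sum(axis=0)

# The learned encoder now maps any input into feature space; a small
# labeled set could train a classifier on these features instead of on
# raw inputs.
features = sigmoid(X_unlabeled[:5] @ W1.T + b1)
print(features.shape)                         # (5, 8)
```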

Google scientists find evidence of machine learning Google scientists working in the company's secretive X Labs have made great strides in using computers to simulate the human brain. Best known for inventing self-driving cars and augmented-reality eyewear, the lab created a neural network for machine learning by connecting 16,000 computer processors and then unleashed it on the Internet. Along the way, the network taught itself to recognize cats. While the act of finding cats on the Internet doesn't sound all that challenging, the network's performance exceeded researchers' expectations, doubling its accuracy rate in identifying objects from a list of 20,000 items, according to a New York Times report. To find the cats, the team fed the network thumbnail images chosen at random from more than 10 billion YouTube videos. "We never told it during the training, 'This is a cat,'" Google fellow Jeff Dean told the newspaper.

Launching Google +1 Recommendations Across the Web UPDATE (7/9/12): After a few productive weeks in platform preview, we're rolling out +1 recommendations to all users today. Thanks for your feedback at Google I/O, in the discussion forums, and on our Google+ page. We're always eager to hear from you -- so keep it coming. Working on +1, we often hear people say they want to see more of what their friends recommend. For example, when I go to the Chrome Web Store and look at +1 recommendations on the Gmail app, I see related apps and recommendations from friends. To keep these recommendations more relevant and on-topic, they will always refer to pages on the same domain or subdomain as the +1 button. If you’ve already added the +1 button to your site, there’s nothing more you need to do. If you want to see recommendations today, sign up for the developer preview group and tell us what you think. Join the conversation on Google+.

UFLDL Tutorial
Description: This tutorial will teach you the main ideas of Unsupervised Feature Learning and Deep Learning. By working through it, you will also get to implement several feature learning/deep learning algorithms, get to see them work for yourself, and learn how to apply/adapt these ideas to new problems. This tutorial assumes a basic knowledge of machine learning (specifically, familiarity with the ideas of supervised learning, logistic regression, gradient descent).
Sections:
- Sparse Autoencoder
- Vectorized implementation
- Preprocessing: PCA and Whitening
- Softmax Regression
- Self-Taught Learning and Unsupervised Feature Learning
- Building Deep Networks for Classification
- Linear Decoders with Autoencoders
- Working with Large Images
Note: The sections above this line are stable.
Miscellaneous:
- Miscellaneous Topics
Advanced Topics:
- Sparse Coding
- ICA Style Models
- Others
Material contributed by: Andrew Ng, Jiquan Ngiam, Chuan Yu Foo, Yifan Mai, Caroline Suen
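As a taste of the material, here is a minimal numpy sketch of one listed topic, Softmax Regression, trained on synthetic data; it is illustrative only, not the tutorial's own starter code, and the data and hyperparameters are arbitrary:

```python
import numpy as np

# Minimal softmax regression on synthetic data: fit class weights by
# gradient descent on the cross-entropy loss.

rng = np.random.default_rng(0)
n, d, k = 300, 5, 3                       # samples, features, classes
X = rng.normal(size=(n, d))
true_W = rng.normal(size=(k, d))
y = np.argmax(X @ true_W.T, axis=1)       # synthetic labels

W = np.zeros((k, d))
b = np.zeros(k)
lr = 0.5

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift logits for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(500):
    probs = softmax(X @ W.T + b)          # (n, k) class probabilities
    onehot = np.eye(k)[y]
    grad = probs - onehot                 # cross-entropy gradient wrt logits
    W -= lr * (grad.T @ X) / n
    b -= lr * grad.mean(axis=0)

acc = (np.argmax(X @ W.T + b, axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```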

Humanoid Robot Learns Language Like a Baby With the help of human instructors, a robot has learned to talk like a human infant, picking up the names of simple shapes and colors. “Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms,” wrote computer scientists led by Caroline Lyon of the University of Hertfordshire in a study published June 13 in the Public Library of Science One. Named DeeChee, the robot is an iCub, a three-foot-tall open source humanoid machine designed to resemble a baby. The similarity isn’t merely aesthetic, but has a functional purpose: many researchers think certain cognitive processes are shaped by the bodies in which they occur. A brain in a vat would think and learn very differently than a brain in a body. This field of study is called embodied cognition, and in DeeChee’s case it applies to learning the building blocks of language, a process that in humans is shaped by an exquisite sensitivity to the frequency of sounds.
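That closing point about frequency sensitivity can be illustrated with a toy model: keep the word forms that recur often enough in teacher speech. This is a hypothetical sketch, not the study's actual model; the utterances and the threshold are made up:

```python
from collections import Counter

# Toy illustration of frequency-sensitive word-form learning: the learner
# simply keeps word forms it hears often enough from its teachers.

teacher_utterances = [
    "look at the red box",
    "the red box",
    "see the green circle",
    "a red box again",
    "where is the circle",
]

FREQUENCY_THRESHOLD = 2  # arbitrary cutoff for a "salient" word form

counts = Counter(word for utt in teacher_utterances for word in utt.split())
lexicon = {w for w, c in counts.items() if c >= FREQUENCY_THRESHOLD}

print(sorted(lexicon))  # frequent forms like 'red', 'box', 'circle' survive
```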

One Per Cent: Bot with boyish personality wins biggest Turing test By Celeste Biever, deputy news editor. Eugene Goostman, a chatbot with the personality of a 13-year-old boy, won the biggest Turing test ever staged, on 23 June, the 100th anniversary of the birth of Alan Turing. Held at Bletchley Park near Milton Keynes, UK, where Turing cracked the Nazi Enigma code during the second world war, the test involved over 150 separate conversations, 30 judges (including myself), 25 hidden humans and five elite, chattering software programs. By contrast, the most famous Turing test - the annual Loebner prize, also held at Bletchley Park this year to honour Turing - typically involves just four human judges and four machines. "With 150 Turing tests conducted, this is the biggest Turing test contest ever," says Huma Shah, a researcher at the University of Reading, UK, who organised the mammoth test. That makes the result more statistically significant than any previous Turing test, says Eugene's creator Vladimir Veselov, based in Raritan, New Jersey.
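The "more statistically significant" claim is a sample-size point: with more test conversations, the uncertainty around the measured deception rate shrinks. A rough back-of-envelope, assuming a hypothetical deception rate of 29% (illustrative only, not a figure reported in this excerpt):

```python
import math

# Standard error of an estimated deception rate p over n Turing-test
# conversations: more conversations give a tighter confidence interval.

p = 0.29                      # assumed deception rate, for illustration
for n in (20, 150):
    se = math.sqrt(p * (1 - p) / n)
    print(f"n={n:3d}: 95% CI is roughly {p:.2f} +/- {1.96 * se:.2f}")
```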

Supporting entrepreneurship in France at Le Camping Entrepreneurs all around the world are building technologies that empower their communities and address both local and global audiences. Last week, a team of Googlers from 10 countries gathered in Paris to spend time with entrepreneurs and startups at Le Camping, an accelerator program that’s part of Silicon Sentier, an association focused on supporting promising digital projects in the Ile de France region. We celebrated the results of the first two seasons of the program and welcomed the new startups for season three. Le Camping’s program selects 12 new startups each season (one season lasts six months). They “camp” in what used to be the facilities of the French Stock Exchange, symbolizing the bridge between the old and the new economy. We’ve already seen great success from the program. This is just one of our efforts to support entrepreneurs in France. We believe that the Internet and entrepreneurship are key drivers of economic development.

Developing artificial intelligence systems that can interpret images Like many kids, Antonio Torralba began playing around with computers when he was 13 years old. Unlike many of his friends, though, he was not playing video games, but writing his own artificial intelligence (AI) programs. Growing up on the island of Majorca, off the coast of Spain, Torralba spent his teenage years designing simple algorithms to recognize handwritten numbers, or to spot the verb and noun in a sentence. Today, Torralba is a tenured associate professor of electrical engineering and computer science at MIT, and an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL), where he develops AI systems that can interpret images to understand what scenes and objects they contain. Torralba first became interested in computer vision while working on his PhD at the University of Grenoble in France. At that time, most computer-vision researchers were occupied with facial detection and recognition, treating the rest of an image almost as a nuisance.

AI designs its own video game - 07 March 2012 Video games designed almost entirely by a computer program herald a new wave of AI creativity. Have a go at the game designed especially for New Scientist by the AI Angelina: "Space Station Invaders". It is never going to compete with the latest iteration of Call of Duty, but then Space Station Invaders is not your typical blockbuster video game. While modern shooters involve hundreds of programmers and cost millions of dollars, this new game is the handiwork of an AI called Angelina. Software that generates video-game artwork, music or even whole levels is not new, but Angelina takes it a step further by creating a simple video game almost entirely from scratch. Angelina creates games using a technique known as cooperative co-evolution: separate populations, or species, of game ingredients, such as level layouts, enemy placements and rule sets, evolve in parallel. It then combines the species and simulates a human playing the game to see which designs lead to the most fun or interesting results. Combining these simple elements can produce surprisingly nuanced effects.
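The cooperative co-evolution loop described above can be sketched abstractly: each species of game ingredient evolves its own population, and a candidate is scored by combining it with representatives of the other species and "playing" the result. Everything concrete below (the genome encoding, the fitness heuristic, the population sizes) is an illustrative assumption, not Angelina's implementation:

```python
import random

# Minimal cooperative co-evolution sketch: three species of game
# ingredients evolve separately; fitness comes from evaluating the
# combined design with a stand-in for simulated play.

random.seed(0)
SPECIES = ("layout", "enemies", "powerups")
POP_SIZE, GENERATIONS = 12, 30

def new_genome():
    return [random.uniform(-1, 1) for _ in range(4)]  # abstract design genes

def mutate(genome):
    return [g + random.gauss(0, 0.1) for g in genome]

def simulated_play(layout, enemies, powerups):
    # Stand-in for simulating a human player: here "fun" peaks when enemy
    # difficulty and power-up help roughly balance out.
    challenge = sum(enemies) - sum(powerups)
    return sum(layout) - abs(challenge)

pops = {s: [new_genome() for _ in range(POP_SIZE)] for s in SPECIES}
best = {s: pop[0] for s, pop in pops.items()}  # current representatives

for _ in range(GENERATIONS):
    for s in SPECIES:
        def fitness(candidate):
            # Combine the candidate with the other species' representatives
            # and score the resulting whole game.
            design = {t: best[t] for t in SPECIES}
            design[s] = candidate
            return simulated_play(design["layout"], design["enemies"],
                                  design["powerups"])
        ranked = sorted(pops[s], key=fitness, reverse=True)
        best[s] = ranked[0]
        survivors = ranked[: POP_SIZE // 2]
        pops[s] = survivors + [mutate(g) for g in survivors]

print({s: [round(g, 2) for g in genome] for s, genome in best.items()})
```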
