Humanoid Robot Learns Language Like a Baby | Wired Science With the help of human instructors, a robot has learned to talk like a human infant, learning the names of simple shapes and colors. “Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms,” wrote computer scientists led by Caroline Lyon of the University of Hertfordshire in a study published June 13 in PLoS ONE. Named DeeChee, the robot is an iCub, a three-foot-tall open-source humanoid machine designed to resemble a baby. The similarity isn’t merely aesthetic but serves a functional purpose: many researchers think certain cognitive processes are shaped by the bodies in which they occur. A brain in a vat would think and learn very differently than a brain in a body. This field of study is called embodied cognition, and in DeeChee’s case it applies to learning the building blocks of language, a process that in humans is shaped by an exquisite sensitivity to the frequency of sounds.
One Per Cent: Bot with boyish personality wins biggest Turing test Celeste Biever, deputy news editor Eugene Goostman, a chatbot with the personality of a 13-year-old boy, won the biggest Turing test ever staged, on 23 June, the 100th anniversary of the birth of Alan Turing. Held at Bletchley Park near Milton Keynes, UK, where Turing cracked the Nazi Enigma code during the second world war, the test involved over 150 separate conversations, 30 judges (including myself), 25 hidden humans and five elite, chattering software programs. By contrast, the most famous Turing test contest, the annual Loebner prize (also held at Bletchley Park this year to honour Turing), typically involves just four human judges and four machines. "With 150 Turing tests conducted, this is the biggest Turing test contest ever," says Huma Shah, a researcher at the University of Reading, UK, who organised the mammoth test. That makes the result more statistically significant than any previous Turing test, says Eugene's creator Vladimir Veselov, who is based in Raritan, New Jersey.
Google scientists find evidence of machine learning | Cutting Edge Google scientists working in the company's secretive X Labs have made great strides in using computers to simulate the human brain. Best known for developing self-driving cars and augmented-reality eyewear, the lab created a neural network for machine learning by connecting 16,000 computer processors and then unleashed it on the Internet. Along the way, the network taught itself to recognize cats. While the act of finding cats on the Internet doesn't sound all that challenging, the network's performance exceeded researchers' expectations, doubling its accuracy rate in identifying objects from a list of 20,000 items, according to a New York Times report. To find the cats, the team fed the network thumbnail images chosen at random from more than 10 million YouTube videos. "We never told it during the training, 'This is a cat,'" Google fellow Jeff Dean told the newspaper.
Using large-scale brain simulations for machine learning and A.I. You probably use machine learning technology dozens of times a day without knowing it—it’s a way of training computers on real-world data, and it enables high-quality speech recognition, practical computer vision, email spam blocking and even self-driving cars. But it’s far from perfect—you’ve probably chuckled at poorly transcribed text, a bad translation or a misidentified image. We believe machine learning could be far more accurate, and that smarter computers could make everyday tasks much easier. So our research team has been working on some new approaches to large-scale machine learning. Today’s machine learning technology takes significant work to adapt to new uses. Fortunately, recent research on self-taught learning and deep learning suggests we might be able to rely instead on unlabeled data—such as random images fetched off the web or out of YouTube videos. We’re reporting on these experiments, led by Quoc Le, at ICML this week.
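The idea of learning useful structure from unlabeled data can be made concrete with a toy example. The sketch below trains a tiny one-hidden-layer autoencoder in numpy to reconstruct unlabeled input vectors; the network sizes, learning rate, and random "images" are all illustrative assumptions and have nothing to do with Google's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled "images": 200 random 8x8 patches, flattened to 64-d vectors.
X = rng.random((200, 64))

n_hidden = 16   # size of the learned feature layer (an assumption)
lr = 0.1        # learning rate (an assumption)

# One-hidden-layer autoencoder: encode 64 -> 16 features, decode 16 -> 64.
W1 = rng.normal(0.0, 0.1, (64, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, 64))
b2 = np.zeros(64)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    # Forward pass: compress the inputs, then try to reconstruct them.
    H = sigmoid(X @ W1 + b1)      # learned features
    X_hat = H @ W2 + b2           # reconstruction
    err = X_hat - X               # reconstruction error (the only "signal")

    # Backward pass: gradients of the mean squared reconstruction error.
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1.0 - H)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

mse = float(np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - X) ** 2))
print(f"reconstruction MSE after training: {mse:.4f}")
```

Because the only training signal is reconstruction error on the inputs themselves, no labels are ever needed; the hidden layer is forced to discover a compressed representation of the data.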
AI designs its own video game - tech - 07 March 2012 Video games designed almost entirely by a computer program herald a new wave of AI creativity. Have a go at the game designed especially for New Scientist by the AI Angelina: "Space Station Invaders". IT IS never going to compete with the latest iteration of Call of Duty, but then Space Station Invaders is not your typical blockbuster video game. While modern shooters involve hundreds of programmers and cost millions of dollars, this new game is the handiwork of an AI called Angelina. Software that generates video-game artwork, music or even whole levels is not new, but Angelina takes it a step further by creating a simple video game almost entirely from scratch. Angelina creates games using a technique known as cooperative co-evolution, in which separate "species" of game components, such as level maps, layouts of items and enemies, and rule sets, evolve independently. It then combines the species and simulates a human playing the game to see which designs lead to the most fun or interesting results. Combining these simple elements can produce surprisingly nuanced effects.
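As a heavily simplified illustration of cooperative co-evolution, the sketch below evolves two toy "species" (level layouts and enemy placements), each scored only in combination with the best member of the other. The fitness function, targets, and parameters are invented for illustration and are not Angelina's actual design.

```python
import random

random.seed(1)

TARGET_GAPS = 4     # assumption: a "fun" level has this many platform gaps
TARGET_ENEMIES = 3  # assumption: and this many enemies

def fitness(layout, enemies):
    # Stand-in for "simulating a human playing the game": score how close
    # the combined design comes to the assumed ideal (0 is best).
    return -abs(sum(layout) - TARGET_GAPS) - abs(sum(enemies) - TARGET_ENEMIES)

def evolve(population, partner_best, score):
    # Rank each individual by how well it works WITH the other species'
    # best member, keep the top half, and refill with mutated copies.
    ranked = sorted(population, key=lambda ind: score(ind, partner_best), reverse=True)
    survivors = ranked[: len(population) // 2]
    children = [[bit ^ (random.random() < 0.1) for bit in ind] for ind in survivors]
    return survivors + children

layouts = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
enemies = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]

for generation in range(50):
    layouts = evolve(layouts, enemies[0], lambda l, e: fitness(l, e))
    enemies = evolve(enemies, layouts[0], lambda e, l: fitness(l, e))

best_score = fitness(layouts[0], enemies[0])
print("best combined score:", best_score)
```

Neither species is ever scored on its own; designs are only evaluated as combinations, which is what lets simple parts co-adapt into surprisingly nuanced wholes.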
Artificial Intelligence: Ramon Llull and the Ars Magna Although the term artificial intelligence was coined by the late John McCarthy, creator of the Lisp programming language, together with Marvin Minsky and Claude Shannon in 1956 during the Dartmouth Conference, since antiquity humans have been tracing the path toward "intelligent machines" and "thinking machines". The first steps of this search can be found in Ancient Greece: Aristotle attempted to describe the rational workings of the mind, and Ctesibius of Alexandria built an automatic machine that regulated the flow of water. The next milestone comes in the dark Middle Ages, the period that produced one of the most curious works (and one well ahead of its time) thanks to Ramon Llull and his work titled Ars Magna.
Noam Chomsky on Where Artificial Intelligence Went Wrong An extended conversation with the legendary linguist If one were to rank a list of civilization's greatest and most elusive intellectual challenges, the problem of "decoding" ourselves -- understanding the inner workings of our minds and our brains, and how the architecture of these elements is encoded in our genome -- would surely be at the top. Yet the diverse fields that took on this challenge, from philosophy and psychology to computer science and neuroscience, have been fraught with disagreement about the right approach. In 1956, the computer scientist John McCarthy coined the term "Artificial Intelligence" (AI) to describe the study of intelligence by implementing its essential features on a computer. Some of McCarthy's colleagues in neighboring departments, however, were more interested in how intelligence is implemented in humans (and other animals) first. Noam Chomsky, speaking at a symposium on the state of the field, wasn't so enthused. I want to start with a very basic question.
I, Robopsychologist, Part 2: Where Human Brains Far Surpass Computers | The Crux Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski. Before you read this post, please see “I, Robopsychologist, Part 1: Why Robots Need Psychologists.” A current trend in AI research involves attempts to replicate a human learning system at the neuronal level: beginning with a single functioning synapse, then an entire neuron, with the ultimate goal of a complete replication of the human brain. We are quite some ways off from building something structurally similar to the human brain, and even further from building one that actually thinks like a human brain. If we’re trying to create AI that mimics humans, both in behavior and learning, then we need to consider how humans actually learn, and specifically how they learn best, when teaching them. Why Is a Human-like Brain So Desirable?
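To make the "single synapse, then an entire neuron" starting point concrete, here is a minimal leaky integrate-and-fire neuron, one of the simplest standard neuron models. The constants are generic textbook-style values, not taken from any particular research project.

```python
def simulate(input_current, steps=1000, dt=0.1):
    """Integrate membrane voltage over time; emit a spike at threshold."""
    v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # mV (assumed values)
    tau = 10.0                                       # membrane time constant, ms
    resistance = 10.0                                # input resistance (assumed)
    v = v_rest
    spikes = 0
    for _ in range(steps):
        # Voltage leaks back toward rest, driven up by synaptic input current.
        dv = (-(v - v_rest) + resistance * input_current) / tau
        v += dv * dt
        if v >= v_thresh:  # threshold crossed: fire a spike and reset
            spikes += 1
            v = v_reset
    return spikes

# A weak input never reaches threshold; a stronger one fires repeatedly.
print(simulate(1.0), simulate(2.0))
```

Feeding the model neuron a stronger synaptic current makes it spike more often, the basic input-output behavior that whole-brain replication efforts would need to reproduce billions of times over.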
I, Robopsychologist, Part 1: Why Robots Need Psychologists | The Crux Andrea Kuszewski is a behavior therapist and consultant, science writer, and robopsychologist at Syntience in San Francisco. She is interested in creativity, intelligence, and learning, in both humans and machines. Find her on Twitter at @AndreaKuszewski. “My brain is not like a computer.” The day those words were spoken to me marked a significant milestone for both me and the 6-year-old who uttered them. I began my career as a behavior therapist, treating children on the autism spectrum. David's mom had described his brain as being like a computer; David heard her telling me this, and the idea quickly became one of his favorite memes. My job was to change this. In the course of therapy, David opened himself up to new ways of looking at problems and deriving solutions. He was no longer operating on a pure input-output or match-to-sample framework; he was learning how to think. The time I spent making humans “less like robots” made me start thinking about how this learning paradigm could be applied to actual robots and thinking machines.