
Human cues used to improve computer user-friendliness

Lijun Yin wants computers to understand inputs from humans that go beyond the traditional keyboard and mouse. "Our research in computer graphics and computer vision tries to make using computers easier," says the Binghamton University computer scientist. "Can we find a more comfortable, intuitive and intelligent way to use the computer? It should feel like you're talking to a friend. This could also help disabled people use computers the way everyone else does." Yin's team has developed ways to provide information to the computer based on where a user is looking, as well as through gestures or speech. To some extent, that's already possible. Yin says the next step would be enabling the computer to recognize a user's emotional state. "Computers only understand zeroes and ones," Yin says. He is partnering with Binghamton University psychologist Peter Gerhardstein to explore ways this work could benefit children with autism. Imagine if a computer could understand when people are in pain.
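Gaze-based input like Yin describes is commonly implemented with dwell-time selection: a fixation that stays within a small radius for long enough is treated as a click. The sketch below illustrates that general technique only; the function name, radius, and timing thresholds are illustrative assumptions, not details of Yin's system.

```python
import math

def detect_dwell(gaze_points, radius=30.0, dwell_time=1.0):
    """Return the sample index at which a dwell 'click' fires, or None.

    gaze_points: list of (timestamp_seconds, x, y) gaze samples.
    A click fires when the gaze stays within `radius` pixels of the
    start of the current window for at least `dwell_time` seconds.
    """
    start = 0
    for i, (t, x, y) in enumerate(gaze_points):
        t0, x0, y0 = gaze_points[start]
        if math.hypot(x - x0, y - y0) > radius:
            start = i          # gaze moved away: restart the dwell window
        elif t - t0 >= dwell_time:
            return i           # held steady long enough: treat as a click
    return None

# Steady fixation near (100, 100), sampled at 10 Hz for 1.2 s:
samples = [(0.1 * k, 100 + (k % 2), 100) for k in range(13)]
print(detect_dwell(samples))   # fires at the sample where 1.0 s has elapsed
```

Dwell selection is popular in accessibility interfaces precisely because it needs no hands at all, which matches Yin's goal of letting disabled users operate a computer the way everyone else does.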
Slashdot: Artificial intelligence creeps nearer via bee algorithms and crowdsourcing

Yet crowdsourcing can be extremely effective, as MIT's Riley Crane showed in answering DARPA's challenge to find 10 weather balloons moored around the US. The MIT team used social networks and a pyramid of financial incentives to recruit volunteers, their friends, and their friends of friends to report sightings, and won by finding all 10 within nine hours. "Not all hard problems can be solved by aggregation," he said. Unlike movie recommendations or Google Instant, problems like the balloon challenge require "coordination or collaboration". "This is a toy problem," he said, "but it's still starting to show some of the possibilities of what we're going to be able to do in future." Other interesting approaches included the MIT Media Lab's Alexander Wissner-Gross, who argues that if a planet-scale superhuman intelligence emerges, it will most likely come from either the quantitative finance or advertising industries. Exactly how much AI should resemble humans is a long-running debate.
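The "pyramid of financial incentives" was a recursive payout: the MIT team reportedly offered a fixed sum per balloon to its finder, half that to whoever recruited the finder, half again to the next person up the referral chain, and so on. A minimal sketch of that rule, assuming the widely reported $2,000 base and halving scheme (the function and names are illustrative):

```python
def referral_payouts(chain, base=2000.0):
    """Payouts for one found balloon.

    chain: names from the finder up the referral chain,
    e.g. ["finder", "recruiter", "grand_recruiter"].
    The finder gets `base`; each step up the chain gets
    half of the amount paid one step below.
    """
    payouts = {}
    amount = base
    for person in chain:
        payouts[person] = amount
        amount /= 2.0
    return payouts

print(referral_payouts(["Alice", "Bob", "Carol"]))
# Alice (the finder) gets 2000.0, Bob 1000.0, Carol 500.0
```

The halving is what makes the scheme affordable: because the payouts form a geometric series, the total per balloon stays below twice the base no matter how long the referral chain grows, while still giving everyone in the chain a reason to recruit.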
Programmed DNA Robot Goes Where Scientists Tell It (LiveScience)

A tiny robot made from strands of DNA could pave the way for mini-machines that can dive into the human body to perform surgeries, among other futuristic applications. While DNA-based robots have been made before, this latest real-life micromachine is the first one that researchers have successfully programmed to follow instructions on where to move. Once assembled, the robot can take multiple steps without any outside help, according to lead researcher Andrew Turberfield, a professor at the University of Oxford. "Turberfield's group has figured out a beautiful way to automate the movement of a strand of DNA along a track," said William Sherman, an associate scientist at Brookhaven National Laboratory, who was not involved in the study. When thinking about robots, many of us picture humanlike machines with metal moving parts, like Rosie from "The Jetsons." Enter the DNA molecule.
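The control flow described, a walker whose route along a branched track is fixed in advance and then executed with no outside help, can be pictured with a simple simulation. This is a loose analogy only: none of the names, the track layout, or the "L"/"R" instruction encoding come from the paper.

```python
def run_walker(track, program):
    """Follow a branched track under a pre-loaded program.

    track: dict mapping a site name to its branch options,
           e.g. {"start": {"L": "a1", "R": "b1"}}.
    program: sequence of branch choices ("L"/"R") fixed before the
             walker starts; once running it takes every step on its
             own, mirroring the autonomous DNA walker.
    Returns the list of sites visited.
    """
    site, visited = "start", ["start"]
    for choice in program:
        branches = track.get(site, {})
        if choice not in branches:
            break                    # no site in that direction: walker stalls
        site = branches[choice]
        visited.append(site)
    return visited

track = {
    "start": {"L": "a1", "R": "b1"},
    "a1": {"L": "a2", "R": "m"},
    "b1": {"L": "m", "R": "b2"},
}
print(run_walker(track, ["L", "R"]))   # ['start', 'a1', 'm']
```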
Basic Questions (from "What is Artificial Intelligence?")

Q. What is artificial intelligence?
A. It is the science and engineering of making intelligent machines, especially intelligent computer programs.

However, some of the problems on IQ tests are useful challenges for AI. Whether or not Arthur R. Jensen is right about human intelligence, the situation in AI today is the reverse: computer programs have plenty of speed and memory, but their abilities correspond to the intellectual mechanisms that program designers understand well enough to put in programs. Whenever people do better than computers on some task, or computers use a lot of computation to do as well as people, this demonstrates that the program designers lack understanding of the intellectual mechanisms required to do the task efficiently. The Turing test is a one-sided test.
Researchers Give Robots the Capability for Deceptive Behavior

Georgia Tech Regents professor Ronald Arkin (left) and research engineer Alan Wagner look on as the black robot deceives the red robot into thinking it is hiding down the left corridor. (Credit: Gary Meek) A robot deceives an enemy soldier by creating a false trail and hiding so that it will not be caught. "We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine, and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered," said Ronald Arkin, a Regents professor in the Georgia Tech School of Interactive Computing. The results of robot experiments and theoretical and cognitive deception modeling were published online on September 3 in the International Journal of Social Robotics. To test their algorithms, the researchers ran 20 hide-and-seek experiments with two autonomous robots.
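The two-part structure Arkin describes, first deciding whether deception is warranted at all and then picking the deceptive action, can be sketched in miniature. Everything below is an illustrative assumption (function names, thresholds, and the plausibility scores are invented), not the authors' published code; it simply mirrors the hide-and-seek setup where a false trail sends the seeker down the wrong corridor.

```python
def should_deceive(conflict, dependence, threshold=0.5):
    """Consider deception only when the situation involves both
    conflict between the agents and dependence of one on the other,
    roughly the precondition in the researchers' model."""
    return conflict > threshold and dependence > threshold

def pick_false_trail(hiding_spot, corridors):
    """Lay a false trail down some corridor other than the one
    actually used, choosing the decoy the seeker is most likely
    to believe, to reduce the chance of being discovered."""
    decoys = [c for c in corridors if c != hiding_spot]
    return max(decoys, key=lambda c: corridors[c])

# Hypothetical plausibility scores for each corridor as a decoy:
corridors = {"left": 0.9, "middle": 0.4, "right": 0.7}
if should_deceive(conflict=0.8, dependence=0.9):
    print(pick_false_trail("right", corridors))  # lays the trail down 'left'
```

Splitting the decision ("should I deceive?") from the strategy ("which false trail?") matches the quote above: the first is an ethical/strategic gate, the second an optimization over the other agent's likely beliefs.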
iCub RobotCub ~ Official Site