
Kinect - fingertip detection. Detecting biosignals with the Emotiv EPOC headset: a review. Knowledge-based prehension: capturing human dexterity.

Neural Networks: A computational model for recognizing objects and planning hand shapes in grasping movements.

Abstract: To execute grasping movements, the primate brain must solve at least two computational problems (i.e. recognition of objects and planning of prehensile hand shapes).

From the viewpoint of computational theory, we hypothesize that the two problems are not solved separately in the brain; instead, they are merged and transformed into the problem of forming an integrated internal representation of visual and motor information. To demonstrate the computational potential of our hypothesis, we propose a neural network model that integrates visual and motor information for preshaping the hand in grasping movements. Network operation is divided into a learning phase and an optimization phase. In the learning phase, an internal model representing the relation between visual and motor information about grasped objects is acquired by integrating the two sources of information.
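
The abstract gives no implementation details, but the two-phase idea can be sketched roughly as follows. Everything in this sketch (feature sizes, network shape, training loop) is an illustrative assumption, not the authors' model: a joint autoencoder stands in for the internal model acquired in the learning phase, and the optimization phase recovers a hand preshape from visual input alone.

```python
# Illustrative sketch only: learn a joint internal model of visual and motor
# features, then recover a hand preshape from visual input by optimization.
import torch
import torch.nn as nn

VIS_DIM, MOT_DIM = 8, 5    # assumed feature sizes (object description vs. finger joints)

class JointModel(nn.Module):
    """Autoencoder over concatenated visual + motor features (the 'internal model')."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(VIS_DIM + MOT_DIM, 16), nn.Tanh(), nn.Linear(16, 6))
        self.dec = nn.Sequential(nn.Linear(6, 16), nn.Tanh(), nn.Linear(16, VIS_DIM + MOT_DIM))

    def forward(self, x):
        return self.dec(self.enc(x))

def learn(model, vis, mot, steps=2000):
    """Learning phase: fit the internal model on paired visual/motor examples."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    data = torch.cat([vis, mot], dim=1)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(data) - data) ** 2).mean()
        loss.backward()
        opt.step()

def preshape(model, vis_obs, steps=500):
    """Optimization phase: given only visual input, search for motor features
    that are consistent with the learned internal model."""
    mot = torch.zeros(1, MOT_DIM, requires_grad=True)
    opt = torch.optim.Adam([mot], lr=1e-1)
    for _ in range(steps):
        opt.zero_grad()
        recon = model(torch.cat([vis_obs, mot], dim=1))
        # the visual part must match the observation; the motor part must be self-consistent
        loss = ((recon[:, :VIS_DIM] - vis_obs) ** 2).mean() + ((recon[:, VIS_DIM:] - mot) ** 2).mean()
        loss.backward()
        opt.step()
    return mot.detach()

vis, mot = torch.randn(64, VIS_DIM), torch.randn(64, MOT_DIM)
model = JointModel()
learn(model, vis, mot)
print(preshape(model, vis[:1]))   # a candidate hand preshape for the first object
```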

Toward automatic robot instruction from perception - recognizing a grasp from observation.

This paper deals with the programming of robots to perform grasping tasks. To do this, the assembly plan from observation (APO) paradigm is adopted, where the key idea is to enable a system to observe a human performing a grasping task, understand it, and perform the task with minimal human intervention.

A grasping task is composed of three phases: the pregrasp phase, the static grasp phase, and the manipulation phase. The first step in recognizing a grasping task is identifying the grasp itself. The proposed strategy for identifying the grasp is to map the low-level hand configuration to increasingly abstract grasp descriptions.
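
The mapping itself is not spelled out in this summary; as a loose illustration of the idea (the joint-angle thresholds and grasp labels below are invented for the example, not taken from the paper), a first abstraction step from a raw hand configuration to a coarse grasp description might look like this:

```python
# Illustrative only: map a low-level hand configuration (finger flexion angles
# in radians plus thumb opposition) to a coarser grasp description.
# Thresholds and labels are assumptions made for the example.
from dataclasses import dataclass

@dataclass
class HandConfig:
    finger_flexion: list[float]   # one flexion value per finger, 0 = fully open
    thumb_opposition: float       # 0 = thumb alongside palm, 1 = fully opposed

def describe_grasp(cfg: HandConfig) -> str:
    avg_flex = sum(cfg.finger_flexion) / len(cfg.finger_flexion)
    if avg_flex < 0.3:
        return "open hand (pregrasp)"
    if cfg.thumb_opposition > 0.7 and avg_flex < 1.0:
        return "precision grasp"      # object held between fingertips and thumb pad
    return "power grasp"              # fingers wrapped around the object

print(describe_grasp(HandConfig([1.4, 1.5, 1.3, 1.5], thumb_opposition=0.2)))  # power grasp
```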

To achieve the mapping, a grasp representation called the contact web is introduced: a pattern of effective contact points between the hand and the object.
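
The contact web is described only at this level of detail here; as a rough illustration (the data layout and field names below are assumptions, not the paper's formulation), it can be thought of as a set of labeled contact points expressed in the object's coordinate frame:

```python
# Illustrative data structure for a "contact web": effective contact points
# between parts of the hand and the object. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class ContactPoint:
    hand_part: str                         # e.g. "thumb_tip", "index_tip", "palm"
    position: tuple[float, float, float]   # contact location in the object frame (m)

ContactWeb = list[ContactPoint]

def is_fingertip_grasp(web: ContactWeb) -> bool:
    """Crude test: if every contact comes from a fingertip, a precision grasp is likely."""
    return all(p.hand_part.endswith("_tip") for p in web)

web = [ContactPoint("thumb_tip", (0.02, 0.00, 0.03)),
       ContactPoint("index_tip", (-0.02, 0.00, 0.03))]
print(is_fingertip_grasp(web))   # True
```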

Personal Robotics: Human Activity Detection.

Being able to detect and recognize human activities is essential for several applications, including smart homes and personal assistive robotics. In this paper, we perform detection and recognition of unstructured human activity in unstructured environments. We use an RGBD sensor (Microsoft Kinect) as the input sensor and compute a set of features based on human pose and motion, as well as on image and point-cloud information.

Popular Press: E&T Magazine, Phys.org, R&D Magazine, Gizmag, GizmoWatch, myScience, WonderHowTo, Geekosystem.

Data/Code: Download the Cornell Activity Datasets and code. Results: Check out the latest results on the Cornell Activity Dataset-60 (CAD-60).
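
The exact feature set is defined in the publications listed below; as a loose illustration (the joints used and the feature choices are assumptions made for this example, not the authors' features), pose and motion features from Kinect-style skeleton tracking might be computed like this:

```python
# Illustrative pose and motion features from Kinect-style skeleton tracking.
# Joint names and feature choices are assumptions, not the published feature set.
import numpy as np

JOINTS = ["head", "torso", "left_hand", "right_hand", "left_foot", "right_foot"]

def pose_features(frame):
    """frame: dict mapping joint name -> (x, y, z) position in meters."""
    torso = np.array(frame["torso"])
    # positions relative to the torso make the features body-centric
    return np.concatenate([np.array(frame[j]) - torso for j in JOINTS if j != "torso"])

def motion_features(prev_frame, frame, dt=1 / 30):
    """Per-joint velocities between consecutive frames (the Kinect runs at ~30 fps)."""
    return (pose_features(frame) - pose_features(prev_frame)) / dt

f0 = {j: (0.0, 0.0, 0.0) for j in JOINTS}
f1 = {**f0, "right_hand": (0.05, 0.30, 0.10)}   # the right hand moved up and forward
feature_vector = np.concatenate([pose_features(f1), motion_features(f0, f1)])
print(feature_vector.shape)   # (30,): 15 pose dimensions + 15 motion dimensions
```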

Publications: Learning Human Activities and Object Affordances from RGB-D Videos, Hema S. Koppula, Rudhir Gupta, Ashutosh Saxena; Unstructured Human Activity Detection from RGBD Images, Jaeyong Sung, Colin Ponce, Bart Selman, Ashutosh Saxena.

Teaching robots to copy human movement.

Fraunhofer researchers have developed a robot input device that uses inertial sensors to detect movements in free space (Image: Fraunhofer). Having two arms doesn't make you a juggler.

The same principle applies in robotics, where even the most dexterous of bots must be programmed to move according to a particular task. Input systems based on laser tracking are used in industrial robotics to achieve this, but Fraunhofer researchers are looking to streamline the process significantly with a device that uses inertial sensors to track movements in free space. In other words, you can teach a robot new tricks just by showing it the required action. The key to the system is its ability to analyze how the sensors on the input device interact.

"We have developed special algorithms that fuse the data of individual sensors and identify a pattern of movement," says project leader Bernhard Kleiner of the Fraunhofer Institute for Manufacturing Engineering and Automation IPA in Stuttgart. About the Author. Teaching robots to identify human activities.

Teaching robots to identify human activities.

(PhysOrg.com) -- If we someday live in "smart houses" or have personal robots to help around the home and office, they will need to be aware of what humans are doing. You don't remind grandpa to take his arthritis pills if you already saw him taking them -- and robots need the same insight. So Cornell researchers are programming robots to identify human activities by observation. Their most recent work will be described at the 25th Conference on Artificial Intelligence in San Francisco, in an Aug. 7 workshop on "plan, activity and intent recognition."

" Ashutosh Saxena, assistant professor of computer science, and his research team report that they have trained a robot to recognize 12 different human activities, including brushing teeth, drinking water, relaxing on a couch and working on a computer. The work is part of Saxena's overall research on personal robotics. Others have tried to teach robots to identify human activities, the researchers note, using video cameras.