Kinect-eads

Kinect - fingertip detection

Detecting biosignals with the Emotiv EPOC headset: a review

Knowledge-based prehension: capturing human dexterity

A major question facing the development of sophisticated robotics systems is how to capture the functionality seen in versatile living systems. An approach that has proven useful in designing complex systems is to capture the explicit constraints in a knowledge-based system.

A knowledge-based planning system under development is reported which attempts to capture the versatility of human prehension.
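
The notion of capturing explicit constraints can be made concrete with a toy rule base. The sketch below is a hypothetical Python illustration: the object attributes, thresholds, and grasp names are assumptions, not the reported planner's knowledge base. It simply maps coarse object and task properties to a candidate grasp class.

```python
# Toy knowledge-based grasp selection. Attributes, thresholds, and grasp
# names are illustrative assumptions, not the planner described above.

def select_grasp(obj):
    """Map coarse object/task attributes to a candidate grasp class."""
    width = obj["width_cm"]
    weight = obj["weight_kg"]
    precise = obj.get("needs_precision", False)

    if precise and width <= 3.0 and weight <= 0.5:
        return "precision pinch"   # fingertip opposition for small, light objects
    if width <= 8.0 and weight <= 2.0:
        return "power grasp"       # palm opposition for bulkier or heavier objects
    return "two-handed grasp"      # fall back when one hand is not enough

if __name__ == "__main__":
    print(select_grasp({"width_cm": 2.0, "weight_kg": 0.1, "needs_precision": True}))
    print(select_grasp({"width_cm": 6.5, "weight_kg": 1.2}))
```

A real prehension knowledge base would encode far more constraints (task forces, surface properties, hand kinematics), but the pattern of explicit, inspectable rules is the same.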

Neural Networks: A computational model for recognizing objects and planning hand shapes in grasping movements

To execute grasping movements, the primate brain must solve at least two computational problems: recognition of objects and planning of prehensile hand shapes. From the viewpoint of computational theory, we hypothesize that the two problems are not solved separately in the brain; instead, they are merged and transformed into the problem of forming an integrated internal representation of visual information and motor information. To demonstrate the computational potential of our hypothesis, we propose a neural network model that integrates visual information and motor information for preshaping a hand in grasping movements. Network operation is divided into a learning phase and an optimization phase. In the learning phase, an internal model that represents the relation between the visual and motor information on grasped objects is acquired by integrating the two sources of information.
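
The two-phase operation can be sketched with a small autoencoder-style network: a learning phase that fits one integrated representation of paired visual and motor vectors, and an optimization phase that, given only visual input, searches for motor parameters consistent with the learned relation. The PyTorch code below is a minimal illustration under assumed feature sizes, architecture, and training details; it is not the published model.

```python
# Minimal two-phase sketch in PyTorch. Feature sizes, architecture, and
# training details are illustrative assumptions, not the published network.
import torch
import torch.nn as nn

VIS, MOT, HID = 16, 8, 6          # assumed visual, motor, and hidden sizes

# One network over the concatenated (visual, motor) vector: its bottleneck
# acts as the integrated internal representation of both information sources.
model = nn.Sequential(
    nn.Linear(VIS + MOT, HID), nn.Tanh(),
    nn.Linear(HID, VIS + MOT),
)

def learning_phase(pairs, epochs=200):
    """Acquire the internal model from observed (visual, motor) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(model(pairs), pairs)
        opt.zero_grad()
        loss.backward()
        opt.step()

def optimization_phase(visual, steps=300):
    """Given visual input only, search for motor parameters (a hand preshape)
    consistent with the learned visual-motor relation."""
    for p in model.parameters():
        p.requires_grad_(False)
    motor = torch.zeros(1, MOT, requires_grad=True)
    opt = torch.optim.Adam([motor], lr=5e-2)
    for _ in range(steps):
        x = torch.cat([visual, motor], dim=1)
        loss = nn.functional.mse_loss(model(x), x)   # self-consistency of the pair
        opt.zero_grad()
        loss.backward()
        opt.step()
    return motor.detach()

if __name__ == "__main__":
    grasp_data = torch.randn(64, VIS + MOT)          # stand-in for observed grasps
    learning_phase(grasp_data)
    print(optimization_phase(torch.randn(1, VIS)))
```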

Toward automatic robot instruction from perception: recognizing a grasp from observation

This work deals with the programming of robots to perform grasping tasks. To do this, the assembly plan from observation (APO) paradigm is adopted, where the key idea is to enable a system to observe a human performing a grasping task, understand it, and perform the task with minimal human intervention. A grasping task is composed of three phases: the pregrasp phase, the static grasp phase, and the manipulation phase. The first step in recognizing a grasping task is identifying the grasp itself.
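
As a rough illustration of that phase structure, the sketch below labels each observed frame as pregrasp, static grasp, or manipulation from two hypothetical per-frame measurements (hand-object distance and object speed). The measurements and thresholds are assumptions, not the APO system's actual segmentation rules.

```python
# Rough phase segmentation of an observed grasping sequence. The per-frame
# measurements and thresholds are illustrative assumptions.

def label_phases(frames, contact_dist=0.02, move_thresh=0.01):
    """frames: dicts with hand-object distance (m) and object speed (m/s)."""
    labels = []
    for f in frames:
        if f["hand_object_dist"] > contact_dist:
            labels.append("pregrasp")        # hand approaching, no contact yet
        elif f["object_speed"] < move_thresh:
            labels.append("static grasp")    # contact established, object still
        else:
            labels.append("manipulation")    # object being moved by the hand
    return labels

if __name__ == "__main__":
    demo = [
        {"hand_object_dist": 0.20, "object_speed": 0.00},
        {"hand_object_dist": 0.01, "object_speed": 0.00},
        {"hand_object_dist": 0.01, "object_speed": 0.05},
    ]
    print(label_phases(demo))   # ['pregrasp', 'static grasp', 'manipulation']
```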

Personal Robotics: Human Activity Detection

Being able to detect and recognize human activities is essential for several applications, including smart homes and personal assistive robotics. In this paper, we perform detection and recognition of unstructured human activity in unstructured environments. We use an RGBD sensor (Microsoft Kinect) as the input sensor and compute a set of features based on human pose and motion, as well as on image and point-cloud information.
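
A minimal sketch of such pose- and motion-based features is shown below; the joint set, frame layout, and feature definitions are illustrative assumptions (the full system also uses image and point-cloud cues).

```python
# Sketch of pose/motion features from Kinect skeleton tracking. Joint names,
# frame layout, and the feature set are illustrative assumptions.
import numpy as np

JOINTS = ["torso", "head", "left_hand", "right_hand", "left_foot", "right_foot"]

def pose_motion_features(prev_frame, frame):
    """frame: dict mapping joint name -> (x, y, z) position in meters."""
    torso = np.array(frame["torso"])
    feats = []
    for j in JOINTS:
        p = np.array(frame[j])
        feats.extend(p - torso)                                     # pose: joint relative to torso
        feats.append(np.linalg.norm(p - np.array(prev_frame[j])))   # motion: per-frame displacement
    return np.array(feats)

if __name__ == "__main__":
    f0 = {j: (0.0, 0.0, 0.0) for j in JOINTS}
    f1 = {j: (0.1, 0.0, 0.0) for j in JOINTS}
    print(pose_motion_features(f0, f1).shape)   # 6 joints * (3 pose + 1 motion) = (24,)
```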

Teaching robots to copy human movement

Fraunhofer researchers have developed a robot input device that uses inertial sensors to detect movements in free space (Image: Fraunhofer)

Having two arms doesn't make you a juggler. The same principle applies in robotics, where even the most dexterous of bots must be programmed to move according to a particular task. Input systems based on laser tracking are used in industrial robotics to achieve this, but Fraunhofer researchers are looking to streamline the process significantly with a device that uses inertial sensors to track movements in free space. In other words, you can teach a robot new tricks just by showing it the required action.
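
The core idea of recovering a demonstrated motion from inertial data can be shown with a toy single-axis integration. Real devices fuse gyroscope, accelerometer, and often magnetometer readings and must correct for drift, so the sketch below is only an assumption-laden illustration, not the Fraunhofer device's processing.

```python
# Toy dead-reckoning along one axis: turn accelerometer samples into a
# position trace a robot could replay. Drift correction and sensor fusion,
# which real systems require, are deliberately omitted.

def integrate_trajectory(samples, dt=0.01):
    """samples: accelerations (m/s^2) along one axis, sampled every dt seconds."""
    velocity, position, path = 0.0, 0.0, []
    for a in samples:
        velocity += a * dt          # first integration: acceleration -> velocity
        position += velocity * dt   # second integration: velocity -> position
        path.append(position)
    return path

if __name__ == "__main__":
    # accelerate, cruise, decelerate: one simple demonstrated stroke
    accel = [1.0] * 50 + [0.0] * 100 + [-1.0] * 50
    trace = integrate_trajectory(accel)
    print(f"end position: {trace[-1]:.3f} m")
```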

Teaching robots to identify human activities

(PhysOrg.com) -- If we someday live in "smart houses" or have personal robots to help around the home and office, they will need to be aware of what humans are doing. You don't remind grandpa to take his arthritis pills if you have already seen him taking them, and robots need the same insight. So Cornell researchers are programming robots to identify human activities by observation. Their most recent work will be described at the 25th Conference on Artificial Intelligence in San Francisco, in an Aug. 7 workshop on "plan, activity and intent recognition." Ashutosh Saxena, assistant professor of computer science, and his research team report that they have trained a robot to recognize 12 different human activities, including brushing teeth, drinking water, relaxing on a couch and working on a computer. The work is part of Saxena's overall research on personal robotics.
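
As a sketch of the recognition step, the code below trains a generic classifier on per-segment feature vectors (such as the pose/motion features above) and predicts an activity label. The classifier choice, feature dimensionality, and synthetic data are assumptions, not the Cornell system's actual model; only four of the twelve reported activities are used as labels here.

```python
# Sketch of activity recognition over feature vectors. The classifier,
# feature size, and synthetic training data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

ACTIVITIES = ["brushing teeth", "drinking water",
              "relaxing on couch", "working on computer"]   # 4 of the 12 reported

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 24))                # stand-in feature vectors
y_train = rng.integers(len(ACTIVITIES), size=200)   # stand-in activity labels

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print(ACTIVITIES[clf.predict(rng.normal(size=(1, 24)))[0]])
```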