
Kinect-dataset


The LIRIS human activities dataset

You need to cite the following technical report in all publications that include results for which you used the LIRIS dataset: C. Wolf, J. Mille, E. Lombardi, O. Celiktutan et al.

The LIRIS human activities dataset contains gray/RGB/depth videos showing people performing various activities taken from daily life (discussing, telephone calls, giving an item, etc.). The dataset has been shot with two different cameras: subset D1 has been shot with an MS Kinect module mounted on a remotely controlled Wany Robotics Pekee II mobile robot, which is part of the LIRIS-VOIR platform; subset D2 has been shot with a Sony consumer camcorder. All video data, XML annotations, camera calibration data, and the annotation and evaluation tools are available on the download page.
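
Since the annotations are distributed as XML, here is a minimal MATLAB sketch of walking one annotation file. The element and attribute names ('action', 'class', 'bbox') and the file name are assumptions for illustration, not the documented schema; consult the dataset's annotations-in-XML page for the real format.

    % Minimal sketch: counting annotated frames per action in one LIRIS
    % annotation file. Tag/attribute names and file name are assumed,
    % not taken from the official schema.
    doc = xmlread('video0001.xml');              % hypothetical file name
    actions = doc.getElementsByTagName('action');
    for k = 0:actions.getLength()-1              % DOM node lists are 0-based
        a = actions.item(k);
        cls = char(a.getAttribute('class'));     % activity class id (assumed)
        nBoxes = a.getElementsByTagName('bbox').getLength();
        fprintf('action class %s: %d annotated frames\n', cls, nBoxes);
    end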

Organization and contact: send questions to christian.wolf (at) liris.cnrs.fr.


Kinect@Home

Datasets « Nathan Silberman

MSRC-12 Microsoft Research Cambridge 12 Kinect gesture data set

The Microsoft Research Cambridge-12 Kinect gesture data set consists of sequences of human movements, represented as body-part locations, and the associated gesture to be recognized by the system. The data set includes 594 sequences and 719,359 frames (approximately six hours and 40 minutes) collected from 30 people performing 12 gestures; in total, there are 6,244 gesture instances. The motion files contain tracks of 20 joints estimated using the Kinect pose estimation pipeline. The body poses are captured at a sample rate of 30 Hz with an accuracy of about two centimeters in joint positions. The data set, and how it was produced, are described in the publication below. The full data set was released on 5 June 2012 and can be downloaded at Microsoft Research Download.
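
As a rough illustration of working with such body-part tracks, the sketch below loads one sequence into a frames × joints × values array. The file name and the row layout (one frame per row: a timestamp followed by 20 joints × 4 values) are assumptions, not the official format; check the data set's documentation before relying on them.

    % Sketch: loading one sequence of 20-joint poses sampled at 30 Hz.
    % File name and row layout are assumptions, not the official format.
    raw = readmatrix('sample_sequence.csv');      % hypothetical file
    t = raw(:, 1);                                % per-frame timestamps (assumed)
    joints = reshape(raw(:, 2:81), [], 20, 4);    % frames x 20 joints x 4 values
    fprintf('%d frames = %.1f s at 30 Hz\n', size(raw, 1), size(raw, 1) / 30);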

@InProceedings{msrc12,
  title     = "Instructing people for training gestural interactive systems",
  author    = "Simon Fothergill and Helena M. Mentis and Pushmeet Kohli and Sebastian Nowozin",
  booktitle = "ACM Conference on Human Factors in Computing Systems (CHI)",
  year      = "2012"
}

MPII Multi-Kinect Dataset

RGBZ Videos. If you use these datasets, please cite the authors' paper. The following datasets were all captured using a prototype RGBZ camera (a SwissRanger SR4000 time-of-flight video camera paired with a PointGrey Flea2 colour video camera): video resolution 1024 × 768 (PNG); depth resolution 176 × 144 (raw bitmaps); frame rate 15 frames per second. Available sequences include fruitsA … (a naive depth-to-colour resize sketch is given after these entries).

Kinect Gesture Data Set
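
Because the RGBZ prototype's depth stream (176 × 144) is much coarser than its colour stream (1024 × 768), a quick way to eyeball the two together is to resize the depth map to the colour resolution. This is only a naive sketch: it assumes the raw depth bitmaps have already been converted to an image format MATLAB can read, the file names are placeholders, and proper registration would need the calibration between the two cameras.

    % Naive sketch: upsample a 176x144 depth map to the 1024x768 colour
    % frame for visual inspection only (no camera-to-camera calibration).
    color = imread('frame_color.png');            % placeholder file names
    depth = imread('frame_depth.png');            % assumes raw bitmap was converted
    depthUp = imresize(depth, [size(color, 1), size(color, 2)], 'nearest');
    imshowpair(color, depthUp, 'blend');          % Image Processing Toolbox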

Zicheng Liu

MSR Action Recognition Datasets and Codes: HON4D code and the MSRActionPairs dataset; MSRGesture3D (28 MB), captured by a Kinect device. There are 12 dynamic American Sign Language (ASL) gestures performed by 10 people; each person performs each gesture 2-3 times, for 336 files in total, each corresponding to a depth sequence. The file indices map to gestures as follows: {7,8,9} -> "ASL_Where"; {10,11,12} -> "ASL_Store"; {13,14,15} -> "ASL_Pig"; {16,17,18} -> "ASL_Past"; {19,20,21} -> "ASL_Hungary"; {22,23,24} -> "ASL_Green"; {25,26,27} -> "ASL_Finish"; {28,29,30} -> "ASL_Blue"; {31,32,33} -> "ASL_Bathroom"; {34,35,36} -> "ASL_Milk". Each file is a MAT file which can be loaded with 64-bit MATLAB:

    % load one depth sequence and visit every depth value
    x = load('sub_depth_01_01');
    width   = size(x.depth_part, 1);
    height  = size(x.depth_part, 2);
    nFrames = size(x.depth_part, 3);
    for i = 1:width
        for j = 1:height
            for k = 1:nFrames
                depthval = x.depth_part(i, j, k);
            end
        end
    end

Two papers have reported experimental results on this dataset; the format of the accompanying skeleton file is documented on the dataset page.
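
The index-to-label mapping above is mechanical (three files per gesture), so it can be encoded as a small helper. This is a convenience sketch, not part of the dataset's tooling; indices 1-6 are not listed above, so they are rejected here.

    % Sketch: recover the ASL gesture label for file indices 7..36,
    % following the grouping listed above. Save as gestureLabel.m.
    function label = gestureLabel(idx)
        names = ["ASL_Where", "ASL_Store", "ASL_Pig", "ASL_Past", ...
                 "ASL_Hungary", "ASL_Green", "ASL_Finish", "ASL_Blue", ...
                 "ASL_Bathroom", "ASL_Milk"];
        assert(idx >= 7 && idx <= 36, 'index outside the listed range');
        label = names(floor((idx - 7) / 3) + 1);   % one name per group of 3
    end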

Computer Vision Group - Datasets - RGB-D SLAM Dataset and Benchmark

Contact: Jürgen Sturm. We provide a large dataset containing RGB-D data and ground-truth data, with the goal of establishing a novel benchmark for the evaluation of visual odometry and visual SLAM systems. Our dataset contains the color and depth images of a Microsoft Kinect sensor along the ground-truth trajectory of the sensor. The data was recorded at full frame rate (30 Hz) and sensor resolution (640 × 480). The ground-truth trajectory was obtained from a high-accuracy motion-capture system with eight high-speed tracking cameras (100 Hz). Further, we provide the accelerometer data from the Kinect.

How can I use the RGB-D Benchmark to evaluate my SLAM system? Run your favorite visual odometry / visual SLAM algorithm (for example, RGB-D SLAM), then evaluate it by comparing the estimated trajectory with the ground-truth trajectory.
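
The benchmark ships its own evaluation tools; purely as an illustration of the comparison step, the sketch below aligns an estimated trajectory to the ground truth with a closed-form rigid fit (SVD-based, Horn/Kabsch style) and reports the root-mean-square error of the residuals. It assumes both trajectories are already time-associated 3 × N position matrices, and it is a simplified stand-in, not the official evaluation script.

    % Sketch: absolute trajectory error after a best-fit rigid alignment.
    % est, gt: 3xN matrices of time-associated positions (assumed given).
    function rmse = trajectoryATE(est, gt)
        muE = mean(est, 2);  muG = mean(gt, 2);       % centroids
        [U, ~, V] = svd((est - muE) * (gt - muG)');   % cross-covariance
        R = V * diag([1, 1, sign(det(V * U'))]) * U'; % proper rotation
        t = muG - R * muE;                            % translation
        aligned = R * est + t;                        % apply the fit
        rmse = sqrt(mean(sum((gt - aligned).^2, 1))); % RMSE over frames
    end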

RGB-D (Kinect) Object Dataset