
Kinect-dataset


The LIRIS human activities dataset.

You need to cite the following technical report in all publications including results for which you used the LIRIS dataset: C. Wolf, J. Mille, E. Lombardi, O. Celiktutan, M. Jiu, M. Baccouche, E. Dellandrea, et al.

The LIRIS human activities dataset contains (gray/rgb/depth) videos showing people performing various activities taken from daily life (discussing, telephone calls, giving an item, etc.). The dataset has been shot with two different cameras: subset D1 was shot with an MS Kinect module mounted on a remotely controlled Wany Robotics Pekee II mobile robot, which is part of the LIRIS-VOIR platform; subset D2 was shot with a Sony consumer camcorder.

All video data, XML data and software tools are available on our download page.

Organization, contact: send questions to christian.wolf (at) liris.cnrs.fr.


Kinect@Home.

Datasets « Nathan Silberman.

MSRC-12: Microsoft Research Cambridge-12 Kinect gesture data set. The Microsoft Research Cambridge-12 Kinect gesture data set consists of sequences of human movements, represented as body-part locations, and the associated gesture to be recognized by the system. The data set includes 594 sequences and 719,359 frames (approximately six hours and 40 minutes) collected from 30 people performing 12 gestures.

In total, there are 6,244 gesture instances. The motion files contain tracks of 20 joints estimated using the Kinect pose estimation pipeline. The body poses are captured at a sample rate of 30 Hz with an accuracy of about two centimeters in joint positions. The data set, and how it was produced, are described in detail in the publication linked below. We believe it will be a useful data set for evaluating gesture recognition systems, and as a database of Kinect skeletal track recordings and their variation across different people. A sketch for loading the pose files follows after the download lists below.

MPII Multi-Kinect Dataset | D2.

RGBZ Videos. If you use our datasets, please cite our paper.

Prototype camera. The following datasets were all captured using our prototype RGBZ camera (SwissRanger SR4000 time-of-flight video camera with a PointGrey Flea2 colour video camera):

video resolution: 1024 × 768 (png)
depth resolution: 176 × 144 (raw bitmaps)
frame rate: 15 frames per second

fruitsA (157 frames) Download (zip, 231 MB)
fruitsB (180 frames) Download (zip, 260 MB)
minA (260 frames) Download (zip, 307 MB)
minB (473 frames) Download (zip, 558 MB)
tableA (116 frames) Download (zip, 164 MB)
tableB (210 frames) Download (zip, 295 MB)

Microsoft Kinect. The following datasets were captured from a Microsoft Kinect (the Xbox variant):

video resolution: 1280 × 1024 (png)
depth resolution: 640 × 480 (raw bitmaps)
frame rate: 10 frames per second

leszekA (144 frames) Download (zip, 440 MB)
leszekB Download (zip, 355 MB)
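As a concrete illustration of the MSRC-12 pose tracks described above, here is a minimal MATLAB loading sketch. The file name is hypothetical, and the layout (a timestamp followed by 80 values per line, i.e. 20 joints × 4 values each, the first three of which are x, y, z positions) is an assumption; check the data set's readme before relying on it.

data = dlmread('P1_1_1_p06.csv');             % hypothetical file name; dlmread infers the delimiter
timestamps = data(:, 1);                      % assumed: first column is a timestamp
joints = reshape(data(:, 2:81)', 4, 20, []);  % assumed: 4 values per joint, 20 joints per frame
xyz = joints(1:3, :, :);                      % assumed: first three values are x, y, z positions
fprintf('%d frames, %d joints per frame\n', size(xyz, 3), size(xyz, 2));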

Kinect Gesture Data Set.

Zicheng Liu. MSR Action Recognition Datasets and Codes. HON4D Code and MSRActionPairs Dataset. MSRGesture3D (28 MB): The dataset was captured by a Kinect device. There are 12 dynamic American Sign Language (ASL) gestures, and 10 people. Each person performs each gesture 2-3 times. The file numbers map to gestures as follows:

{7,8,9} -> "ASL_Where"
{10,11,12} -> "ASL_Store"
{13,14,15} -> "ASL_Pig"
{16,17,18} -> "ASL_Past"
{19,20,21} -> "ASL_Hungary"
{22,23,24} -> "ASL_Green"
{25,26,27} -> "ASL_Finish"
{28,29,30} -> "ASL_Blue"
{31,32,33} -> "ASL_Bathroom"
{34,35,36} -> "ASL_Milk"

Each file is a MAT file which can be loaded with 64-bit MATLAB.
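Since the files come in consecutive triples, the gesture label can be recovered from a file number with a little arithmetic. A minimal MATLAB sketch, assuming a flat file numbering from 7 to 36 as in the listing above (the labels for files 1-6 are not listed here, and the numbering scheme itself is an assumption):

labels = {'ASL_Where', 'ASL_Store', 'ASL_Pig', 'ASL_Past', 'ASL_Hungary', ...
          'ASL_Green', 'ASL_Finish', 'ASL_Blue', 'ASL_Bathroom', 'ASL_Milk'};
fileNum = 23;                             % hypothetical example file number
assert(fileNum >= 7 && fileNum <= 36);    % labels for files 1-6 are not listed above
idx = ceil(fileNum / 3) - 2;              % triples: {7,8,9} -> 1, {10,11,12} -> 2, ...
fprintf('file %d -> %s\n', fileNum, labels{idx});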

x = load('sub_depth_01_01');        % loads the field depth_part from the MAT file
width   = size(x.depth_part, 1);
height  = size(x.depth_part, 2);
nFrames = size(x.depth_part, 3);
for i = 1:width
    for j = 1:height
        for k = 1:nFrames
            depthval = x.depth_part(i, j, k);   % one depth sample at pixel (i, j), frame k
        end
    end
end

Among the papers reporting experimental results on this dataset: Alexey Kurakin, Zhengyou Zhang, Zicheng Liu, "A Real-Time System for Dynamic Hand Gesture Recognition with a Depth Sensor", EUSIPCO, 2012.
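As a side note on the loading snippet above: the triple loop only visits each depth value once, and in practice the same array is easier to process with vectorized MATLAB operations, for example:

maxDepth  = max(x.depth_part(:));     % largest depth value in the whole sequence
meanFrame = mean(x.depth_part, 3);    % per-pixel mean depth over all frames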

Computer Vision Group - Datasets - RGB-D SLAM Dataset and Benchmark. Contact: Jürgen Sturm. We provide a large dataset containing RGB-D data and ground-truth data with the goal of establishing a novel benchmark for the evaluation of visual odometry and visual SLAM systems. Our dataset contains the color and depth images of a Microsoft Kinect sensor along the ground-truth trajectory of the sensor. The data was recorded at full frame rate (30 Hz) and sensor resolution (640×480). The ground-truth trajectory was obtained from a high-accuracy motion-capture system with eight high-speed tracking cameras (100 Hz).

Further, we provide the accelerometer data from the Kinect.

How can I use the RGB-D Benchmark to evaluate my SLAM system? Run your favorite visual odometry/visual SLAM algorithm (for example, RGB-D SLAM), then evaluate it by comparing the estimated trajectory with the ground-truth trajectory; a minimal comparison sketch follows at the end of this page.

Further remarks: if you wish to publish your trajectories on our website (for example, to allow others to directly compare with your results), please contact us.

RGB-D (Kinect) Object Dataset.
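For the RGB-D SLAM benchmark above, the usual comparison metric is the absolute trajectory error (ATE): rigidly align the estimated trajectory to the ground truth, then take the RMSE of the translational differences. The benchmark provides its own evaluation tools, so the following MATLAB sketch is only an illustration; it assumes hypothetical file names, plain-text trajectories in the format 'timestamp tx ty tz qx qy qz qw' with no comment lines, and rows already associated one-to-one by timestamp.

% Load both trajectories (assumed format: timestamp tx ty tz qx qy qz qw).
est = load('estimated.txt');        % hypothetical file name
gt  = load('groundtruth.txt');      % hypothetical file name
P = est(:, 2:4)';                   % 3 x N estimated positions
Q = gt(:, 2:4)';                    % 3 x N ground-truth positions

% Rigid alignment (Kabsch/Horn via SVD): find R, t minimizing ||R*P + t - Q||.
muP = mean(P, 2);  muQ = mean(Q, 2);
[U, ~, V] = svd((Q - muQ) * (P - muP)');
S = diag([1, 1, sign(det(U * V'))]);  % guard against a reflection solution
R = U * S * V';
t = muQ - R * muP;

% Absolute trajectory error: RMSE of translational residuals after alignment.
err = R * P + t - Q;                  % 3 x N residual vectors
fprintf('ATE RMSE: %.4f m\n', sqrt(mean(sum(err.^2, 1))));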