openni/Contests/ROS 3D/Minority Report Interface Description: A Minority Report-like interface that lets you drag photos around. Submitted By: Garratt Gallagher Keywords: Minority Report Video
openni/Contests/ROS 3D/RGBD-6D-SLAM Description: The Kinect is used to generate a colored 3D model of an object or a complete room. Submitted By: Felix Endres, Juergen Hess, Nikolas Engelhard, Juergen Sturm, Daniel Kuhner, Philipp Ruchti, Wolfram Burgard Keywords: RGBD-SLAM, 3D-SURF, Feature Matching, RANSAC, Graph SLAM, Model Generation, Real-time This page describes the software package that we submitted for the ROS 3D challenge.
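One building block named in the keywords above is aligning matched 3D points (e.g. features back-projected into 3D) with a rigid transform, the model-fitting step run inside a RANSAC loop. A minimal sketch of that least-squares estimate via the Kabsch/SVD method follows; the function name and shapes are illustrative, not taken from the submission itself.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t.

    src, dst: (N, 3) arrays of matched 3D points (N >= 3, non-degenerate).
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections: force det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

In a full RANSAC loop this estimate would be computed from small random subsets of the matches, scored by inlier count, and refit on the best inlier set.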
openni_kinect/kinect_accuracy This page discusses both the precision and the accuracy of the Kinect sensor. If you are not sure what the difference between precision and accuracy is, see the Wikipedia article on accuracy and precision. Precision of the Kinect sensor
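The distinction can be made concrete with a few lines of code (using made-up numbers, not actual Kinect data): precision is the spread of repeated readings of the same target, while accuracy error is the systematic offset of their mean from the true distance.

```python
import numpy as np

# Hypothetical example: 100 repeated depth readings (mm) of a flat
# target at a known ground-truth distance of 2000 mm. The readings are
# simulated with a 12 mm systematic bias and 5 mm of random noise.
rng = np.random.default_rng(0)
true_dist = 2000.0
readings = true_dist + 12.0 + rng.normal(0.0, 5.0, size=100)

precision = readings.std(ddof=1)              # spread of repeated readings
accuracy_error = readings.mean() - true_dist  # systematic offset (bias)
```

A sensor can be precise but inaccurate (tight spread, large bias) or accurate on average but imprecise, which is why the page treats the two separately.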
OctoMap - 3D occupancy mapping
3D Stixels Obtained from Stereo Data in an Urban Environment
3D mapping with Kinect style depth camera
Build a 3D Scanner From A $25 Laser Level - Systm I've made a lot of television in the last decade and gotten to work with a lot of fun people. Leo Laporte, Kevin Rose, Roger Chang, Robert Heron, Serafina, Jim, the whole Revision3 crew... the list could go on for a while. I have to say some of my most treasured memories will involve a pair of Davids: David Randolph and David Calkins and the -truly random- work we've done on Systm. Fire, molten steel, high voltage stupidity, long-range WiFi, soldering, software tweaks, power tools, game controllers, six ton cranes... over the course of 97 or so of the 108 episodes of Systm, I've gotten to have a lot of fun and build a lot of projects. There have also been a lot of insane 18-hour shoots, second degree burns, damaged hardware and massive amounts of stress, because, frankly, we've been doing a lot of work with very few people. Episode 109 marks a major change in Systm.
How to build your own 3-D camera If you are interested in developing your own 3-D camera based on the principle of structured light, then look no further than The Hackengineer web site, which plans to provide a blow-by-blow account of how to do so. Structured lighting techniques use a set of images that are sequentially projected onto a scene and then captured with an ordinary CMOS camera. The deformation of the structured light is run through an algorithm to determine the depth at each pixel. The resulting x, y, and z coordinates (3-D point cloud) can then be used to reproduce a 3-D model of a scene. In the first of the posts on The Hackengineer web site, the hardware of the system is described in some detail.
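The per-pixel depth recovery described above can be sketched in a few lines. This is one common variant, not necessarily the one The Hackengineer uses: binary stripe patterns are projected so that each camera pixel can decode which projector column illuminated it, and depth then follows from projector-camera triangulation, just as disparity does in a stereo pair. The focal length and baseline values below are assumptions for illustration.

```python
import numpy as np

# Assumed projector-camera calibration (illustrative values):
F = 600.0  # focal length in pixels
B = 0.10   # projector-camera baseline in meters

def decode_binary_patterns(images, thresh=0.5):
    """Decode a stack of binary stripe images into a per-pixel
    projector column index.

    images: (n, H, W) array in [0, 1]; pattern i carries bit (n-1-i)
    of the column index (most significant bit projected first).
    """
    bits = (np.asarray(images) > thresh).astype(np.uint32)
    code = np.zeros(bits.shape[1:], dtype=np.uint32)
    for plane in bits:
        code = (code << 1) | plane
    return code

def depth_from_columns(proj_cols, cam_cols):
    """Triangulate depth per pixel from the column disparity between
    the camera pixel and its decoded projector column: z = F * B / d."""
    disparity = cam_cols.astype(np.float64) - proj_cols
    with np.errstate(divide="ignore", invalid="ignore"):
        z = np.where(disparity != 0, F * B / disparity, 0.0)
    return z
```

Real systems typically use Gray codes rather than plain binary (adjacent stripes then differ by one bit, which is more robust to decoding errors at stripe boundaries), but the triangulation step is the same.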