The Microsoft Kinect sensor is a peripheral device (designed for the Xbox and Windows PCs) that functions much like a webcam. However, in addition to providing an RGB image, it also provides a depth map: for every pixel seen by the sensor, the Kinect measures its distance from the sensor. This makes a variety of computer vision problems, like background removal and blob detection, easy and fun! The Kinect sensor itself only measures color and depth. However, once that information is on your computer, much more can be done, such as "skeleton" tracking (i.e. detecting a model of a person and tracking his or her movements). What hardware do I need? First you need a "stand-alone" Kinect (Standalone Kinect Sensor v1). Some additional notes about different models: Kinect 1414: this is the original Kinect, and it works with the library documented on this page in the Processing 3.0 beta series.
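The background-removal idea mentioned above reduces to a depth threshold: keep only pixels closer than some cutoff. A minimal sketch in Python, using plain lists to stand in for a Kinect-style depth frame (no hardware or driver required; the function name and toy values are illustrative, not part of any Kinect API):

```python
def remove_background(depth_mm, threshold_mm):
    """Return a mask: True where a pixel is closer than threshold_mm.

    depth_mm is a 2-D list of per-pixel distances in millimeters,
    as a Kinect-style depth map would provide; 0 means "no reading".
    """
    return [[0 < d < threshold_mm for d in row] for row in depth_mm]

# A 2x3 toy depth frame: a near object (800 mm) against a far wall (3000 mm),
# plus one pixel with no reading (0).
frame = [[3000, 800, 3000],
         [3000, 800, 0]]
mask = remove_background(frame, 1500)
# mask -> [[False, True, False], [False, True, False]]
```

The same mask, applied to the RGB image, yields the foreground with the background stripped away.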
Related: Kinect Hacks
Kinect - Medien Wiki: The Microsoft Xbox 360 Kinect is a motion controller built on technology developed by PrimeSense. It projects a pattern with infrared light and calculates a depth image using a camera. It also has a color camera and four microphones. Software applications: CocoaKinect App; Freenect by Robert Pointon; Synapse, which generates skeleton data and provides it as OSC (Open Sound Control); ofxFaceTracker, which provides face metrics (orientation, eye and mouth open/closed) over OSC; codelaboratories.com/kb/nui; TUIO Kinect, which lets you define a depth range within which multiple blobs can be detected. SDKs, frameworks and libraries for the depth image: openkinect.org (drivers, installation); pix_freenect for Pd (Pure Data), a dataflow programming environment (the binaries work without any compiling); the fux_kinect object for Pd; ofxKinect, the openFrameworks Kinect integration; vvvv Kinect integration.
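The TUIO Kinect idea above (a configurable depth range in which blobs are detected) can be sketched in a few lines: mask the pixels whose depth falls inside the band, then count connected regions. A toy Python version with a hand-rolled flood fill (the function and values are illustrative, not TUIO Kinect's actual implementation):

```python
def count_blobs(depth, near, far):
    """Count 4-connected blobs of pixels whose depth lies in [near, far].

    depth is a 2-D list of per-pixel readings (e.g. millimeters); this
    mirrors, in toy form, the depth-band blob detection TUIO Kinect does.
    """
    rows, cols = len(depth), len(depth[0])
    in_band = [[near <= d <= far for d in row] for row in depth]
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if in_band[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]            # flood-fill this blob
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and in_band[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return blobs

# Two hands at ~900 mm in front of a wall at 3000 mm:
frame = [[3000,  900, 3000,  900],
         [3000,  900, 3000, 3000]]
print(count_blobs(frame, 500, 1500))   # -> 2
```

Each blob's centroid would then be reported as a TUIO cursor in the real application.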
Kinect Hacking using Processing. About Processing, from Processing.org: Processing is an open source programming language and environment for people who want to create images, animations, and interactions. Initially developed to serve as a software sketchbook and to teach fundamentals of computer programming within a visual context, Processing has also evolved into a tool for generating finished professional work. Today, tens of thousands of students, artists, designers, researchers, and hobbyists use Processing for learning, prototyping, and production. About the Kinect: the Kinect is a stereo camera (actually a triple camera, including the IR sensor) with fairly sophisticated firmware algorithms that can output a wide variety of depth and motion tracking data. About this tutorial: "Kinect for Processing" involves configuring a set of libraries that can be compiled with the Processing programming environment to parse and manipulate Kinect data.
Setting up Kinect on Mac | black label creative. Update 27/04/2013: the latest test of the OpenNI 2.1.0 beta and NITE2 went well, but it is not working with the SimpleOpenNI library yet. I'll keep watching for updates and let you know when it's all running. Thanks to open source projects like OpenNI and OpenKinect, you can now use Microsoft's Kinect on more than just Windows. This guide is for those running OS X 10.6.8 or newer, but might also be applicable to anyone still running older versions, or to Linux. The main parts involved here are OpenNI, SensorKinect, and NITE. Which Kinect? The hardware and interface differences mean that, for this guide and my experience at least, you will need an Xbox Kinect. Before we begin, there are some things you'll need to install before we can start with the Kinect utilities. First up is Xcode. Secondly, you'll need to get MacPorts. It's worth restarting at this point (if MacPorts doesn't get you to do it anyway) just to make sure any dependencies that are loaded at startup are there.
kinect_openni: The OpenNI framework (the APIs) and the drivers (Sensor) are licensed under the GNU GPL. Create the working environment (by default, in /home/$USER/): mkdir ~/kinect && cd ~/kinect. Fetch the files with git: git clone. Compilation and installation: the compile settings default to SSE3. Run cat /proc/cpuinfo; if you see sse3, msse3, or ssse3 among the flags, you do not need to change the compile settings. Then: cd OpenNI/Platform/Linux/Build followed by make && sudo make install. The content of this wiki is licensed under CC BY-SA v3.0.
kinect_calibration/technical. Description: technical aspects of the Kinect device and its calibration. Tutorial level: ADVANCED. Authors: Kurt Konolige, Patrick Mihelich. Imager and projector placement: the Kinect device has two cameras and one laser-based IR projector. (The teardown image is provided by iFixit.) All the calibrations done below are based on IR and RGB images of chessboard patterns, using OpenCV's calibration routines. Depth calculation: the IR camera and the IR projector form a stereo pair with a baseline of approximately 7.5 cm. Depth is calculated by triangulation against a known pattern from the projector. Disparity-to-depth relationship: for a normal stereo system, the cameras are calibrated so that the rectified images are parallel and have corresponding horizontal lines. Then z = b*f / d, where z is the depth (in meters), b is the horizontal baseline between the cameras (in meters), f is the (common) focal length of the cameras (in pixels), and d is the disparity (in pixels). The Kinect returns a raw disparity value kd rather than d directly; the two are related by d = 1/8 * (doff - kd), where doff is a per-device disparity offset. Measurements against targets at known depths can then be used to find b and doff.
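The two relations above can be chained into a single depth lookup. A Python sketch, where the baseline b ≈ 0.075 m comes from the text, and the focal length f and offset doff are illustrative placeholders for values a real per-device calibration would determine:

```python
def kinect_depth_m(kd, b=0.075, f=580.0, doff=1090.0):
    """Depth z (meters) from a raw Kinect disparity value kd.

    Normalized disparity: d = 1/8 * (doff - kd)   [pixels]
    Triangulation:        z = b * f / d           [meters]

    b (baseline, meters) is the ~7.5 cm from the text; f (focal
    length, pixels) and doff (disparity offset) are example values --
    a real calibration finds them for each device.
    """
    d = (doff - kd) / 8.0
    if d <= 0:
        raise ValueError("raw disparity out of range; no valid depth")
    return b * f / d

# Example: raw disparity 600 -> d = (1090 - 600)/8 = 61.25 px,
# so z = 0.075 * 580 / 61.25, roughly 0.71 m.
print(kinect_depth_m(600))
```

Note the inverse relationship: depth resolution degrades with distance, since a fixed disparity step covers a larger and larger depth interval as d shrinks.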
Programming for Kinect 3 – A simple Kinect App in Processing | 3dsense blog. After (hopefully) successful installation of the OpenNI drivers, we will finally get our hands dirty on some code to create our first Kinect-enabled application. To make things easier we'll be using Processing.org and simple-openni. Processing.org is a Java-based programming environment that simplifies working with graphics (drawing, image processing) both in 2D and 3D. Simple-openni is a Processing wrapper for OpenNI/NITE that allows full access to the OpenNI (and Kinect) API. Simple RGB & depth viewer: in this post we will create a simple viewer application that gets both the RGB and depth images from the sensor and displays them in real time – similar to what the screenshot below shows.
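One step every depth viewer performs, whatever the framework, is mapping raw depth readings (11-bit on the original Kinect, so 0–2047) into a displayable 0–255 grayscale range. A driver-independent sketch of that conversion in Python (the function name and near-is-bright convention are illustrative choices, not part of simple-openni):

```python
def depth_to_gray(raw, max_raw=2047):
    """Map raw 11-bit Kinect depth readings (0..max_raw) to 0..255 gray.

    Nearer objects come out brighter; a raw value of 0 ("no reading")
    maps to black. Real viewers apply this to every pixel, every frame.
    """
    gray = []
    for v in raw:
        if v <= 0 or v > max_raw:
            gray.append(0)                     # invalid reading -> black
        else:
            gray.append(255 - (v * 255) // max_raw)
    return gray

print(depth_to_gray([0, 1, 2047]))   # -> [0, 255, 0]
```

In a Processing sketch the same arithmetic would run inside `draw()` before writing the pixel buffer to the screen.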
Installing Kinect on Linux: installing the various programs and drivers needed to use the Kinect under Linux. There are in fact several more or less interdependent software layers: libfreenect, an open source driver that retrieves the Kinect's video streams; OpenNI, software made by the company that invented the Kinect, which retrieves raw images and data; and PrimeSensor, from the same company, an additional layer that extracts higher-level information: presence of a person, skeleton, hands. Simple, really... This page covers Debian jessie unstable 64-bit and Ubuntu 10.04 Lucid; the method should also be valid for other versions and distros. NB: under Linux, whatever program you use, you must connect your Kinect to a USB 2 port, not USB 3. Installing libfreenect (Debian jessie unstable 64-bit): install the git version by hand, as the version in the repository does not seem to work with NiTE2. Test, and voilà!