
Kinect Projector Dance

OpenKinect Kinect - Medien Wiki
The Microsoft® XBOX 360 Kinect is a motion controller built on depth-sensing technology from PrimeSense. It projects a pattern of infrared light and calculates a depth image from it with an infrared camera; it also has a color camera and four microphones. The tilt (Y axis) of the sensor can be adjusted remotely via a built-in motor, and the color of the LED can be set in software as well.
Applications: CocoaKinect App; Freenect by Robert Pointon; Synapse, which generates skeleton data and provides it as OSC; ofxFaceTracker, which provides face metrics (orientation, eye and mouth open/closed) over OSC; codelaboratories.com/kb/nui; and TUIO Kinect, which lets you define a depth range in which multiple blobs can be detected.
SDKs, frameworks and libraries for the depth image: openkinect.org (drivers, installation), pix_freenect for Pd (the included binaries work without any compiling), the fux_kinect object for Pd, ofxKinect (openFrameworks Kinect integration), and the vvvv Kinect integration. For skeleton data: running depth image and skeleton data on a workstation (Rafael).
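Since several of the tools above (Synapse, ofxFaceTracker, TUIO Kinect) hand their data off as OSC, a small receiver is often all you need on the creative-coding side. Below is a minimal, hypothetical sketch in Processing using the oscP5 library; the port (12345) and the address pattern (/righthand_pos_body) are assumptions based on Synapse's documentation and may need adjusting for your setup.

// Minimal OSC receiver sketch (Processing + oscP5) for joint data broadcast
// by Synapse. Port and address pattern are assumptions; adjust as needed.
import oscP5.*;

OscP5 osc;

void setup() {
  size(400, 400);
  // Listen on the port Synapse is assumed to send joint positions to.
  osc = new OscP5(this, 12345);
}

void draw() {
  background(0);
}

// Called by oscP5 whenever an OSC message arrives.
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/righthand_pos_body")) {
    float x = msg.get(0).floatValue();
    float y = msg.get(1).floatValue();
    float z = msg.get(2).floatValue();
    println("right hand: " + x + ", " + y + ", " + z);
  }
}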

Daniel Shiffman
The Microsoft Kinect sensor is a peripheral device (designed for Xbox and Windows PCs) that functions much like a webcam. However, in addition to providing an RGB image, it also provides a depth map: for every pixel seen by the sensor, the Kinect measures its distance from the sensor. This makes a variety of computer vision problems like background removal and blob detection easy and fun! The Kinect sensor itself only measures color and depth. What hardware do I need? First you need a "stand-alone" Kinect (Standalone Kinect Sensor v1). Some additional notes about different models: Kinect 1414: this is the original Kinect and works with the library documented on this page in the Processing 3.0 beta series. SimpleOpenNI: you could also consider using the SimpleOpenNI library and reading Greg Borenstein's Making Things See book. I'm ready to get started right now. What is Processing? What if I don't want to use Processing? What code do I write? Start with import org.openkinect.processing.*; and declare a Kinect kinect; object.
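Here is a minimal sketch of the kind of code the page walks you through, modeled on the examples that ship with the Open Kinect for Processing library; the method names (initDepth(), getDepthImage(), getRawDepth()) are taken from those examples and may differ slightly between library versions.

// Minimal depth-image sketch for the Open Kinect for Processing library
// (Kinect v1 / model 1414, Processing 3). Based on the library's bundled
// examples; treat method names as assumptions if your version differs.
import org.openkinect.freenect.*;
import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();   // start streaming depth data
}

void draw() {
  background(0);
  // Draw the grayscale visualization of the depth map.
  image(kinect.getDepthImage(), 0, 0);
  // kinect.getRawDepth() would return the raw 11-bit values (0-2047)
  // if you need actual distance measurements per pixel.
}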

openFrameworks Skeleton Tracking using Kinect: Overview - Voratima
I've been wanting to experiment with skeleton tracking with the Microsoft Kinect forever. Finally, that time has arrived. And, boy! There are many ways you can do this. I did a lot of research at the beginning trying to understand how all the pieces fit together, which frameworks/SDKs to use, etc. Somewhat confused at first, I thought I should start with a list of requirements: open source; runs on Mac (and maybe Windows too); widely adopted (which also implies a large community and easy access to support); gives access to low-level APIs; lends itself to interactive wall projection. There are five elements you will need to do the development: the development environment, the programming language, a Kinect driver or library, a driver/library wrapper, and the Kinect itself. Development environments: first let's start with the development environment, i.e. a place where you actually do the coding. Xcode is a development environment for creating apps for Mac or iOS devices; Visual Studio will get the job done for Windows users.
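To make the "driver/library wrapper" element concrete, here is a rough sketch of skeleton tracking in Processing with the SimpleOpenNI wrapper mentioned above. It follows the pattern of SimpleOpenNI's own examples, but method and callback names (enableUser(), onNewUser(), getJointPositionSkeleton()) changed between library versions, so treat this as a sketch rather than copy-paste code; an openFrameworks setup with a wrapper such as ofxOpenNI follows the same overall structure in C++.

// Rough skeleton-tracking sketch with SimpleOpenNI (API names may vary by version).
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableUser();  // older versions: enableUser(SimpleOpenNI.SKEL_PROFILE_ALL)
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);
  int[] users = context.getUsers();
  for (int i = 0; i < users.length; i++) {
    if (context.isTrackingSkeleton(users[i])) {
      // Fetch the head joint in real-world coordinates...
      PVector head = new PVector();
      context.getJointPositionSkeleton(users[i], SimpleOpenNI.SKEL_HEAD, head);
      // ...and convert it to screen (projective) coordinates for drawing.
      PVector screen = new PVector();
      context.convertRealWorldToProjective(head, screen);
      fill(255, 0, 0);
      ellipse(screen.x, screen.y, 20, 20);
    }
  }
}

// Callback fired when a new user is detected; start tracking their skeleton.
void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);
}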
