
RGBDToolkit - DSLR + DEPTH Filmmaking

Memory Room: Yuxi, Ryan & Cris « Creation and Computation - William Gibson We were interested in creating a project that allows a user to engage with a person in another time and place. By leaving a trace behind for others, or engaging with the traces left behind for us, we become aware of the presence of the other even in their absence. We wanted to use the Kinect to track an individual in a given space and record his position after he had settled into the room. Unfortunately, we were unable to achieve this ideal vision, but we were able to develop the base interaction that would make it possible. We initially wrote the interaction in pseudo-code and, after identifying the major segments of the code, divided the research between the three of us:
1. Track the body and convert it into a silhouette
2. Image capture, image loading and fading
3. Timing of the interaction
Item 1 required the use of the Kinect, while items 2-3 could be researched using the mouse as a stand-in for the Kinect data; a sketch along those lines follows below. One goal was to bring the projection off the wall and into space.
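The post stops at the plan, but item 2 is easy to prototype without a Kinect. Here is a minimal, hypothetical Processing sketch (names and timings are ours, not the authors'): clicking captures the canvas as a "trace", which then fades out over a few seconds while the mouse stands in for the tracked body.

// Hypothetical capture-and-fade prototype; the mouse stands in for Kinect tracking.
PImage captured;        // the last captured "trace"
int capturedAt;         // millis() timestamp of the capture
int fadeTime = 5000;    // the trace fades out over 5 seconds

void setup() {
  size(640, 480);
}

void draw() {
  background(0);
  // draw the fading trace left behind by an earlier visitor
  if (captured != null) {
    float alpha = map(millis() - capturedAt, 0, fadeTime, 255, 0);
    if (alpha > 0) {
      tint(255, alpha);
      image(captured, 0, 0);
      noTint();
    }
  }
  // live presence: a circle following the mouse, standing in for a tracked body
  ellipse(mouseX, mouseY, 40, 40);
}

void mousePressed() {
  // capture the current canvas as the trace
  captured = get();
  capturedAt = millis();
}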

Google launches Glass Dev Kit preview, shows off augmented reality apps Today Google launched the Glass Development Kit (GDK) "Sneak Preview," which will finally allow developers to make real, native apps for Google Glass. While there have previously been extremely limited Glass apps that used the Mirror API, developers now have full access to the hardware. Google Glass runs a heavily skinned version of Android 4.0.4, so Glass development is very similar to Android development. Update: Documentation for the GDK has come out, and any developer can add a voice trigger to the "Ok Glass" menu. Google showed off a few of the first native Glass apps, and one of the coolest among them was Wordlens, a real-time, augmented-reality translation app. Another interesting app that Google showed off was Spellista, a word scramble game that requires use of Glass' head tracking to select each letter. The full GDK walkthrough is embedded above, but don't pay too much attention to the terrible frame rate displayed by Glass.

www.cs.unc.edu/~fuchs/kinect_VR_2012.pdf
Reference for Simple-OpenNI and the Kinect - Learning
Context The context is the top-level object that encapsulates all the camera and image functionality. The context is typically declared globally and instantiated within setup(), in one of three run modes:
SimpleOpenNI context = new SimpleOpenNI(this);
SimpleOpenNI context = new SimpleOpenNI(this, SimpleOpenNI.RUN_MODE_SINGLE_THREADED);
SimpleOpenNI context = new SimpleOpenNI(this, SimpleOpenNI.RUN_MODE_MULTI_THREADED);
For each frame in Processing, the context needs to be updated with the most recent data from the Kinect:
context.update();
The image drawn by the context defaults to showing the world from its point of view, so when facing the Kinect and looking at the resulting image, your movements are not mirrored. Mirroring is toggled with:
context.setMirror(true);
context.setMirror(false);
Images The different cameras provide different data and functionality. The RGB camera is the simplest camera and does no more than a standard webcam:
context.enableRGB();
To create a window the same size as the RGB camera image, use rgbWidth() and rgbHeight().
Depth
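Put together, a minimal sketch using just the calls above might look like this (a sketch, not from the page itself; rgbImage() is assumed here as the RGB counterpart of the depth image accessor):

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  // instantiate the context and enable the RGB camera
  context = new SimpleOpenNI(this);
  context.enableRGB();
  // mirror so that movements read naturally when facing the Kinect
  context.setMirror(true);
  // window the same size as the RGB camera image
  size(context.rgbWidth(), context.rgbHeight());
}

void draw() {
  // fetch the most recent frame from the Kinect
  context.update();
  image(context.rgbImage(), 0, 0);
}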

1.4. Matplotlib: plotting — Scipy lecture notes 1.4.2. Simple plot Tip: In this section, we want to draw the cosine and sine functions on the same plot. Starting from the default settings, we'll enrich the figure step by step to make it nicer. The first step is to get the data for the sine and cosine functions:
import numpy as np
X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
C, S = np.cos(X), np.sin(X)
X is now a numpy array with 256 values ranging from -π to +π (included). To run the example, you can type the lines into an IPython interactive session. You can also download each of the examples and run it using regular Python, but you will lose interactive data manipulation. You can get the source for each step by clicking on the corresponding figure. Matplotlib comes with a set of default settings that allow customizing all kinds of properties. For instance, the axis limits can be widened a little so the data does not touch the edges of the plot:
pl.xlim(X.min() * 1.1, X.max() * 1.1)
pl.ylim(C.min() * 1.1, C.max() * 1.1)

Shaking some sense into using multiple Kinects with Shake 'n' Sense | Coding4Fun Kinect Projects This is one of those weird things that you just wouldn't expect until you see it... Shake 'n' Sense Makes Kinects Work Together! Microsoft Research has discovered that shaking Kinects, far from making them fall apart, makes them work together. See it in action in the video. This is one of those ideas that, once you have seen it, you can't believe you didn't think of it first. The only barrier to thinking of it is that you might not be thinking big enough. Project Information URL: Shake 'n' Sense is a novel yet simple mechanical technique for mitigating the interference when two or more Kinect cameras point at the same part of a physical scene. Reducing Structured Light Interference when Multiple Depth Cameras Overlap. Abstract: We present a method for reducing interference between multiple structured light-based depth sensors operating in the same spectrum with rigidly attached projectors and cameras.

Identifying People in a Scene with the Kinect - Learning We'll start with the sketch we wrote to draw the depth image from the Kinect in Drawing Depth with the Kinect:
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  // instantiate a new context
  context = new SimpleOpenNI(this);
  // enable depth image generation
  context.enableDepth();
  // create a window the size of the scene
  size(context.depthWidth(), context.depthHeight());
}

void draw() {
  // update the camera
  context.update();
  // draw depth image
  image(context.depthImage(), 0, 0);
}
The OpenNI library does all the work for us to identify people or other moving objects in a scene. It does this with the scene analyser, which is turned on by adding context.enableScene() to setup(). The depth image also needs to be enabled in order for the scene to be analysed. We then need to draw the scene image, rather than the depth image, in the draw() function:
void draw() {
  // update the camera
  context.update();
  // draw scene image
  image(context.sceneImage(), 0, 0);
}
Try running the sketch; the fully assembled version follows below.
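Assembled from the two steps above, the complete modified sketch looks like this (enableScene() and sceneImage() are the calls this page uses; they come from older SimpleOpenNI releases, and later versions folded scene analysis into user tracking):

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  context = new SimpleOpenNI(this);
  // the depth image must be enabled for the scene to be analysed
  context.enableDepth();
  // turn on the scene analyser
  context.enableScene();
  size(context.depthWidth(), context.depthHeight());
}

void draw() {
  context.update();
  // draw the scene image instead of the depth image
  image(context.sceneImage(), 0, 0);
}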

This Is Your Computer Watching The Matrix "For example, in the explosion part of the Inception scene, the system shifts its attention to the rapidly moving bits and pieces of the buildings as they fly across the screen," as seen above. Would the artist contrast this visually to human eye-tracking? "I haven't paired it with eyetracking software, but I definitely recommend the great work by Tim Smith and David Bordwell on eyetracking movie watching," Grosser says. "They've done studies where they correlate a number of eyetracked subjects, and use that data to create clips so you can see where everyone focuses their attention." But are we watching when we're watching the watchers? "I think Computers Watching Movies allows the viewer to do this a bit for themselves; as the viewer watches a clip they engage their memory of the scene and try to compare what they recall looking at with what the computer is looking at." We're literally superimposing our memories; it's a meta sort of synesthesia.

From Kinect to MakerBot Make: Projects The Open Kinect movement has given us some amazing tools to capture the physical world. With some open source software, a few simple steps, and an occasional not-so-simple step here and there, you can print what your Kinect can see. You've got a Kinect, and you've got a MakerBot. This guide explains how to scan something using the Kinect, and then print it on the MakerBot. It's very easy to scan something with the Kinect, but the models that you get from it are quite complex. At the moment, these instructions are tested on the Mac.
OpenNI > Downloads > OpenNI Modules I wanted to try the Kinect from the day it was released, but I didn't have the chance till yesterday. It really is an incredible little thing you can use for many different projects. Below you can find easy steps for installing the necessary drivers on your PC in order to have a working connection with the Kinect. Before we begin, please note that for some drivers you will have to choose between Development and Redist editions. Procedure: disconnect the Kinect from the PC if it is connected and visit the OpenNI downloads page. First we are going to download the OpenNI binaries. You have successfully installed drivers for the Kinect on your PC. Multiple Kinect Drivers If you want to use a different Kinect driver, do the following:
1. Open Device Manager.
2. Right-click Kinect Camera under PrimeSense.
3. Select Update driver software.
4. Select Browse my computer for driver software, then Let me pick from a list of device drivers on my computer.
5. Choose the driver you want (CLNUI, for example).
Use Kinect in Unity

Why you recognize your friends in a crowd, even when you can't see their faces You're in the street and, in the distance, really far away, so far that you can't make out the face, you recognize one of your friends. But if you haven't seen their face, how did you know who it was? From their body shape, explain psychologists at the University of Texas at Dallas, who published their study in Psychological Science. To reach this result, they showed participants photos in which the faces were hard to make out. The study found that participants did better when they could "see the bodies of the people in the photographs," which was not the case "when they could only see their faces." Since they couldn't get any useful information by looking only at the faces, participants spent more time on the bodies of the people in the images.

Support | Skanect by Manctl Contact: You can email us at skanect@occipital.com or get help from the Skanect community in the Skanect Google group. Tutorials:
What sensor drivers should I install?
How can I use Structure Sensor and Skanect?
Do you support the Kinect for Xbox One (Kinect V2)? Unfortunately, we have chosen not to support the Kinect for Xbox One (Kinect V2): during our tests, the resulting 3D scans did not meet our standards for quality. You can find a complete list of supported devices on our download page.
Bad License Key Error: why doesn't my license key work? Please ensure there are no blank spaces before or after the email and key when you enter them. If you still receive an error message, let us know by emailing skanect@occipital.com.
What sensor should I buy? Each sensor has its pros and cons.
My Kinect for Xbox sensor cannot be detected!
What graphics cards are supported?
My GPU should be supported, but GPU fusion is disabled. Why?

Matt's Webcorner - Kinect Sensor Programming The Kinect is an attachment for the Xbox 360 that combines four microphones, a standard RGB camera, a depth camera, and a motorized tilt. Although none of these individually is new, depth sensors have previously cost over $5,000, and the comparatively cheap $150 price tag for the Kinect makes it highly accessible to hobbyists and academics. This has spurred a lot of work on creating functional drivers for many operating systems so the Kinect can be used outside of the Xbox 360.
Drivers There are a lot of different people working on different drivers for different operating systems. The drivers don't do any postprocessing on the data; they just give you access to the raw data stream from the Kinect and let you control the LED and motor.
Audio Most people talking about the Kinect are focusing on the depth sensor.
Interpreting Sensor Values The raw sensor values returned by the Kinect's depth sensor are not directly proportional to the depth; a common conversion is sketched below.
Painting
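The page doesn't reproduce the conversion here, but a widely circulated approximation for the original Kinect (Stéphane Magnenat's fit, an outside reference rather than something from this page) maps the 11-bit raw disparity value to metres:

// Approximate raw-disparity-to-metres conversion for the original Kinect.
// Coefficients are Stephane Magnenat's published fit, not values from this page.
float rawDepthToMeters(int raw) {
  if (raw < 2047) {
    return 1.0f / (raw * -0.0030711016f + 3.3309495161f);
  }
  return 0.0f;  // 2047 marks an invalid reading (no depth)
}

For example, a raw value of 840 comes out at roughly 1.33 m, and larger raw values correspond to greater distances.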

Hands-on with Google's latest acquisition: Flutter, a webcam gesture app A company called Flutter has just announced that it has been purchased by Google. Flutter is a simple Windows and Mac OS X app that lets you control popular media players through a webcam. Just put your hand up to stop the media playback, or point your thumb right for "next" and left for "previous." It seems that few people had heard of Flutter (yours truly included) until Google took the company under its wing, but luckily the app is still available for download, so we snagged it and gave it a quick test. The app works fantastically well, and hand gesture detection is near-instant. In the words of Flutter's announcement: "When we started three years ago, our dream to build a ubiquitous and power-efficient gesture recognition technology was considered by many as just 'a dream,' not a real possibility." The next question is, what will Google do with a webcam gesture app?
