Kinect Physics Tutorial for Processing


Getting Started with Kinect and Processing

So, you want to use the Kinect in Processing. Great. This page will serve to document the current state of my Processing Kinect library, with some tips and info.

The current state of affairs: since the Kinect launched in November 2010, several models have been released.

- Kinect 1414: This is the original Kinect and works with the library documented on this page in Processing 2.1.
- Kinect 1473: This looks identical to the 1414, but is an updated model.

Before you proceed, you could also consider using the SimpleOpenNI library and reading Greg Borenstein's Making Things See book.

I'm ready to get started right now. What hardware do I need? First, you need a "stand-alone" Kinect (model 1414 only for now!):

- Standalone Kinect Sensor
- Kinect Sensor Power Supply (if you have a Kinect that came with an Xbox, it will not include the USB adapter)

Um, what is Processing? What if I don't want to use Processing? Try ofxKinect or the Kinect CinderBlock. More resources from the OpenKinect Project. So now what?

Recipe 50: Branching - Cycling '74, Makers of Max Visual Programming Software for Artists, Musicians & Performers

General Principles:
- Animating OpenGL objects
- Using audio to drive motion
- Working with jit.path for smooth motion

Commentary: After the RoboMoves recipe, I wanted to try a slightly more complex structure and work with a simple audio-responsive motion control. The repeated, branching structure of a tree provides the basis for this animation patch, and we use the amplitude of an incoming audio signal to scrub through the animation.

Ingredients and Technique: First, we'll look at the structure of the animation nodes. To animate the tree, we use jit.path, just as in RoboMoves, but instead of a looping counter, a peakamp~ object drives the eval message.
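As a side note on the Kinect material above: the library hands you raw 11-bit depth readings (0 to 2047), not distances. A minimal sketch of the conversion, assuming the empirical tangent fit published by the OpenKinect community (an approximation, not official calibration; the Processing library does the equivalent with a precomputed lookup table):

```python
import math

def raw_depth_to_meters(raw):
    """Convert an 11-bit Kinect depth reading (0-2047) to meters,
    using the OpenKinect community's empirical tangent fit."""
    if raw < 2047:  # 2047 signals "no reading" on the Kinect 1414
        return 0.1236 * math.tan(raw / 2842.5 + 1.1863)
    return 0.0  # out of range / no data

# Precompute a lookup table once, so per-pixel conversion
# in the draw loop is just an array index.
depth_lookup = [raw_depth_to_meters(i) for i in range(2048)]
```

Note that the mapping is nonlinear: raw values near the top of the range correspond to much larger distance steps than values near the bottom, which is why depth precision degrades with distance.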

Kinect MoCap Animation in After Effects - Part 1: Getting Started | Victoria Nece

This tutorial is now obsolete. Check out the new KinectToPin website for the latest version of the software and how to use it; it's dramatically easier now.

Hello, I'm Victoria Nece. I'm a documentary animator, and today I'm going to show you how to use your Kinect to animate a digital puppet like this one in After Effects.

If you have a Kinect that came with your Xbox, the first thing you're going to need to do is buy an adapter so you can plug it into your computer's USB port. You don't need to get the official Microsoft one: I got a knockoff version from Amazon for six bucks and it's working just fine.

Next you're going to need to install a ton of different software. Here's a quick overview of how it's all going to work. Then, on the After Effects side of things, you'll set up a skeletal rig for a layered 2D puppet and apply the tracking data to bring it to life. It's not an easy process, but the results are worth it.

Required Software:
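To give a feel for what "applying the tracking data" to a layered 2D puppet involves: each limb layer is rotated by the angle of the segment between its two tracked joints. A hedged sketch of that computation (the function name and joint coordinates are illustrative, not part of KinectToPin):

```python
import math

def limb_rotation_deg(joint_a, joint_b):
    """Angle in degrees of the segment from joint_a to joint_b,
    e.g. the value you'd feed to a puppet layer's rotation.
    Hypothetical helper for illustration only."""
    dx = joint_b[0] - joint_a[0]
    dy = joint_b[1] - joint_a[1]
    return math.degrees(math.atan2(dy, dx))

# e.g. a forearm layer driven by the elbow -> wrist joints
elbow, wrist = (320, 240), (380, 300)
forearm_angle = limb_rotation_deg(elbow, wrist)  # 45.0 degrees here
```

Doing this per joint pair, per frame, is essentially what turns a skeleton track into puppet animation.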

OpenKinect

Marcin Ignac - All posts

- Data Art with Plask and WebGL @ Resonate: My talk at Resonate'13 about Plask and how I use it for making data-driven visualizations
- Fast Dynamic Geometry in WebGL: Looking for a fast way to update mesh data dynamically
- Piddle: Urine test strip analysis app
- Evolving Tools @ FITC: My talk at FITC Amsterdam about the process behind some of my data visualization and generative art projects and Plask
- Ting Browser: Experimental browsing interface for digital library resources
- Bring Your Own Beamer: BYOB is a "series of exhibitions hosting artists and their beamers"
- Bookmarks as metadata: Every time we bookmark a website we not only save it for later but add a piece of information to the page itself
- Timeline.js: A compact JavaScript animation library with a GUI timeline for fast editing
- SimpleGUI: A new code block developed by me for the Cinder library
- Cindermedusae - making generative creatures: Cindermedusae is quite a special project for me
- Effects in Delta
- ProjectedQuads source code

Guide to Camera Types for Interactive Installations

Guest post by Blair Neal (@laserpilot)

Choosing the right type of camera for your interactive installation is one of the most important technical choices you can make in your initial planning phases. Making the incorrect choice can really impact how well your installation reacts to its victims, and it can also impact its ability to perform robustly across a wide range of environments. Whether you're working in Processing, openFrameworks, Max/MSP/Jitter, Quartz Composer, Cinder, VVVV, or really any artistically geared programming environment, your choice of camera can impact your work no matter the software. Each type of imager typically has some amazing strengths and some really hindering downfalls, so it's important to keep these kinds of questions in mind when planning your installation.

Questions to consider in the planning phase:
- Where is it being set up?

Let's start with the most basic and accessible options for most people working with interactive installations for the first time.
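The environment questions above matter because the most basic option, a plain RGB camera, usually detects people by comparing each frame against a stored background. A minimal sketch of that frame-differencing idea on synthetic grayscale data (illustrative only; real installations would use OpenCV or similar):

```python
def frame_difference(background, frame, threshold=30):
    """Per-pixel background subtraction on grayscale frames
    (lists of rows of 0-255 ints). Returns a binary mask of
    pixels that differ from the stored background by more than
    the threshold -- which is why this approach breaks down
    when ambient light changes under the camera."""
    return [
        [1 if abs(p - b) > threshold else 0
         for p, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 200, 10],
              [10, 190, 10]]
mask = frame_difference(background, frame)
# mask -> [[0, 1, 0], [0, 1, 0]]
```

Depth cameras like the Kinect sidestep this lighting sensitivity by measuring distance instead of brightness, which is one of the trade-offs the camera-choice questions are probing.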

NVIDIA® DIGITS™ DevBox | NVIDIA Developer

Deep learning is one of the fastest-growing segments of the machine learning/artificial intelligence field and a key area of innovation in computing. With researchers creating new deep learning algorithms and industries producing and collecting unprecedented amounts of data, computational capability is the key to unlocking insights from data. GPUs have brought tremendous value to deep learning research over the past couple of years.

The DIGITS DevBox combines the world's best hardware, software, and systems engineering:

- Four TITAN X GPUs with 7 TFlops of single precision, 336.5 GB/s of memory bandwidth, and 12 GB of memory per board
- NVIDIA DIGITS software providing powerful design, training, and visualization of deep neural networks for image classification
- Pre-installed standard Ubuntu 14.04 with Caffe, Torch, Theano, BIDMach, cuDNN v2, and CUDA 7.0
- A single deskside machine that plugs into a standard wall plug, with superior PCIe topology

*Monitor, keyboard, and mouse not included
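Since the spec sheet quotes per-board figures, the machine's aggregate numbers follow by simple multiplication across the four boards:

```python
# Aggregate figures for the four TITAN X boards listed above
# (per-board numbers taken directly from the spec sheet).
gpus = 4
tflops_per_gpu = 7.0         # single precision, TFLOPS
bandwidth_per_gpu = 336.5    # GB/s
memory_per_gpu = 12          # GB

total_tflops = gpus * tflops_per_gpu        # 28.0 TFLOPS
total_bandwidth = gpus * bandwidth_per_gpu  # 1346.0 GB/s
total_memory = gpus * memory_per_gpu        # 48 GB
```

Note that the 12 GB per board is not pooled; each GPU sees only its own memory, so a single network's working set must still fit in 12 GB unless the training framework splits it across devices.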