
Kinect


What does the dot product mean? The dot product is a special method of multiplying vectors.

What does the dot product mean?

It takes the first vector, takes the aligned component of the second vector, and multiplies the two. The dot product always returns a scalar. You can find A · B as follows:

A · B = |A| * |B| * cos(theta)

where |A| indicates the magnitude of A, and theta is the angle of misalignment between vectors A and B. Component-wise, the dot product can be calculated as:

A · B = Ax*Bx + Ay*By + Az*Bz

If the dot product of two vectors is positive, it indicates that the vectors are fully or at least partially aligned.

If the dot product of two vectors is zero, it indicates either that one or both of the vectors is zero, or that the vectors are perpendicular. If the dot product is negative, it indicates that the vectors are partially or fully opposed. A physical example is work:

W = F · d

If you push in the direction of motion, you do positive work on your target, increasing its kinetic energy (and thus its speed). The urban canuk, eh: Comparing Kinect v1 and v2 Depth Data. In this post I show how the new Kinect v2 stacks up against the previous sensor.
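The formulas and sign cases above can be checked numerically. A minimal, self-contained Python sketch (an illustration only, not from the original article):

```python
import math

def dot(a, b):
    """Component-wise dot product: Ax*Bx + Ay*By + Az*Bz."""
    return sum(x * y for x, y in zip(a, b))

def magnitude(a):
    """|A| = sqrt(A . A)."""
    return math.sqrt(dot(a, a))

a, b = (3.0, 0.0, 0.0), (2.0, 2.0, 0.0)

# The component form agrees with |A| * |B| * cos(theta):
theta = math.acos(dot(a, b) / (magnitude(a) * magnitude(b)))
assert abs(dot(a, b) - magnitude(a) * magnitude(b) * math.cos(theta)) < 1e-9

print(dot(a, b))                    # 6.0 -> positive: partial alignment
print(dot((1, 0, 0), (0, 1, 0)))    # 0   -> perpendicular
print(dot((1, 0, 0), (-1, 0, 0)))   # -1  -> fully opposed

# Work as a dot product, W = F . d (force in N, displacement in m):
force, displacement = (5.0, 0.0, 0.0), (2.0, 0.0, 0.0)
print(dot(force, displacement))     # 10.0 J of positive work
```

Note that only the aligned components contribute: any component of the force perpendicular to the displacement adds nothing to the work.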

the urban canuk, eh: Comparing Kinect v1 and v2 Depth Data

I’ve been blogging about the new Kinect as part of the Kinect for Windows Developer Preview. First and foremost, I have to get this legal disclaimer out of the way: this is an early preview of the new Kinect for Windows, so the device, software and documentation are all preliminary and subject to change. As part of this post, I thought it would be fun to build an application that targets both versions of the device and toggles back and forth between them for comparison’s sake.

There’s a little bit of magic involved, as both versions have the same namespaces and assembly name. Make a copy of the v1.8 Microsoft.Kinect.dll as Microsoft.Kinect.v1.dll. In your project, add a reference to this assembly and change its alias from global to v1:

extern alias v1;

You can then refer to the v1 types explicitly, e.g. v1::Microsoft.Kinect.KinectSensor.


How does the SDK transform from the depth space to skeleton space? The conversion is really just simple geometry: projecting a pixel from the depth image into three-dimensional space, based on its distance from the camera.

How does the SDK transform from the depth space to skeleton space

Consider this picture, an overhead view of the camera and its field of view: the blue circle at the bottom is the camera; the grey triangle is the camera's field of view. The two points within the field of view represent two possible depth values, d1 and d2, for a specific pixel at (x, y) in the depth image. The projection into world space (what Kinect calls "skeleton coordinates") uses the depth, and knowledge of the field-of-view angle, to calculate the offset from the center of the image. For d1, this results in an x skeleton coordinate of x1; for d2, the result is x2. The formula used for the calculation is essentially this: xSkeleton = (xNorm - 0.5) * depth * multiplier, where xNorm is the pixel's x coordinate normalized to the [0, 1] range and multiplier is derived from the field-of-view angle. Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream - VBandi's blog - Dotneteers.net. In this part of my Kinect Interaction blog post series, we go deep into the rabbit hole and examine the foundation of Kinect Interactions – the InteractionStream, upon which the entire library is built.
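As a rough illustration of that projection, here is a Python sketch. The 57° horizontal field of view and 640-pixel image width are assumed values for the Kinect v1 depth camera, not taken from the post, and the real SDK conversion accounts for details this ignores:

```python
import math

def depth_to_skeleton_x(x_pixel, depth_m, image_width=640, h_fov_deg=57.0):
    """Project a depth-image column x_pixel at depth_m (meters) to an
    x coordinate in skeleton (world) space.

    xNorm is the pixel position normalized to [0, 1]; the multiplier
    converts the normalized offset from the image center into meters
    at the given depth, based on the horizontal field-of-view angle.
    """
    x_norm = x_pixel / image_width
    multiplier = 2 * math.tan(math.radians(h_fov_deg) / 2)
    return (x_norm - 0.5) * depth_m * multiplier

# The same pixel maps to different world offsets at different depths
# (the d1 vs d2 case described above):
print(depth_to_skeleton_x(480, 1.0))   # offset x1 at depth d1
print(depth_to_skeleton_x(480, 2.0))   # offset x2 = 2 * x1 at depth d2
```

The key property is that the world-space offset scales linearly with depth: the farther away the pixel is, the more real-world distance each pixel of image offset represents.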

Kinect Interactions with(out) WPF – Part III: Demystifying the Interaction Stream - VBandi's blog - Dotneteers.net

This is a risky ride – with no official documentation, we can only count on our trusty reflector, the source code of the Kinect Interaction SDK, and careful exploration. You only need to access the treasures of InteractionStream if you want to go beyond what the KinectRegion and other controls provide – for example, if you want to create your own KinectRegion, zoom a map by gripping it with two hands, or build an entirely new interaction model using two hands along with the press and grip gestures. Skeletal Joint Smoothing White Paper. Measurement errors and noise are by-products of almost any system that measures a physical quantity via a sensor.

Skeletal Joint Smoothing White Paper

The characteristics of this error are usually described by the accuracy and precision of the system, where accuracy is defined as the degree of closeness of a measured quantity to its actual value, and precision is defined as the degree to which repeated measurements are close to each other. An accurate system does not have any systematic error in its measurements and therefore does not add a systematic bias. A precise system produces measurements that are close to each other when the measurement is repeated [1,4]. The accuracy and precision concepts are illustrated in Table 1 for a system that is measuring a hand position in the real world. Just like any measurement system, the joint position data returned by the NUI ST system has some noise.
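The distinction can be made concrete with a small numeric sketch (a generic illustration, not from the white paper):

```python
import statistics

def bias_and_spread(measurements, true_value):
    """Accuracy relates to bias (how far the mean of the measurements
    is from the true value); precision relates to spread (how tightly
    repeated measurements cluster, here the standard deviation)."""
    mean = statistics.mean(measurements)
    return mean - true_value, statistics.pstdev(measurements)

# A precise but inaccurate sensor: readings cluster tightly,
# but are systematically offset from the true hand position (1.0 m).
bias, spread = bias_and_spread([1.10, 1.11, 1.09, 1.10], true_value=1.0)
print(f"bias={bias:.3f} m, spread={spread:.3f} m")  # bias=0.100 m, spread=0.007 m
```

A large bias with a small spread indicates a precise but inaccurate system; a near-zero bias with a large spread indicates an accurate but imprecise one.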

There are cases in which the ST system does not have enough information in a captured frame to determine a specific joint position.
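To give a flavor of what joint smoothing means in practice, here is a minimal exponential smoothing filter in Python. This is a generic noise-reduction technique for a position stream, not the specific filters the white paper describes:

```python
def smooth(positions, alpha=0.5):
    """Exponentially smooth a stream of joint positions (1-D here).

    alpha in (0, 1]: larger values trust the newest measurement more
    (less smoothing, less latency); smaller values smooth harder but
    lag behind fast motion - the classic smoothing trade-off.
    """
    estimate = None
    out = []
    for p in positions:
        estimate = p if estimate is None else alpha * p + (1 - alpha) * estimate
        out.append(estimate)
    return out

noisy = [0.0, 0.2, -0.1, 0.15, 0.05]   # jittery joint coordinate, in meters
print(smooth(noisy))                   # jitter is damped toward a steady value
```

Real skeletal smoothing has to balance exactly this trade-off: too little filtering leaves visible jitter, while too much introduces latency that makes the tracked skeleton lag behind the user.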