PrimeSense reveals Capri, 'world's smallest' 3D sensor. PrimeSense™ Unveils Capri, World's Smallest 3D Sensing Device at CES 2013. TEL AVIV, Israel, Dec. 11, 2012 /PRNewswire/ -- PrimeSense™, the leader in Natural Interaction and 3D sensing solutions, today announced the launch of its next-generation embedded 3D sensor, Capri, demonstrating a revolutionary small form factor and low cost. PrimeSense will debut Capri as part of its World of 3D Sensing suite at the Renaissance Hotel in Las Vegas, January 8-11, at the 2013 International Consumer Electronics Show (CES). PrimeSense's breakthrough reference design uses Capri, PrimeSense's next-generation depth acquisition System on Chip, with improved algorithms including multi-modal 3D sensing techniques. "Using cutting-edge technologies, our newest generation of sensors is robust, accurate and affordable," said Inon Beracha, CEO, PrimeSense.

dp.kinect | Dale Phurrough. Cycling '74 Max external using the Microsoft Kinect SDK. dp.kinect is an external that can be used within the Cycling '74 Max development environment to control and receive data from your Microsoft Kinect. Setup and usage docs are available online. It is based on the official Microsoft Kinect platform; you will need at least v1.5.2 of the Kinect runtime/drivers to use this external. Extensive features include: face tracking, sound tracking, speech recognition, skeletal tracking, depth images, color images, point clouds, accelerometer, multiple Kinects, and more… It was primarily developed and tested against Max 6.x for 32- and 64-bit platforms, and has only been casually tested against Max 5.1.9. It has the same inlets and outlets as my other external, jit.openni. Known Issues. Licensing and Terms of Use: the dp.kinect software is free for evaluation and non-commercial use. This software comes with ABSOLUTELY NO WARRANTY. Download

www.grasp3d.com Maxwell: The Anatomical Head Phantom » Daniel Stough | ENGINEER. Introduction: To start off, I should answer some of the questions that you are most likely asking yourself after having read the box above. Q: What the heck is a phantom? A: A phantom is a bottle of fluid used to perform testing on an MRI scanner. The fluid can be a number of different solutions, based on the desired effect and test parameters. Q: Why do you need a phantom? Q: Why is this phantom so special? Hopefully you now understand the motivations of the project: I didn't want to be a lab rat, and I didn't think anyone else should have to be one either. There are other groups working on similar projects, like adding several different compartments for different tissues in the head. For more information, see my MRI Primer [FORTHCOMING], which gives a short background on my research in the Bioengineering Department at Pitt. Reverse Engineering: How do you take a list of points and turn them into a solid shape? This was the most difficult task in the creation of Maxwell.
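The "points to a solid shape" question above is usually answered with surface reconstruction. As a rough illustration only, and not necessarily the workflow used for Maxwell, here is a minimal sketch using the Open3D library with Poisson reconstruction; the file names and parameter values are placeholders.

```python
# Minimal sketch: turning a scanned point cloud into a watertight mesh with Open3D.
# One common approach (normal estimation + Poisson surface reconstruction), shown
# for illustration; not necessarily the workflow used for the Maxwell phantom.
import open3d as o3d

# Load the raw scan (file name is a placeholder)
pcd = o3d.io.read_point_cloud("head_scan.ply")

# Poisson reconstruction needs consistently oriented normals
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=30)

# Fit an implicit surface and extract a triangle mesh from it
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Trim the mesh to the scanned region, then save the solid surface
mesh = mesh.crop(pcd.get_axis_aligned_bounding_box())
o3d.io.write_triangle_mesh("head_solid.ply", mesh)
```

Poisson reconstruction is only one option; ball-pivoting or alpha shapes are alternatives that also turn a dense scan into a mesh.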

EASY Kinect 3D Scanner! Hey Instructables community! In this instructable I will show you how to make a DIY 3D scanner using an Xbox 360 Kinect. This instructable is very easy as long as you are patient and follow my instructions in the video. The links for the downloads from the video: GPU update from Nvidia: I am entering this contest to help others learn how to easily 3D scan items or people using the Kinect. God bless. Microchip Technology Inc. - Microchip's New GestIC® Technology Enables Mobile-Friendly 3D Gesture Interfaces. CHANDLER, Ariz., Nov. 13, 2012 [NASDAQ: MCHP] — Microchip Technology Inc., a leading provider of microcontroller, analog and Flash-IP solutions, today announced its patented GestIC® technology, which enables the next dimension in intuitive, gesture-based, non-contact user interfaces for a broad range of end products. With power consumption as low as 150 microwatts in its active sensing state, the MGC3130 enables always-on 3D gesture recognition—even for battery-powered products where power budgets are extremely tight.

New Kinect Gets Closer to Your Body [Videos, Links]. The new, svelte-looking Kinect. It's not that it looks better, though, that matters: it's that it sees better. Courtesy Microsoft. It's a new world for media artists, one in which we look to the latest game console news because it impacts our art-making tools. And so it is that, along with a new Xbox, Microsoft has a new Kinect. The new Kinect uses standard infrared tracking (ideal for in-the-dark footage and accurate tracking), but also returns RGB imagery. The big news is tracking that gets closer to your body, breaking analysis into smaller bits. The original sensor mapped people in a room using "structured light": it would project infrared light into the room, then measure how that light was deformed by the room's surfaces to generate a 3-D depth map. Xbox One Revealed [Wired.com] Say what? The upshot of all this is better tracking: more people can be tracked independently as discrete individuals, without having to add more Kinects (as some hackers did) – up to six, says Microsoft. What Will Hackers Do with the New Kinect?

Optics and Lasers in Engineering - Holovideo: Real-time 3D range video encoding and decoding on GPU. Abstract: We present a 3D video-encoding technique called Holovideo that is capable of encoding high-resolution 3D videos into standard 2D videos, and then decoding the 2D videos back into 3D rapidly without significant loss of quality. (A toy sketch of this depth-to-color encoding idea appears after the ARTreat interview below.) Artreat: A Multiscale Model For Prediction Of Atherosclerosis Progression. In this interview, the coordinator, Prof. Oberdan Parodi, talks us through the project. Prof. Parodi, could you please describe in a few words what atherosclerosis is? Atherosclerosis is a disease of the arterial blood vessels (arteries), in which the walls of the blood vessels become thickened and hardened by "plaques." What are the main goals ARTreat focused on? ARTreat aimed to develop a multiscale and predictive model which integrates: 3D image reconstruction, blood flow modeling, modeling of the initiation and progression of the plaque, and plaque characterization. Around this three-level multiscale model, a treatment decision support system and training services have also been developed. What are the key success factors of the approach proposed by the project? Early detection and prediction of the progression of atherosclerosis are the crucial requirements for improved treatment and for reducing mortality and morbidity. Has the ARTreat system been tested in a real clinical setting?
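Returning to the Holovideo abstract above: the general idea of packing a depth map into the channels of an ordinary color image, then recovering it later, can be shown with a toy round trip. This sketch is only inspired by the paper's approach; the number of fringe periods and the "stair" channel are illustrative choices, and a real codec-friendly encoder would add 8-bit quantization handling and boundary correction.

```python
# Toy round trip inspired by the Holovideo idea: pack a depth map into three
# image channels, then recover it. Simplified illustration, not the exact
# encoding from the paper (which is tuned for 8-bit 2D video codecs).
import numpy as np

N = 8  # number of sine periods across the depth range (illustrative choice)

def encode(z):
    """z: depth normalized to [0, 1). Returns an HxWx3 float image."""
    r = 0.5 + 0.5 * np.sin(2 * np.pi * N * z)
    g = 0.5 + 0.5 * np.cos(2 * np.pi * N * z)
    b = np.floor(N * z) / N          # coarse "stair" channel used for unwrapping
    return np.dstack([r, g, b])

def decode(img):
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    frac = np.arctan2(r - 0.5, g - 0.5) / (2 * np.pi) % 1.0  # wrapped phase
    k = np.round(b * N)               # which sine period the pixel sits in
    return (k + frac) / N

# Round-trip check on a synthetic depth ramp
z = np.linspace(0, 0.999, 640).reshape(1, -1).repeat(480, axis=0)
assert np.allclose(decode(encode(z)), z, atol=1e-6)
```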

Virtual Fitting Room » Case studies. For REAL statistics, REAL measures of performance and REAL opinions on the Fits.me Virtual Fitting Room, read the case studies below. There's no need to take our word for it when there are others to speak for us. Henri Lloyd case study – October 2013: "The stats we're getting are overwhelming evidence that our virtual fitting room….and as a result it has smashed the overall returns rate." – Graham Allen. Thomas Pink case study – April 2013: "For any optional button to get a click-through rate of almost 20% is pretty impressive." – Nadine Sharara. Hawes & Curtis case study – September 2012: "Have we seen improved conversion rates with Fits.me?" Pretty Green case study – September 2012: "And 79% of all virtual fitting room users reported that either they wouldn't buy without the fitting room or that they would have ordered the wrong size – or even sizes, guaranteeing that we would get returns." – Tim Kalic. Benefits of a fitting room

CEVA Gesture Recognition solutions for mobile and home entertainment devices. eyeSight is a leader in Touch Free Interfaces for digital devices. The company was established with the vision to revolutionize the way people interact with digital devices, creating an interaction which is both simple and intuitive. eyeSight's solution is based on advanced image processing and machine vision algorithms, which analyze real-time video input from common built-in cameras. CEVA DSP Cores Supported. Partnering in Market Solutions: mobile phones, tablets, portable media players, game consoles, TV media centers and smart home controllers. Product Offering: eyeSight's technology powers Touch Free UI solutions which enhance the user experience while operating devices and applications.

therenect - A virtual Theremin for the Kinect Controller. The Therenect is a virtual Theremin for the Kinect controller. It defines two virtual antenna points, which allow controlling the pitch and volume of a simple oscillator. The distance to these points can be adjusted by freely moving the hand in three dimensions or by reshaping the hand, which allows gestures that should be quite similar to playing an actual Theremin. This video was recorded prior to this release; an updated video introducing the improved features of the current version will follow soon. Configuration. Oscillator: Theremin, Sinewave, Sawtooth, Squarewave. Tonality: continuous mode, or Chromatic, Ionian and Pentatonic scales. MIDI: optionally send MIDI note on/off events to the selected device & channel. Kinect: adjust the sensor camera angle. Acknowledgments: This application has been created by Martin Kaltenbrunner at the Interface Culture Lab.
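As a rough sketch of how a theremin-style mapping like the Therenect's could work in principle: the hand's distance to two fixed points is turned into pitch and volume. The antenna coordinates, distance band, and pitch range below are made up for illustration and are not taken from the actual application.

```python
# Toy sketch of the Therenect idea: hand distance to two virtual "antenna"
# points controls pitch and volume. Coordinates and ranges are illustrative,
# not values used by the actual application.
import math

PITCH_ANTENNA = (0.3, 0.0, 1.2)    # metres, in the Kinect camera frame (made up)
VOLUME_ANTENNA = (-0.3, 0.0, 1.2)

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def hand_to_note(hand_xyz,
                 d_range=(0.1, 0.6),          # usable distance band in metres
                 pitch_range=(220.0, 880.0)):  # A3..A5 in Hz
    """Map a hand position to (frequency_hz, volume_0_to_1), theremin style."""
    def norm(d):
        lo, hi = d_range
        return min(max((d - lo) / (hi - lo), 0.0), 1.0)

    # Closer to the pitch antenna -> higher pitch (like a real theremin).
    # Exponential mapping so equal hand movements feel like equal musical steps.
    p = 1.0 - norm(dist(hand_xyz, PITCH_ANTENNA))
    freq = pitch_range[0] * (pitch_range[1] / pitch_range[0]) ** p

    # Closer to the volume antenna -> quieter (also theremin-like)
    vol = norm(dist(hand_xyz, VOLUME_ANTENNA))
    return freq, vol

print(hand_to_note((0.25, 0.05, 1.1)))
```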

PrimeSense's depth acquisition is enabled by "light coding" technology. The process codes the scene with near-IR light, light that returns distorted depending upon where things are. The solution then uses a standard off-the-shelf CMOS image sensor to read the coded light back from the scene, using various algorithms to triangulate and extract the 3D data. The product analyzes scenery in three dimensions with software, so that devices can interact with users.
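The light-coding pipeline described above can be approximated in textbook form: compare the observed IR pattern against a stored reference image of the same pattern at a known distance, turn the horizontal shift into disparity, and triangulate. This is only a hedged sketch of the general principle; PrimeSense's on-chip algorithms are proprietary, and the focal length, baseline, and window sizes below are assumed values.

```python
# Hedged sketch of the general structured-light / light-coding principle:
# correlate the observed IR image against a reference image of the projected
# pattern captured at a known distance, convert the horizontal shift into
# disparity, and triangulate. Numbers below are illustrative assumptions, not
# PrimeSense's actual parameters or algorithms.
import numpy as np

FOCAL_PX = 580.0      # IR camera focal length in pixels (assumed)
BASELINE_M = 0.075    # projector-to-camera baseline in metres (assumed)
REF_DEPTH_M = 1.0     # distance at which the reference image was captured

def block_disparity(observed, reference, y, x, win=9, search=32):
    """Horizontal shift of a small window, found by normalized correlation.
    Assumes the window and search range stay inside both images."""
    h = win // 2
    patch = observed[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    patch -= patch.mean()
    best_d, best_score = 0, -np.inf
    for d in range(-search, search + 1):
        ref = reference[y - h:y + h + 1, x - h + d:x + h + 1 + d].astype(np.float64)
        ref -= ref.mean()
        denom = np.linalg.norm(patch) * np.linalg.norm(ref)
        score = (patch * ref).sum() / denom if denom > 0 else -np.inf
        if score > best_score:
            best_score, best_d = score, d
    return best_d

def depth_from_disparity(d_px):
    """Triangulate against the reference plane: 1/Z = 1/Z0 + d/(f*b),
    with d measured so points closer than the reference plane give positive d."""
    return 1.0 / (1.0 / REF_DEPTH_M + d_px / (FOCAL_PX * BASELINE_M))
```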

PrimeSense System on a Chip (SoC)

The CMOS image sensor works with the visible video sensor to enable the depth map provided by the PrimeSense SoCs Carmine (PS1080) and Capri (PS1200) to be merged with the color image. The SoCs perform a registration process so the color image (RGB) and depth (D) information are aligned properly.[8] The light-coding infrared patterns are deciphered to produce a VGA-size depth image of the scene. The SoC delivers visible video, depth, and audio information in a synchronized fashion via the USB 2.0 interface. Host CPU requirements are minimal, as all depth acquisition algorithms run on the SoC itself.
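To make the registration step concrete, here is a conceptual sketch of what aligning depth to color involves when done in software: deproject each depth pixel with the depth camera's intrinsics, move it into the color camera's frame with the extrinsics, and reproject it onto the color image grid. On the PS1080/PS1200 this runs in hardware; the intrinsics and extrinsics below are placeholder values, not factory calibration.

```python
# Conceptual sketch of depth-to-color registration (the alignment the SoC
# performs in hardware). Intrinsics/extrinsics are placeholders; real values
# come from factory calibration of the specific device.
import numpy as np

# Pinhole intrinsics: fx, fy, cx, cy (illustrative values)
K_DEPTH = dict(fx=580.0, fy=580.0, cx=320.0, cy=240.0)
K_COLOR = dict(fx=525.0, fy=525.0, cx=320.0, cy=240.0)

# Rigid transform from the depth camera to the color camera
# (illustrative: a 2.5 cm horizontal offset, no rotation)
R = np.eye(3)
T = np.array([0.025, 0.0, 0.0])

def register_depth_to_color(depth_m):
    """Resample a depth image (metres) into the color camera's pixel grid."""
    h, w = depth_m.shape
    registered = np.zeros((h, w), dtype=np.float64)
    v, u = np.mgrid[0:h, 0:w]
    z = depth_m
    valid = z > 0

    # 1) Deproject depth pixels into 3D points in the depth camera frame
    x = (u - K_DEPTH["cx"]) * z / K_DEPTH["fx"]
    y = (v - K_DEPTH["cy"]) * z / K_DEPTH["fy"]
    pts = np.stack([x, y, z], axis=-1)

    # 2) Move the points into the color camera frame
    pts = pts @ R.T + T

    # 3) Reproject onto the color image grid
    zc = np.where(valid, pts[..., 2], 1.0)   # avoid divide-by-zero for empty pixels
    uc = np.round(pts[..., 0] * K_COLOR["fx"] / zc + K_COLOR["cx"]).astype(int)
    vc = np.round(pts[..., 1] * K_COLOR["fy"] / zc + K_COLOR["cy"]).astype(int)

    inside = valid & (pts[..., 2] > 0) & (uc >= 0) & (uc < w) & (vc >= 0) & (vc < h)
    registered[vc[inside], uc[inside]] = pts[..., 2][inside]
    return registered
```

A real implementation would also resolve occlusions (several depth pixels landing on the same color pixel) and fill the small holes left by the resampling.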
