
3D Scanning with Kinect


Technology - Stereolabs. SR4500. Recent Advances in Depth Cameras. A natural application for depth cameras is Simultaneous Localization and Mapping (SLAM).

Low-Cost Depth Cameras (aka Ranging Cameras or RGB-D Cameras) to Emerge in 2010?

Dieter Fox's work on building full 3D indoor maps with depth cameras is quite impressive. The video below shows the map-creation process, with the camera view in the upper left and the depth image in the lower left. A fly-through of the map conjures images of an indoor Google Street View; indeed, the resulting 3D map contains a great deal of information. Many researchers speculate that these sensors could revolutionize robotic perception, much as laser rangefinders once did: after all, they essentially combine the functionality of a camera and a laser rangefinder in one convenient device.

Brief Overview of Depth Camera Systems

Literally dozens of ranging / depth cameras have been developed. I have no experience with any of these depth cameras, but I would love to hear others' impressions. The Dieter Fox Connection. Right now, newborn robots are a bit like mosquitoes.

Kinect-based robotic mapping puts autonomy back on the menu (with video)

A brain as tiny as a mosquito’s doesn’t really understand the world as you and I do; it doesn’t hold a conceptual model of its environment for navigation and planning. Rather, it responds to stimuli according to a few very simple rules: CO2 in the air means approach in a zig-zag pattern, sudden air pressure from one side means dive and evade orthogonally.
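The stimulus-response behaviour described above can be sketched as a tiny rule table that maps raw stimuli directly to actions, with no internal map of the world. Everything here (the function and stimulus names) is illustrative, not taken from any real system.

```python
def react(stimuli):
    """Map raw stimuli directly to an action; no model of the environment."""
    if stimuli.get("pressure_spike"):
        return "dive_and_evade"   # sudden air pressure: dive and evade
    if stimuli.get("co2"):
        return "zigzag_approach"  # CO2 in the air: approach in a zig-zag
    return "wander"               # default behaviour

print(react({"co2": True}))             # zigzag_approach
print(react({"pressure_spike": True}))  # dive_and_evade
```

The evasion rule is checked first so a threat overrides an attractant, but there is still no memory between calls, which is exactly why any "map" such an agent has of its surroundings is fleeting.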

Any map of the insect's surroundings is fleeting and ultimately not that useful. Keeping track of the world is a complex proposition. The only way roboticists have found around that problem is to load robots with precompiled maps of relevant locations; when iRobot introduced the Ava 500, the company made sure to point out that the telepresence machine would need to be loaded with detailed maps of the offices it is expected to navigate. New research out of MIT looks to change that, using extremely low-cost hardware to achieve advanced, potentially revolutionary goals.

Kinect for Windows 1.7, 1.8. KinectFusion provides 3D object scanning and model creation using a Kinect for Windows sensor.

Kinect Fusion

The user can paint a scene with the Kinect camera and simultaneously see, and interact with, a detailed 3D model of the scene (Figure 1). Kinect Fusion runs at interactive rates on supported GPUs, and at non-interactive rates on a wider variety of hardware; running at non-interactive rates may allow larger volume reconstructions. Ensure you have compatible hardware (see the Tech Specs section). Kinect Fusion can process data either on a DirectX 11 compatible GPU with C++ AMP, or on the CPU, by setting the reconstruction processor type during reconstruction volume creation. The minimum video-card requirement for GPU-based reconstruction has not been specifically tested for Kinect Fusion 1.8. In the processing pipeline (Figure 2), the first stage is depth map conversion; the later stages produce the 3D surface reconstruction.
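At the heart of KinectFusion's surface reconstruction is volumetric integration: each voxel keeps a running weighted average of a truncated signed distance (TSDF) to the observed surface. Below is a minimal sketch of that averaging step under strong simplifying assumptions: a 1D "volume" of voxels along the camera's optical axis and one depth reading per frame. The truncation distance and all names are assumed for illustration; the real system integrates a full 3D voxel grid on the GPU.

```python
import numpy as np

MU = 0.1  # truncation distance in metres (assumed value)

def integrate(tsdf, weights, voxel_z, depth):
    """Fuse one depth measurement into the running TSDF average."""
    sdf = depth - voxel_z                    # signed distance to the surface
    mask = sdf > -MU                         # skip voxels far behind the surface
    tsdf_new = np.clip(sdf / MU, -1.0, 1.0)  # truncate to [-1, 1]
    w_new = 1.0                              # per-measurement weight
    tsdf[mask] = (tsdf[mask] * weights[mask] + tsdf_new[mask] * w_new) / (
        weights[mask] + w_new)               # weighted running average
    weights[mask] += w_new
    return tsdf, weights

voxel_z = np.linspace(0.0, 2.0, 21)          # voxel centres along the ray
tsdf = np.zeros_like(voxel_z)
weights = np.zeros_like(voxel_z)
for noisy_depth in (1.02, 0.98, 1.00):       # three noisy views of a wall at 1 m
    tsdf, weights = integrate(tsdf, weights, voxel_z, noisy_depth)
# The zero-crossing of tsdf now sits near z = 1.0, the fused surface estimate.
```

Averaging over frames is what smooths sensor noise into a clean surface, and it is why scanning the same region from multiple angles improves the model.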

3D mapping of rooms, again. Download: PDF, "Kintinuous: Spatially Extended KinectFusion" by T. Whelan, M. Kaess, M.F. Fallon, H. Johannsson, J.J.

Kintinuous: Spatially Extended KinectFusion

In this paper we present an extension to the KinectFusion algorithm that permits dense mesh-based mapping of extended-scale environments in real-time. BibTeX entry: DSpace@MIT Technical Report. Skanect by Occipital. Matterport 3D models of real interior spaces. With KScan3D, you can quickly and easily scan, edit, process, and export data for use with your favorite 3D modeling software.

How It Works

Here's how it works: Kinect and Xtion sensors gather data, and KScan3D converts this data into a 3D mesh. Capture data from multiple angles to create a complete 360-degree mesh; KScan3D can capture and align 3D meshes automatically. Once you've captured the data you need, you can use KScan3D to delete unneeded points, smooth data, and more. KScan3D also lets you combine and finalize meshes with a number of customizable settings to help you get the results you need. You can export final meshes in .fbx, .obj, .stl, .ply, and .asc formats for use with your favorite 3D modeling software. Use exported meshes for visual effects, games, CAD / CAM, 3D printing, online / web visualization, and other applications.
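To make the export step concrete: formats like .obj are plain-text lists of vertices and faces. Below is a minimal Wavefront OBJ writer for a triangle mesh; the function name and the one-triangle example are made up for illustration, not part of KScan3D.

```python
def write_obj(path, vertices, faces):
    """Write a triangle mesh as Wavefront OBJ (face indices are 1-based)."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")       # one vertex per 'v' record
        for a, b, c in faces:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")  # OBJ counts from 1

# The smallest possible "scan": a single triangle.
write_obj("triangle.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```

A real scan exported this way would simply have many thousands of `v` and `f` records, which is why the same file opens in most 3D modeling, printing, and game tools.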

Visit our Gallery to view and download sample scans we've captured in-house with KScan3D.