ReconstructMe and Realistic 3D Scans | 3D Printing at UMW May 10, 2012 in Scanning by Tim Owens I've been playing recently with a great new piece of software called ReconstructMe, which is free for now and Windows-only. It uses the Xbox Kinect (an incredibly worthwhile investment for 3D scanning) to create 3D models. The best thing about this software is that it stitches scans together in real time, so 360-degree scanning is easier than it has ever been, and the results are stunning. For this print I sat in a rolling chair with the Kinect mounted at an angle on a tripod and slowly turned 360 degrees to build the model. I then used Tony Buser's excellent video on cleaning up 3D scans to patch the holes, smooth out the rough areas, decrease the polygon count, and get a nice flat cut on the bottom to print on. The advances being made in this field are moving so fast that it's hard to keep up.
3D Scan 2.0 Scanning The only things you need are our framework, the Kinect, and some AR markers; for the markers we constructed Scan Tablets. Point Cloud After scanning the object you get a colored point cloud. See more point clouds in our 3D Gallery! Reconstruction A Poisson Surface Reconstruction algorithm produces a mesh, which is then colored with our color mapping algorithm. Read more about Poisson Surface Reconstruction. Quick Start » Get ready to scan: First step: download our framework. Second step: read the instructions to compile the framework. » Any questions? Check the Documentation.
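To make the "colored point cloud" output concrete, here is a minimal sketch (not part of the 3D Scan 2.0 framework; the function name and format choice are my own) of what such a cloud is: each point carries XYZ coordinates plus an RGB color. The writer emits ASCII PLY, a format most point-cloud viewers can open.

```python
# Sketch: a colored point cloud is just a list of (x, y, z, r, g, b) samples.
# write_colored_ply is a hypothetical helper, not framework API.

def write_colored_ply(path, points):
    """points: list of (x, y, z, r, g, b) tuples; colors are 0-255 ints."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(points))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for x, y, z, r, g, b in points:
            f.write("%f %f %f %d %d %d\n" % (x, y, z, r, g, b))

# two sample points: one red, one green, both half a meter from the sensor
cloud = [(0.0, 0.0, 0.5, 255, 0, 0), (0.1, 0.0, 0.5, 0, 255, 0)]
write_colored_ply("scan.ply", cloud)
```

A real scan would contain hundreds of thousands of such points, one per valid Kinect depth pixel.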
PCL - Point Cloud Library Tutorials for Kinect Programming Using KinFu Large Scale to generate a textured mesh This tutorial demonstrates how to use KinFu Large Scale to produce a mesh (in meters) from a room, and apply texture information in post-processing for a more appealing visual result. The first part of this tutorial shows how to obtain the TSDF cloud from KinFu Large Scale. The second part shows how to convert the TSDF cloud into a uniform mesh. The third part shows how to texture the obtained mesh using the RGB images and poses we obtained from KinFu Large Scale. TSDF Cloud This section describes the TSDF cloud, which is the expected output of KinFu Large Scale. You may be wondering: "What is the difference between a TSDF cloud and a normal point cloud?" Figure 1: The cube is subdivided into a set of voxels. As you may already know, the TSDF volume is stored on the GPU as a voxel grid. At the time of data extraction, the grid is traversed from front to back, and the TSDF value is checked for each voxel. Figure 2: A representation of the TSDF volume grid on the GPU.
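The voxel-grid idea behind the TSDF volume can be illustrated with a toy example (this is a self-contained Python sketch, not PCL or GPU code; the sphere "scene" and all names are invented for illustration). Each voxel stores the signed distance to the nearest surface, truncated to [-1, 1]; traversing a ray front to back, the surface lies where the stored value changes sign.

```python
import math

N = 16        # voxels per side of the cubic grid
TRUNC = 0.1   # truncation distance in world units

def tsdf_value(x, y, z):
    # signed distance to a sphere of radius 0.3 centered in the unit cube:
    # positive outside the surface, negative inside
    d = math.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2) - 0.3
    return max(-1.0, min(1.0, d / TRUNC))   # truncate and normalize

# build the voxel grid, sampling the TSDF at each voxel center
grid = [[[tsdf_value((i + 0.5) / N, (j + 0.5) / N, (k + 0.5) / N)
          for k in range(N)] for j in range(N)] for i in range(N)]

# traverse one ray front to back (fixed i, j) and find the zero crossing,
# which marks where the surface intersects the ray
i = j = N // 2
crossing = None
for k in range(N - 1):
    a, b = grid[i][j][k], grid[i][j][k + 1]
    if a > 0 >= b:   # sign change: the ray enters the surface here
        crossing = (k, k + 1)
        break
print("surface between voxels", crossing)
```

KinFu's extraction stage does the same traversal over the GPU grid, but interpolates the exact zero-crossing position to place surface points.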
Kinect Fusion Coming to Kinect for Windows - Kinect for Windows Blog Last week, I had the privilege of giving attendees at the Microsoft event BUILD 2012 a sneak peek at an unreleased Kinect for Windows tool: Kinect Fusion. Kinect Fusion was first developed as a research project at the Microsoft Research lab in Cambridge, U.K. As soon as the Kinect for Windows community saw it, they began asking us to include it in our SDK. Now, I'm happy to report that the Kinect for Windows team is, indeed, working on incorporating it and will have it available in a future release. In this Kinect Fusion demonstration, a 3-D model of a home office is created by capturing multiple views of the room and the objects on and around the desk. Kinect Fusion reconstructs a 3-D model of an object or environment by combining a continuous stream of data from the Kinect for Windows sensor: it takes the incoming depth data and uses the sequence of frames to build a highly detailed 3-D map of objects or environments. Key Links
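The core of "combining a continuous stream of data" can be sketched as a per-voxel running weighted average: each new depth frame's measurement is folded into the stored value, so sensor noise averages out over the sequence. This is an illustrative sketch of that fusion idea, not Microsoft's implementation; the function and constant names are invented.

```python
# Each voxel holds (tsdf, weight). Fusing a new measurement takes a
# weighted average, and the weight is capped so the model can still
# adapt if the scene changes.

MAX_WEIGHT = 64

def fuse(voxel, new_d):
    """voxel: (tsdf, weight); new_d: TSDF value measured by the new frame."""
    d, w = voxel
    fused = (d * w + new_d) / (w + 1)
    return (fused, min(w + 1, MAX_WEIGHT))

# simulate four noisy depth observations of a surface whose true TSDF is 0.2
v = (0.0, 0)
for measurement in (0.25, 0.15, 0.22, 0.18):
    v = fuse(v, measurement)
print(v)  # the fused value converges toward 0.2
```

Running this over every voxel touched by every incoming frame is what turns a jittery single depth image into the smooth, highly detailed model shown in the demo.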
Michael Kaess @ CSAIL Here are some of my current and past projects at MIT. For my earlier work, please see my Georgia Tech web page. Kintinuous: Live Dense Mapping Kintinuous provides live dense 3D mapping of extended environments using RGB-D sensors such as Microsoft's Kinect. Unlike the original KinectFusion algorithm, we are not restricted to a small volume by the available memory on the GPU. Building-Scale Lifelong Mapping In recent work we created a map of our ten-floor building, the Stata Center. iSAM2 and the Bayes Tree Exploring the connection between sparse linear algebra and graphical models yielded new insights into the probabilistic interpretation of matrix factorization, formalized in the Bayes tree (WAFR10 paper). Ship Hull Inspection We integrated localization and mapping with closed-loop control of an autonomous underwater vehicle for in-water inspection of large ships. Visual SLAM All Source Positioning and Navigation (ASPN) Cooperative Mapping and Localization
Matterport 3D camera aims to map your interior world, display it from the cloud March 07, 2013 08:00 AM Eastern Time Matterport Unveils $5.6 Million Funding to 3D Capture Everything, Even the Kitchen Sink MOUNTAIN VIEW, Calif.--(BUSINESS WIRE)--Matterport, the leader in rapid 3D scanning of spaces and objects, announced today that it raised a $5.6 million Series A financing round to create immersive experiences with interior spaces, starting with your home. "We welcome the participation of these investors," said Matt Bell, inventor of the camera and Matterport co-founder. "Matterport makes the extraordinary look easy," said Peter Hebert, Co-Founder and Managing Partner of Lux Capital. Matterport will soon launch the first-ever 3D camera and interactive viewing platform – allowing its community to create virtual models of any indoor space and access the resulting 3D image from a web browser or iPad – anytime, anywhere. Matterport's cloud engine changes the way we understand and view spaces by allowing users to post and share them with the world.
Kinect-carrying drone automatically builds 3D maps of rooms Google Street View, eat your heart out: an MIT-built quadrocopter uses Microsoft Kinect, and some smart odometry algorithms, to fly around a room and spit out a 3D map of the environment. The drone itself is a lightweight UAV with four rotors, an antenna, protective and stabilising shielding, and the guts of a Kinect sensor. It communicates with a nearby laptop, but all sensing and computation to determine position are performed onboard by an internal computer. The quadrocopter runs an MIT-developed real-time visual odometry algorithm to work out location and velocity estimates, which helps stabilise the vehicle during its fully autonomous 3D flight. Most small drones use GPS or pre-coded information about the area to avoid bumping into things. Odometry, by contrast, is the process of using data from an onboard sensor to figure out position, like measuring how far a robot's legs or wheels have moved to determine how far it has travelled.
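The odometry idea described above can be shown with a simple dead-reckoning sketch (this is not MIT's visual odometry algorithm, just the underlying principle; the function name and example path are invented). Each step reports a distance travelled and a change in heading, and integrating those increments yields a pose estimate.

```python
import math

def integrate(measurements, x=0.0, y=0.0, theta=0.0):
    """Dead reckoning: measurements is an iterable of
    (distance_travelled, heading_change_in_radians) pairs."""
    for dist, dtheta in measurements:
        theta += dtheta                 # update heading first
        x += dist * math.cos(theta)     # then advance along the new heading
        y += dist * math.sin(theta)
    return x, y, theta

# drive a 1 m square: one straight leg, then three legs each preceded
# by a 90-degree left turn
steps = [(1.0, 0.0)] + [(1.0, math.pi / 2)] * 3
x, y, theta = integrate(steps)
print(round(x, 6), round(y, 6))  # ends back near the starting point
```

Visual odometry does the same integration, but estimates each step's motion by matching features between consecutive camera or depth frames instead of counting wheel rotations; small per-step errors accumulate either way, which is why SLAM systems add loop closure on top.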
rgbdslam electric: Cannot load information on name: rgbdslam, distro: electric, which means that it is not yet in our index. Please see this page for information on how to submit your repository to our index. fuerte: Documentation generated on December 26, 2012 at 03:53 PM. The same "not yet in our index" notice applies to the groovy, hydro, and indigo distros. rosdep update rosdep install rgbdslam_freiburg