
Www.diy3dscan.com

Related:  3D scanning

123D Scanner - Home made 3D Scanner. Hey - have a look at my new project HERE. In this project I built a 3D scanner that enables generating 3D models of physical objects. The files can later be viewed in 3D software (GLC Player, SketchUp, Rhino, or various websites) and even converted into an .STL file and 3D printed. The software for this project is completely free: I am using Autodesk's 123D Catch. Link: 123D Catch. 123D Catch is great software: it requires taking many photos of an object from all around it and uploading them into the software, and it returns a 3D file. Since I really liked the solution but did not want to take the photos myself, I built an instrument that does that - described below. Please note that this document does not intend to explain how to use 123D Catch (this can be found here).

3D Printing Basics | Beginner's guide | 3D printers. 1. What is 3D printing? 3D printing is also known as desktop fabrication or additive manufacturing. It is a prototyping process whereby a real object is created from a 3D design. The digital 3D model is saved in STL format and then sent to a 3D printer. 2. 3D printing technologies. There are several different 3D printing technologies. SLS (selective laser sintering), FDM (fused deposition modeling) and SLA (stereolithography) are the most widely used technologies for 3D printing. One video describes how laser-sintering processes melt fine powders, bit by bit, into 3D shapes; another shows how FDM works; a third explains the process of stereolithography (SLA). Generally, the main considerations are speed, cost of the printed prototype, cost of the 3D printer, choice and cost of materials, and color capabilities. 3. On October 5, 2011, Roland DG Corporation introduced the new iModela iM-01. This smallest 3D printer weighs 1.5 kilograms and costs around 1200 Euros.

3D Printed Photograph. All of these 3D models were generated algorithmically from Processing using the ModelBuilder library by Marius Watz. This library allows you to save 3D geometries in the STL file format; STL files that form a watertight mesh can be printed by a 3D printer. To get started using this code yourself, download the latest version of the ModelBuilder library, unzip the file, and copy the folder into Processing's "libraries" folder. If you have installed the predecessor to the ModelBuilder library (called the Unlekker library), you will need to delete it. Once this is done, restart Processing. Download the latest version of the Processing sketch from GitHub (download it as a zip by clicking on the cloud button). To run the sketch, replace the part in quotes in the following line: String name = "your_file_name_here"; with the name of your greyscale image.
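The sketch's core idea, treating each pixel's brightness as a height and writing the resulting surface as STL triangles, can be illustrated outside Processing as well. Below is a minimal Python sketch of that idea (the function name and mesh details are my own, not the ModelBuilder API); note it emits only the top surface, so unlike the post's watertight meshes it would still need walls and a base before printing:

```python
# Minimal ASCII STL writer: turns a 2D grayscale array into a height-field
# surface mesh (top surface only, so NOT watertight on its own).

def heightfield_to_stl(heights, name="photo"):
    rows, cols = len(heights), len(heights[0])
    facets = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            # Four corners of one grid cell; z comes from pixel brightness.
            a = (c,     r,     heights[r][c])
            b = (c + 1, r,     heights[r][c + 1])
            d = (c,     r + 1, heights[r + 1][c])
            e = (c + 1, r + 1, heights[r + 1][c + 1])
            facets.append((a, b, d))   # split each cell into two triangles
            facets.append((b, e, d))
    lines = ["solid %s" % name]
    for tri in facets:
        lines.append("  facet normal 0 0 1")  # most viewers recompute normals
        lines.append("    outer loop")
        for x, y, z in tri:
            lines.append("      vertex %f %f %f" % (x, y, z))
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append("endsolid %s" % name)
    return "\n".join(lines)

# Example: a tiny 3x3 "image" whose center pixel is brightest.
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
stl_text = heightfield_to_stl(img)
```

A real image would of course be loaded and converted to a 2D brightness array first; the STL text can then be written to a `.stl` file and opened in Meshlab or a slicer.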

Improve 3D scanning with ReconstructMe (Page 1) — Solidoodle Discussion — SoliForum - 3D Printing Community. I learned about ReconstructMe at the Pittsburgh Mini Maker Faire this weekend and had to try it out! It took about one full night to download all the right drivers and software, but it wasn't very tough and worked on the first try. This is so much less clunky and more accurate than 123D Catch that there seems to really be some potential here. I was wondering if anyone else has messed around with improving the resolution and/or using hacks such as putting a reading lens in front of the Kinect camera. For those who don't know, it is a program that uses a Kinect to take a 3D scan in one continuous rotation of the object, instead of taking 40 or 50 pictures. First attempt picture is attached.

3D Scanner KinectFusion - Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera [28C3] by CCCen. This project investigates techniques to track the 6DOF position of handheld depth-sensing cameras, such as Kinect, as they move through space, and to perform high-quality 3D surface reconstructions for interaction. While depth cameras are not conceptually new, Kinect has made such sensors accessible to all. Speaker: David Kim. EventID: 4928. Event: 28th Chaos Communication Congress (28C3) by the Chaos Computer Club [CCC]. Location: Berlin Congress Center [bcc]; Alexanderstr. 11; 10178 Berlin; Germany. Language: English. Start: 29.12.2011 15:15:00 +01:00. License: CC-by-nc-sa

FABFUSE. Slides on SlideShare. Welcome to FabFuse Etherpad Lite! This pad text is synchronized as you type, so that everyone viewing this page sees the same text. After the sessions, the notes will be uploaded to the FabFuse website. Ramun Berger on low-cost 3D scanning: the focus is on taking pictures with a camera or Kinect. Some systems produce decent models of small things; the software is free (as in beer) but closed source; some are toys that don't really work. Creating point clouds with a Kinect: scanning takes about 5 minutes, plus about 30 minutes of uploading. The Autodesk 123D apps family: 123D, 123D Catch, 123D Sculpt, 123D Make. You take about 20-50 pictures of a model; sometimes nice results, sometimes crap; it takes between 2 minutes and 2 hours to construct, and you get a mail when it is ready. Closed source, Windows only; for scanning big objects, from 10 cm up to 2 cars. A webcam with a line laser and a turntable: we didn't get the prototype running; meant to scan small objects at the press of one button, using OpenKinect or openFrameworks.

Automate your Meshlab workflow with MLX filter scripts. Meshlab is a great program for loading and editing XYZ point cloud data and creating polygon meshes. It also does a good job as a 3D file format converter. After you use Meshlab for a while, you will typically apply the same filter settings over and over again for every project. Meshlab lets you automate your workflow by creating your own Meshlab .MLX filter scripts. These filter scripts are in XML format and can be run from the Meshlab GUI or from the command-line version of Meshlab, called meshlabserver. If you run into trouble using Meshlab, there is a SourceForge discussion forum that is quite helpful. I discovered Meshlab when I started processing aerial images with Bundler / PMVS and needed an open-source program to edit Stanford Triangle Format .PLY files. This blog post is probably one of the most hyperniche topics I have written about. Creating a Meshlab MLX Filter Script: you can edit a Meshlab filter script by selecting the Show Current Filter Script menu item. Example command:
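The excerpt cuts off before the post's actual example. As a hedged illustration only (the filter names and parameter here are typical examples, not the post's actual script), an .MLX file is a small XML document along these lines:

```xml
<!DOCTYPE FilterScript>
<FilterScript>
  <!-- filters run top to bottom, in order -->
  <filter name="Remove Duplicate Vertices"/>
  <filter name="Surface Reconstruction: Poisson">
    <Param type="RichInt" name="OctDepth" value="8"/>
  </filter>
</FilterScript>
```

A script like this can then be applied in batch with the command-line tool the post mentions, e.g. `meshlabserver -i input.ply -o output.ply -s myscript.mlx` (exact filter and parameter names vary by Meshlab version, which is why saving them from Show Current Filter Script is the safest route).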

Rendering results with Meshlab | hackengineer. Posted: 26th February 2012 by hackengineer in Computer Vision. Tags: 3D, meshlab, point cloud. Meshlab is pretty great for 3D point clouds, and it's free! Here are a few steps that really help make point clouds look good. Open Meshlab and open the .xyz file from the 3D camera. Delete any points that don't look like they belong (if you only see a small group of points, you are probably zoomed out really far due to a rogue point; delete it and zoom in). Orient the point cloud so that it represents the original scene (looking straight at it). Let's add some color based on depth to make it stand out (darker as depth increases). We should have a good-looking point cloud at this point. And there you have it!
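The "darker as depth increases" coloring is done inside the Meshlab GUI in the post; as a rough stand-alone sketch of the same idea, assuming a plain list of (x, y, z) points and a linear gray mapping of my own choosing, it might look like this in Python:

```python
# Color an XYZ point cloud by depth: map each point's z to a gray value,
# darker as depth (z) increases, yielding "x y z r g b" records.

def shade_by_depth(points):
    zs = [p[2] for p in points]
    zmin, zmax = min(zs), max(zs)
    span = (zmax - zmin) or 1.0             # avoid divide-by-zero on flat clouds
    shaded = []
    for x, y, z in points:
        t = (z - zmin) / span               # 0 = nearest, 1 = farthest
        gray = int(round(255 * (1.0 - t)))  # farther point -> darker gray
        shaded.append((x, y, z, gray, gray, gray))
    return shaded

# Toy cloud: three points at increasing depth.
cloud = [(0.0, 0.0, 0.5), (1.0, 0.0, 1.5), (0.0, 1.0, 2.5)]
for x, y, z, r, g, b in shade_by_depth(cloud):
    print("%.3f %.3f %.3f %d %d %d" % (x, y, z, r, g, b))
```

Writing those lines back out as an .xyz file with per-point RGB gives roughly the effect described above, without opening the GUI.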

SLAMDemo - rtabmap - Demo of 6DoF RGBD SLAM using RTAB-Map - RTAB-Map: Real-Time Appearance-Based Mapping. In this example, we use a hand-held Kinect sensor: we assemble the map incrementally from the 6D visual odometry; then, when a loop closure is detected by RTAB-Map, TORO is used to optimize the map with the new constraint. We did two loops in the IntRoLab robotics laboratory of the 3IT. The first image below is the final map without optimizations. The second image is the final map with TORO optimizations using the RTAB-Map detected constraints. The ROS bag for the demo is here: kinect_720.bag. This only works with ROS, tested under Ubuntu 12.04 / ROS Fuerte (edit: also works on Groovy).
$ svn checkout http://utilite.googlecode.com/svn/trunk/ros-pkg utilite
$ svn checkout http://rtabmap.googlecode.com/svn/tags/6dSLAMdemo/ros-pkg rtabmap
$ svn checkout http://rtabmap.googlecode.com/svn/tags/6dSLAMdemo/visual_slam visual_slam
$ rosdep update
$ rosdep install visual_slam
$ rosmake visual_slam
To try the demo, type:
$ rosbag play --clock kinect_720.bag (visual_slam_demo.launch)
