Joseph Azzam's Blog - The Workflows of Creating Game Ready Textures and Assets using Photogrammetry.
The following blog post, unless otherwise noted, was written by a member of Gamasutra's community. The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.
A couple of months ago I wrote an article on how I was able to scan an entire castle for $70, and at the end I mentioned how I was porting that castle into the Unreal Engine to use in my game World Void. Working with a huge structure made of millions of polygons can bring the most powerful software to its knees, and a lot of people have shown interest in the process I am using to port the castle into a game engine. Making the castle game-ready is a long and tedious process, and before I can explain the work that goes into splitting the castle into game assets, I found it essential to first explain the challenges of manipulating 3D scans in general.
For these results it's worth noting my computer specifications.
Retopology and Mesh Editing Tools: Instant Meshes, ZBrush.
Realtime 3D Face Scanning.
3D Printering: Scanning 3D models.
Building your own 3D scanner out of off-the-shelf parts.
So you have an animation and simulation project, and want to scan people to get their high-resolution 3D meshes. At that point you have a wide variety of options based on different capture technologies: big bulky machines, hand-held devices, and scanners that can take quite a while to obtain a scan. So what do you do when you don't necessarily have a lot of permanent space, when you ideally want to obtain a scan in a single shot, and you want to do all that on a budget? Well, you build your own solution, of course. You visit your local camera vendor and empty their basket of 64 Canon PowerShot A1400 cameras, which should suffice to build 8 portable 3D scanning poles.
Throw in a significant number of USB cables, add some USB hubs, grab some wood and hardware from your local DIY store, add some bright LED strips for lighting, and you have all the ingredients necessary to build your own scanner.
The construction:
And there you have it. The first results:
Automate your Meshlab workflow with MLX filter scripts.
Meshlab is a great program for loading and editing XYZ point cloud data and creating polygon meshes.
It also does a good job as a 3D file format converter. After you start using Meshlab for a while you will typically use the same filter settings over and over again for every project. Meshlab allows you to automate your workflow by creating your own Meshlab .MLX filter scripts. These filter scripts are in XML format and can be run from the Meshlab GUI or from the command-line version of Meshlab called meshlabserver. If you run into trouble using Meshlab there is a SourceForge discussion forum that is quite helpful. I discovered Meshlab when I started processing aerial images with Bundler / PMVS and needed an open-source program to edit Stanford Triangle Format .PLY files. This blog post is probably one of the most hyperniche topics I have written about.
Creating a Meshlab MLX Filter Script
You can edit a Meshlab filter by selecting the Show Current Filter Script menu item.
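As an illustration of the format (filter and parameter names vary between MeshLab versions, so treat the name below as a placeholder and copy the exact strings from Show Current Filter Script on your install), an .MLX script is just an XML list of filters:

```xml
<!DOCTYPE FilterScript>
<FilterScript>
  <!-- filter names must match your MeshLab version exactly;
       copy them from Show Current Filter Script -->
  <filter name="Remove Duplicate Vertices"/>
</FilterScript>
```

A script like this can then be applied headlessly with meshlabserver, e.g. `meshlabserver -i input.ply -o output.ply -s script.mlx`.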
Example command:
ReconstructMe and Realistic 3D Scans | 3D Printing at UMW.
May 10, 2012, in Scanning, by Tim Owens
I've been playing recently with a great new piece of software called ReconstructMe, which is free for now and Windows only. It uses the Xbox Kinect (an incredibly worthwhile investment for 3D scanning) to create 3D models. The greatest thing about this software is that it stitches scans together in realtime, so 360-degree scanning is easier than it has ever been, and the results are stunning. For this print I sat in a rolling chair with the Kinect mounted on a tripod, aimed at an angle, and slowly turned 360 degrees to build it. In many ways it feels like the advances being made in this field are so fast moving that it's hard to keep up.
Testing ReconstructMe - how to use Kinect to capture 3D models in real-time.
Skanect by Manctl | 3D Scanning Made Easy.
Demonstration of RGBDemo 0.7 on Ubuntu 12.04 - 3D reconstruction using Kinect.
Installing libfreenect
Installing OpenNI, SensorKinect, the Avin2 sensor patch, and NITE (note: only 1.5.x versions work with the RGBDemo source, not the GitHub version)
Installing PCL 1.6
Installing OpenCV 2.3.1
Installing RGBDemo
Installing Qt
Note: the PCL 1.6, OpenCV 2.3.1 and Qt libraries should be downloaded from the Ubuntu repositories, otherwise there will be a conflict.
Getting started with OpenKinect (libfreenect)
Here is the list of Ubuntu Kinect installation commands.
Install the following dependencies: git-core, cmake, libglut3-dev, pkg-config, build-essential. The first line in the listing below takes care of this step:

sudo apt-get install git-core cmake libglut3-dev pkg-config build-essential libxmu-dev libxi-dev libusb-1.0-0-dev
git clone
cd libfreenect
mkdir build
cd build
cmake ..
make
sudo make install
sudo ldconfig /usr/local/lib64/
sudo glview

For Ubuntu 12.04.1, freeglut* works.
DSLR + DEPTH Filmmaking | Home.
SLAMDemo - rtabmap - Demo of 6DoF RGBD SLAM using RTAB-Map - RTAB-Map: Real-Time Appearance-Based Mapping.
In this example, we use a hand-held Kinect sensor: we assemble the map incrementally from the 6D visual odometry, then when a loop closure is detected using RTAB-Map, TORO is used to optimize the map with the new constraint.
We did two loops in the IntRoLab robotics laboratory of the 3IT. The first image below is the final map without optimizations. The second image is the final map with TORO optimizations using the constraints detected by RTAB-Map. The third and fourth images are closer views of the map before and after optimization. The ROS bag for the demo is here: kinect_720.bag. This only works with ROS, tested under Ubuntu 12.04 / ROS Fuerte (edit: also works on Groovy). Here are the full steps to get everything installed (in your "ROS_PACKAGE_PATH"). To try the demo, type:

$ rosbag play --clock kinect_720.bag

(visual_slam_demo.launch) To try with a Kinect (with the freenect or openni stacks respectively): (visual_slam_freenect.launch) (visual_slam_openni.launch)
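The loop-closure idea above — drifting odometry corrected by one extra constraint spread over the whole trajectory — can be illustrated with a toy 1-D pose graph. This sketch is my own illustration, not part of RTAB-Map or TORO; it just solves the graph with linear least squares:

```python
import numpy as np

# Toy 1-D pose graph: poses x1..x4, with x0 fixed at 0.
# Drifting odometry claims each step is +1.1; a loop-closure
# measurement says the whole 4-step loop is really 4.0 long.
A = np.array([
    [ 1,  0,  0,  0],   # x1 - x0 = 1.1 (odometry)
    [-1,  1,  0,  0],   # x2 - x1 = 1.1
    [ 0, -1,  1,  0],   # x3 - x2 = 1.1
    [ 0,  0, -1,  1],   # x4 - x3 = 1.1
    [ 0,  0,  0,  1],   # x4 - x0 = 4.0 (loop closure)
], dtype=float)
b = np.array([1.1, 1.1, 1.1, 1.1, 4.0])

# Least-squares graph optimization distributes the drift evenly.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # every step shrinks from 1.1 to 1.02
```

Real pose-graph optimizers do the same thing in 6DoF with nonlinear constraints, but the principle — every edge is a residual, the solver balances them all — is identical.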
www.decom.ufop.br/sibgrapi2012/eproceedings/tutorials/t4-handouts.pdf
Rendering results with Meshlab | hackengineer.
Posted: 26th February 2012 by hackengineer in Computer Vision. Tags: 3D, meshlab, point cloud.
Meshlab is pretty great for 3D point clouds, and it's free! Here are a few steps that really help make the point clouds look good.
Open Meshlab and open the .xyz file from the 3D camera.
Delete any points that look like they don't belong (if you only see a small group of points you are probably zoomed out really far due to a rogue point; delete it and zoom in).
Orient the point cloud so that it represents the original scene (looking straight at it).
We will now compute the normals for each point: Filters -> Point Set -> Compute Normals for Point Sets, # of neighbors = 100, and check "flip normals w.r.t. viewpoint".
Render -> Lighting -> Light On. The points should now have a shading effect depending on their normal. To verify: Render -> Show Vertex Normals.
Let's add some color to the depth to make it stand out (darker as depth increases).
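The "Compute Normals for Point Sets" step above is, under the hood, a k-nearest-neighbour PCA: the normal at each point is the direction of least variance among its neighbours. A rough sketch of that idea in Python — my own illustration, assuming numpy; MeshLab's actual implementation differs in detail:

```python
import numpy as np

def estimate_normals(points, k=10, viewpoint=np.zeros(3)):
    """Per-point normals from PCA of the k nearest neighbours:
    the eigenvector of the smallest covariance eigenvalue."""
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dists)[:k]]  # brute force; fine for a demo
        eigvals, eigvecs = np.linalg.eigh(np.cov(nbrs.T))
        n = eigvecs[:, 0]                     # smallest-variance direction
        if np.dot(n, viewpoint - p) < 0:      # "flip normals w.r.t. viewpoint"
            n = -n
        normals[i] = n
    return normals

# Flat cloud one unit in front of the viewpoint: normals face back at it.
xs, ys = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
pts = np.c_[xs.ravel(), ys.ravel(), -np.ones(100)]
nrm = estimate_normals(pts, k=8)
print(nrm[0])  # ~ [0, 0, 1]
```

The "# of neighbors" setting in MeshLab plays the role of k here: more neighbours means smoother normals, at the cost of washing out fine detail.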
We should have a good looking point cloud at this point. And there you have it!
3D Scanner.
KinectFusion - Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera [28C3], by CCCen.
This project investigates techniques to track the 6DOF position of handheld depth-sensing cameras, such as Kinect, as they move through space, and to perform high-quality 3D surface reconstructions for interaction. While depth cameras are not conceptually new, Kinect has made such sensors accessible to all. The quality of the depth sensing, given the low-cost and real-time nature of the device, is compelling, and has made the sensor instantly popular with researchers and enthusiasts alike. The Kinect camera uses a structured-light technique to generate real-time depth maps containing discrete range measurements of the physical scene.
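That structured-light technique ultimately reduces to triangulation: the projected pattern appears shifted (disparity) in the camera image, and depth falls out of the pinhole relation depth = focal × baseline / disparity. A toy sketch — the focal length and baseline below are rough ballpark figures often quoted for the Kinect, not official specs:

```python
def disparity_to_depth(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Depth from structured-light disparity via triangulation:
    z = f * b / d (pinhole model; pixels and metres)."""
    return focal_px * baseline_m / disparity_px

# A pattern dot shifted by 10 px corresponds to a surface ~4.35 m away.
print(disparity_to_depth(10.0))
```

The inverse relationship is why depth precision degrades with distance: at large ranges a whole pixel of disparity covers a large slice of depth.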
SCENECT 5.1 Tutorial: MeshLab - Create a mesh for 3D printing.
123D Scanner - Home made 3D Scanner.
Hey - have a look at my new project HERE. In this project I built a 3D scanner that enables generating 3D models of physical objects. The files can later be viewed in 3D software (GLC Player, SketchUp, Rhino, or sites such as …), and even converted into an .STL file and 3D printed. The software for this project is completely free; I am using Autodesk's 123D Catch. Link: 123D Catch. 123D Catch is a great piece of software: it requires taking many photos of an object from all around it and uploading them into the software, and it returns a 3D file.
Since I really liked the solution but did not want to take the photos myself, I built an instrument that does it for me - hence this description. Please note that this document does not intend to explain how to use 123D Catch (that can be found here).
VirtuZoom Microscope 3D-Scanner by virtumake.
www.diy3dscan.com.
mesh.brown.edu/dlanman/research/3DIM07/lanman-SurroundLighting.pdf.
Update: Realtime 3D for you too!
3D Printed Photograph.
All of these 3D models were generated algorithmically from Processing using the ModelBuilder library by Marius Watz. This library allows you to save 3D geometries in the STL file format; STL files that form a watertight mesh can be printed by a 3D printer. To get started using this code yourself, download the latest version of the ModelBuilder library, unzip the file, and copy the folder into Processing's "libraries" folder.
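The core idea behind the sketch — each greyscale pixel becomes a vertex height, each 2×2 block of pixels becomes two triangles — can be sketched outside Processing too. A minimal Python version (my own illustration, not the actual sketch; numpy is used only to fake a greyscale image):

```python
import numpy as np

def heightfield_mesh(gray):
    """Turn a 2-D greyscale array into a triangle mesh:
    pixel value -> vertex height, each 2x2 pixel block -> 2 triangles.
    Returns (vertices, faces) with faces indexing into vertices."""
    h, w = gray.shape
    verts = [(x, y, float(gray[y, x])) for y in range(h) for x in range(w)]
    faces = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x                            # top-left of the quad
            faces.append((i, i + 1, i + w))          # first triangle
            faces.append((i + 1, i + w + 1, i + w))  # second triangle
    return verts, faces

img = np.arange(12).reshape(3, 4)  # stand-in for a greyscale photo
verts, faces = heightfield_mesh(img)
print(len(verts), len(faces))  # 12 vertices, 12 triangles
```

In the real sketch, ModelBuilder takes care of turning the equivalent geometry into a watertight STL (adding walls and a base) so the result is printable.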
If you have installed the predecessor to the ModelBuilder library (called the Unlekker library), you will need to delete it. Once this is done, restart Processing. Download the latest version of the Processing sketch from GitHub (download it as a zip by clicking on the cloud button). To run the sketch, replace the part in quotes in the following line: String name = "your_file_name_here"; with the name of your greyscale image.
Guest blog by Deepak Mehta: (123D)Catch it if you can!
Recently Autodesk released 123D Catch for the iPhone. But how useful is the app from a 3D printing perspective?
Deepak Mehta, a technology evangelist for 3D printing, puts it to the test and tells us about his experiences. Let me start with a short introduction to the functionality of 123D Catch for the iPhone. It is basically a front-end interface to the Autodesk cloud service that is also used by the Windows and iPad versions. For good results you need to take care of several things:
1) Homogeneous lighting: don't turn lights on or off during the shoot, and avoid shadows darkening the shot. The more light, the better the colors and details come out.
2) Try to impale the object on a stick, so that you can capture even the bottom of the object; otherwise you will get an object stuck to the ground, which will lead to a flat bottom.
3) Don't move the object during the shoot.
4) Try to fill the frame with the object in every photograph you take.
6) Make sure the object is always in focus.