Documentation - Point Cloud Library (PCL)

This tutorial demonstrates how to use KinFu Large Scale to produce a mesh (in meters) of a room, and to apply texture information in post-processing for a more visually appealing result.


The first part of this tutorial shows how to obtain the TSDF cloud from KinFu Large Scale. The second part shows how to convert the TSDF cloud into a uniform mesh. The third part shows how to texture the obtained mesh using the RGB images and poses obtained from KinFu Large Scale.

TSDF Cloud

This section describes the TSDF cloud, which is the expected output of KinFu Large Scale. You may be wondering: "What is the difference between a TSDF cloud and a normal point cloud?"

Figure 1: The cube is subdivided into a set of voxels.

As you may already know, the TSDF volume is stored in the GPU as a voxel grid. At the time of data extraction, the grid is traversed from front to back, and the TSDF value is checked for each voxel.

Figure 2: A representation of the TSDF volume grid in the GPU.

ROS-Users - Re: How to connect the transform between /robot_base and /map ? (Ming Liu)
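The front-to-back traversal described above can be sketched as follows. This is a simplified CPU-side illustration with a hypothetical `TsdfGrid` struct, not PCL's actual GPU implementation; voxels whose truncated signed distance is near zero lie on the surface and are the ones exported into the TSDF cloud.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A minimal sketch of a TSDF voxel grid (hypothetical, CPU-side).
// Each voxel stores a truncated signed distance: negative behind the
// surface, positive in front of it, clamped to [-1, 1].
struct TsdfGrid {
    int dim;                  // voxels per axis (real grids use e.g. 512)
    std::vector<float> tsdf;  // dim*dim*dim distance values

    explicit TsdfGrid(int d) : dim(d), tsdf(d * d * d, 1.0f) {}

    float& at(int x, int y, int z) {
        return tsdf[(z * dim + y) * dim + x];
    }
};

// Traverse the grid front to back (increasing z) and collect the
// linear indices of voxels whose TSDF value is close to the zero
// crossing, i.e. the voxels that lie on the reconstructed surface.
std::vector<int> extract_surface_voxels(TsdfGrid& g, float eps) {
    std::vector<int> indices;
    for (int z = 0; z < g.dim; ++z)
        for (int y = 0; y < g.dim; ++y)
            for (int x = 0; x < g.dim; ++x)
                if (std::fabs(g.at(x, y, z)) < eps)
                    indices.push_back((z * g.dim + y) * g.dim + x);
    return indices;
}
```

The real extraction also keeps the TSDF value and weight per point, which is what distinguishes a TSDF cloud from an ordinary point cloud.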

ROS-Users - Subscribing to PR2 Base Odometry

Pretty much, here's code that compiles:

#include <ros/ros.h>
#include <nav_msgs/Odometry.h>
#include <cmath>

void odomCB(const nav_msgs::Odometry::ConstPtr& msg)
{
    double xPos = msg->pose.pose.position.x;
    double yPos = msg->pose.pose.position.y;
    // get quaternion angular information
    double x = msg->pose.pose.orientation.x;
    double y = msg->pose.pose.orientation.y;
    double z = msg->pose.pose.orientation.z;
    double w = msg->pose.pose.orientation.w;
    // convert to yaw (heading about the z axis)
    double angle = atan2(2 * (y * x + w * z), w * w + x * x - y * y - z * z);
    ROS_ERROR("%f %f %f", xPos, yPos, angle);
}

On Wed, Mar 31, 2010 at 5:40 PM, Sebastian Castro <[hidden email]> wrote: Is there any way to subscribe to the pose of the base of a simulated PR2 robot?
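The quaternion-to-yaw conversion in that callback can be checked in isolation. Here is a minimal plain-C++ sketch of the same formula, with no ROS dependency:

```cpp
#include <cassert>
#include <cmath>

// Yaw (rotation about the z axis) from a unit quaternion (x, y, z, w),
// using the same formula as the odometry callback above.
double quat_to_yaw(double x, double y, double z, double w) {
    return std::atan2(2.0 * (y * x + w * z),
                      w * w + x * x - y * y - z * z);
}
```

The identity quaternion (0, 0, 0, 1) yields a yaw of 0, and a 90-degree rotation about z (x = y = 0, z = w = sqrt(0.5)) yields pi/2.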


turtlebot_navigation/Tutorials/Autonomously navigate in a known map

Description: This tutorial describes how to use the TurtleBot with a previously known map.


Tutorial Level: BEGINNER

Prior Setup

This tutorial assumes you have a map of your work area set up, such as the one generated by the previous tutorial. It also assumes that you have a TurtleBot which has already been brought up in the TurtleBot bringup tutorials.

Launch the amcl app

On the TurtleBot, run the navigation demo app, passing in your generated map file:

roslaunch turtlebot_navigation amcl_demo.launch map_file:=/tmp/my_map.yaml

On your Workstation

This assumes you have ROS on your workstation and that ROS_MASTER_URI has been set to point to your TurtleBot. Launch rviz:

# Pre-Groovy
rosrun rviz rviz -d `rospack find turtlebot_navigation`/nav_rviz.vcg

# Groovy or later
roslaunch turtlebot_rviz_launchers view_navigation.launch

In RVIZ

Localize the TurtleBot

When starting up, the TurtleBot does not know where it is. You will see a collection of arrows, which are hypotheses of the position of the TurtleBot.
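The arrows in RViz visualize the pose hypotheses maintained by amcl's particle filter. A minimal 1-D sketch of the idea (a hypothetical toy sensor model, not amcl's actual code): particles are weighted by how well they explain a measurement and then resampled, so the cloud of hypotheses contracts toward the true position.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One pose hypothesis on a 1-D corridor (hypothetical toy model).
struct Particle { double pos; double weight; };

// Weight each particle by how well it explains a position measurement,
// assuming Gaussian measurement noise with standard deviation sigma.
void weight_particles(std::vector<Particle>& ps, double measured, double sigma) {
    double total = 0.0;
    for (auto& p : ps) {
        double err = p.pos - measured;
        p.weight = std::exp(-err * err / (2 * sigma * sigma));
        total += p.weight;
    }
    for (auto& p : ps) p.weight /= total;  // normalize to sum to 1
}

// Low-variance resampling: high-weight hypotheses are duplicated and
// low-weight ones die out, so the arrow cloud converges.
std::vector<Particle> resample(const std::vector<Particle>& ps) {
    std::vector<Particle> out;
    size_t n = ps.size();
    double step = 1.0 / n;
    double c = ps[0].weight;
    size_t i = 0;
    for (size_t m = 0; m < n; ++m) {
        double u = step / 2 + m * step;  // deterministic start, for clarity
        while (u > c) c += ps[++i].weight;
        out.push_back({ps[i].pos, step});
    }
    return out;
}
```

Driving the robot around (or giving a 2D Pose Estimate in RViz) plays the role of the measurement here: each update concentrates the surviving hypotheses around poses consistent with what the sensors see.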

OctoMap - 3D occupancy mapping. 3D model representation.

rgbdslam - 3D mapping with a Kinect-style depth camera. (fuerte: documentation generated on December 26, 2012; not yet indexed for electric, groovy, or hydro.)