OpenGL Video Tutorial - Home

OpenGL - Tutorial 09 : Blending. Introduction: blending is commonly used to make objects translucent. Viewing and understanding some blending effects requires some background on how OpenGL computes blending; that part is a little longer, so I put it in Lesson 3, and it is highly recommended reading for an accurate understanding of this tutorial. Sample uses of blending covered in this tutorial (the technical part is in Lesson 3): making an object translucent, mixing pictures, and filter effects; many other effects can be created with blending. Translucent objects are the most common use of blending. Without blending, when an object is rendered, every pixel drawn replaces the existing pixel in the frame buffer. With blending, the formula configured by glBlendFunc combines the two: srcColor * srcFactor + destColor * destFactor. In the case of multiple translucent objects, disable writing to the depth buffer (Lesson 3). You can control how translucent the object is. Mixing pictures, first method: the alpha value is 0.75.
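
To make this concrete, here is a minimal sketch of a translucent draw in legacy fixed-function OpenGL, assuming the usual GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA factor choice (the tutorial's own factor discussion is in Lesson 3):

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  /* srcColor*srcAlpha + destColor*(1-srcAlpha) */
    glDepthMask(GL_FALSE);                  /* stop writing to the depth buffer (see Lesson 3) */
    glColor4f(1.0f, 1.0f, 1.0f, 0.75f);     /* alpha 0.75, as in the picture-mixing example */
    /* ... draw the translucent geometry here ... */
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);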

HOWTO Avoid Being Called a Bozo When Producing XML. “There’s just no nice way to say this: Anyone who can’t make a syndication feed that’s well-formed XML is an incompetent fool. […] Maybe this is unkind and elitist of me, but I think that anyone who either can’t or won’t implement these measures is, as noted above, a bozo.” – Tim Bray, co-editor of the XML 1.0 specification. There seem to be developers who think that well-formedness is awfully hard, if not impossible, to get right when producing XML programmatically, and developers who can get it right and wonder why the others are so incompetent. I assume no one wants to appear incompetent or to be called names, so I hope the following list of dos and don’ts helps developers move from the first group to the second. Note about the scope of this document: it focuses on the Unicode layer, the XML 1.0 layer and the Namespaces in XML layer. Contents: Don’t think of XML as a text format; Don’t use text-based templates; Don’t use these systems for producing XML; Don’t print; Use NFC.
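
One way to stay well-formed is to build a document tree and let a serializer do the escaping rather than printing strings. The HOWTO does not prescribe a library; the following is only a small C++ sketch using pugixml as one possible choice:

    #include <iostream>
    #include "pugixml.hpp"

    int main() {
        pugi::xml_document doc;
        pugi::xml_node entry = doc.append_child("entry");
        entry.append_attribute("xmlns") = "http://www.w3.org/2005/Atom";
        // The serializer escapes markup characters for you, so the output stays well-formed.
        entry.append_child("title").text().set("Fish & chips <on sale>");
        doc.save(std::cout);   // writes the XML declaration and properly escaped content
        return 0;
    }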

Basic OpenGL Lighting, by Steve Baker. Introduction: many people starting out with OpenGL are confused by the way that OpenGL's built-in lighting works, and consequently by how colour works with it. I hope to be able to clear up some of the confusion. Lighting ENABLED or DISABLED? The first, and most basic, decision is whether to enable lighting or not: glEnable ( GL_LIGHTING ) ; ...or... glDisable ( GL_LIGHTING ) ; If it's disabled then all polygons, lines and points will be coloured according to the setting of the various forms of the glColor command. glColor3f ( 1.0f, 0.0f, 0.0f ) ; ...gets you a pure red triangle no matter how it is positioned relative to the light source(s). With GL_LIGHTING enabled, we need to specify more about the surface than just its colour: we also need to know how shiny it is, whether it glows in the dark, and whether it scatters light uniformly or in a more directional manner. glMaterial and glLight; glColorMaterial. The problem with using glMaterial to change polygon colours is three-fold:
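
As a rough illustration of the difference (a sketch in legacy fixed-function OpenGL, not taken from the article itself):

    /* Lighting disabled: glColor alone decides the colour. */
    glDisable(GL_LIGHTING);
    glColor3f(1.0f, 0.0f, 0.0f);                 /* pure red, regardless of any light source */

    /* Lighting enabled: the material, not glColor, determines the surface response. */
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    GLfloat diffuse[]  = { 1.0f, 0.0f, 0.0f, 1.0f };
    GLfloat specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
    glMaterialfv(GL_FRONT, GL_AMBIENT_AND_DIFFUSE, diffuse);
    glMaterialfv(GL_FRONT, GL_SPECULAR, specular);
    glMaterialf(GL_FRONT, GL_SHININESS, 32.0f);  /* how "shiny" the surface is */

    /* glColorMaterial lets glColor drive a material property instead. */
    glEnable(GL_COLOR_MATERIAL);
    glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);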

Home - CS Animated

rosnodejs - Program robots with JavaScript. Rosnodejs is currently deprecated. Most of my efforts for JavaScript and robotics have shifted to the Robot Web Tools project, and I highly recommend taking a look at Robot Web Tools if you are interested in putting your robot on the web: a JavaScript interface to ROS functionality, 2D tools for mapping and more, and 3D tools for robot visualization in a 3D environment. I still feel a Node.js interface into ROS is important. Rosnodejs is a Node.js module that lets you use JavaScript to interact with the Robot Operating System, an open-source robot framework used by many of the top universities and research programs around the world. Perform a range of robotic tasks, from controlling the motors on an Arduino to processing Kinect sensor data, using JavaScript and Node.js. The goal is to make the field of robotics more accessible to the countless intelligent web developers out there. One of the top frameworks for programming robots today is the Robot Operating System (ROS). How do you get started with ROS?

Mobile Autonomous Robot using the Kinect. Given a priori knowledge of the environment and of the goal position, mobile robot navigation refers to the robot's ability to safely move towards the goal using that knowledge and sensory information about the surrounding environment. In practice, for a mobile robot operating in an unstructured environment, knowledge of the environment is usually absent or only partial, so obstacle detection and avoidance are essential parts of mobile robot missions. The Kinect is not only a normal camera sensor but also a special device that can provide a depth map. The depth map is acquired through the OpenNI library and then processed with the Point Cloud Library to extract accurate information about the environment. Here is the link to the full project: (code and references in English, the rest in Vietnamese but still easy to understand from the source code). Some fun stuff using the Kinect is available on my channel.
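
The project's own code is in the linked repository; as a rough sketch of the idea (assuming the default openni_launch topic /camera/depth/points), a ROS C++ node could pull the Kinect cloud and flag anything close in front of the robot:

    #include <ros/ros.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <pcl_conversions/pcl_conversions.h>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/filters/passthrough.h>

    void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
    {
        pcl::PointCloud<pcl::PointXYZ> cloud, near;
        pcl::fromROSMsg(*msg, cloud);

        pcl::PassThrough<pcl::PointXYZ> pass;   // simple depth-window filter
        pass.setInputCloud(cloud.makeShared());
        pass.setFilterFieldName("z");           // z = distance from the camera
        pass.setFilterLimits(0.5, 1.0);         // keep points 0.5 m .. 1.0 m ahead
        pass.filter(near);

        if (!near.empty())
            ROS_WARN("Obstacle ahead: %zu points within 1 m", near.size());
    }

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "obstacle_watch");
        ros::NodeHandle nh;
        ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
        ros::spin();
    }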

Kinect + Arduino | Tanner's Website. With an Arduino Ethernet, Processing, and a Kinect, I was able to easily create this little demo where hand movement controls a servo. This is just a tiny step in my master plan to create a robot clone so that I don't have to leave my chair. The following libraries and drivers made this work, and also made it super easy for me to create it: OpenKinect; Daniel Shiffman's Processing Kinect library (he knows his stuff and has great examples on his site); Arduino Ethernet UDP send / receive string; Servo: EMAX ES08A servo. How it works: the Arduino Ethernet acquires an IP address and waits for UDP packets on a certain port. The machine with the Kinect sends packets to the Arduino that contain hand coordinate data. The Arduino then takes this data (an integer) and maps it to the range 0 to 180 degrees. The mapped value is sent to the servo.
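
A sketch of the Arduino-side logic described above; the pin, port, packet format (an ASCII integer) and the 0 to 640 input range are assumptions for illustration, not the original project's exact values:

    #include <SPI.h>
    #include <Ethernet.h>
    #include <EthernetUdp.h>
    #include <Servo.h>

    byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
    const unsigned int localPort = 8888;   // port the Kinect machine sends to (assumed)
    EthernetUDP Udp;
    Servo servo;
    char packet[16];

    void setup() {
      Ethernet.begin(mac);                 // acquire an IP address via DHCP
      Udp.begin(localPort);                // listen for UDP packets
      servo.attach(9);                     // servo signal wire on pin 9 (assumed)
    }

    void loop() {
      if (Udp.parsePacket() > 0) {
        int len = Udp.read(packet, sizeof(packet) - 1);
        packet[len] = '\0';
        int handX = atoi(packet);                  // hand coordinate sent as an ASCII integer
        int angle = map(handX, 0, 640, 0, 180);    // map the Kinect x range to 0..180 degrees
        servo.write(angle);                        // send the mapped value to the servo
      }
    }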

Getting Started with Kinect and Processing. So, you want to use the Kinect in Processing. Great. This page will serve to document the current state of my Processing Kinect library, with some tips and info. The current state of affairs: since the Kinect launched in November 2010, several models have been released. Kinect 1414: this is the original Kinect and works with the library documented on this page in Processing 2.1. Kinect 1473: this looks identical to the 1414, but is an updated model. Now, before you proceed, you could also consider using the SimpleOpenNI library and reading Greg Borenstein's Making Things See book. I'm ready to get started right now. What hardware do I need? First you need a "stand-alone" Kinect (model 1414 only for now!): Standalone Kinect Sensor. If you have a previous Kinect that came with an Xbox, it will not include the USB adapter. Kinect Sensor Power Supply. Um, what is Processing? What if I don't want to use Processing? ofxKinect, Kinect CinderBlock. More resources from the OpenKinect Project. So now what?

Kinect | Doc-Ok.org I just read an interesting article, a behind-the-scenes look at the infamous “Milo” demo Peter Molyneux did at 2009's E3 to introduce Project Natal, i.e., Kinect. This article is related to VR in two ways. First, the usual progression of overhyping the capabilities of some new technology and then falling flat on one’s face because not even one’s own developers know what the new technology’s capabilities actually are is something that should be very familiar to anyone working in the VR field. But here’s the quote that really got my interest (emphasis is mine): Others recall worrying about the presentation not being live, and thinking people might assume it was fake. Gee, sounds familiar? With the “Milo” demo, the problem was similar. The take-home message here is that mainstream games are slowly converging towards approaches that have been embodied in proper VR software for a long time now, without really noticing it, and are repeating old mistakes.

Invisible Piano (Keyboard Anywhere, a Kinect Piano). After writing my previous instructable, I was asked about installing some slightly different software to use with the Kinect. Since I'd already done it, I figured it wouldn't take too long to retrace my steps and write the instructable. After much frustration, I figured out a really easy process to get everything installed and talking. This instructable will walk you through getting a virtual keyboard working with the current release (11.04) of Ubuntu. There are other ways of doing this (which I've done in the past), but in redoing the task I found many shortcuts compared with what I did on the command line previously. If you have any questions about the command line or getting around in Ubuntu, please see my previous instructable. Also, don't type anything that appears between [ ] into the terminal; it's there for reference.

ROS and Kinect - Ubuntu Installation | Project RobotaS
ROS Installation
1.0 Install ROS
1.1 Check/add repositories
1.2 Set up your sources.list. For Ubuntu 10.10 (Maverick):
sudo sh -c 'echo "deb maverick main" > /etc/apt/sources.list.d/ros-latest.list'
1.3 Set up your keys:
wget -O - | sudo apt-key add -
1.4 Installation. Make sure you have re-indexed the ROS.org server:
sudo apt-get update
Desktop-full install (recommended): ROS, rx, rviz, robot-generic libraries, 2D/3D simulators, navigation and 2D/3D perception:
sudo apt-get install ros-electric-desktop-full
1.5 Environment setup:
echo "source /opt/ros/electric/setup.bash" >> ~/.bashrc
. ~/.bashrc
(To change the environment of your current shell only, you can type: source /opt/ros/electric/setup.bash)
Install Eclipse: Applications > Programming > Eclipse, then Window > Open Perspective > Other... > C/C++
Kinect Installation

openni_launch
Overview: this package contains launch files for using OpenNI-compliant devices such as the Microsoft Kinect in ROS. It creates a nodelet graph to transform raw data from the device driver into point clouds, disparity images, and other products suitable for processing and visualization. Starting with ROS Hydro, all the functionality of openni_launch has been moved to rgbd_launch, in order to allow other drivers such as libfreenect (freenect_launch) to use the same code. openni_launch itself contains one launch file, launch/openni.launch, which launches RGB-D processing through rgbd_launch with the OpenNI driver.
Quick start. Launch the OpenNI driver:
roslaunch openni_launch openni.launch
To visualize in rviz:
rosrun rviz rviz
Set the Fixed Frame (top left of the rviz window) to /camera_depth_optical_frame, add a PointCloud2 display, and set the topic to /camera/depth/points. Alternatively you can view the disparity image:
rosrun image_view disparity_view image:=/camera/depth/disparity
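
Once openni.launch is running, a minimal C++ subscriber can confirm that clouds are being published (a sketch; the topic name is taken from the quick start above):

    #include <ros/ros.h>
    #include <sensor_msgs/PointCloud2.h>

    void callback(const sensor_msgs::PointCloud2ConstPtr& msg)
    {
        ROS_INFO("Cloud %u x %u in frame %s",
                 msg->width, msg->height, msg->header.frame_id.c_str());
    }

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "cloud_listener");
        ros::NodeHandle nh;
        ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, callback);
        ros::spin();
    }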

Getting the Kinect to Work. This post is about how I got the Kinect to work on my machine, which runs Ubuntu 10.04 Lucid, using ROS.
What didn't work:
What works: OpenNI does support the older Xbox 360 sensor, which we had in the lab. I tried it out and it worked almost perfectly. The major steps are outlined as follows.
2) Install the OpenNI drivers using apt-get install ros-fuerte-openni-kinect, or follow the directions here. That's it and you are done.
roslaunch openni_launch openni.launch
[ INFO] [1339168119.174802439]: Number devices connected: 1
[ INFO] [1339168119.174938804]: 1. device on bus 002:21 is a Xbox NUI Camera (2ae) from Microsoft (45e) with serial id 'B00367200497042B'
If you want to visualize the RGB image, you can use:
rosrun image_view image_view image:=/camera/rgb/image_color
For the depth image use:
rosrun image_view disparity_view image:=/camera/depth_registered/disparity
Additionally, you can install rviz, the visualization utility that ships with ROS, using:
sudo apt-get install ros-fuerte-visualization
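
As an alternative to image_view, a small cv_bridge node can grab the same RGB stream (a sketch, assuming the topic names shown above):

    #include <ros/ros.h>
    #include <image_transport/image_transport.h>
    #include <cv_bridge/cv_bridge.h>
    #include <opencv2/highgui/highgui.hpp>

    void imageCallback(const sensor_msgs::ImageConstPtr& msg)
    {
        // Convert the ROS image message to an OpenCV matrix and display it.
        cv::Mat frame = cv_bridge::toCvShare(msg, "bgr8")->image;
        cv::imshow("kinect rgb", frame);
        cv::waitKey(1);
    }

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "rgb_viewer");
        ros::NodeHandle nh;
        image_transport::ImageTransport it(nh);
        image_transport::Subscriber sub =
            it.subscribe("/camera/rgb/image_color", 1, imageCallback);
        ros::spin();
    }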
