Air Hockey Robot (a 3D printer hack)

After the last B-ROBOT project, this is what I’ve been doing over the last few months… really fun. Everything started when I built my 3D printer.
First came the possibility of designing and building my own parts, and second the question: how could I hack the components of a 3D printer to make something different? I have seen several interesting projects of robots that paint or manufacture PCBs, but I was looking for something else. My daughter loves the air hockey game and I love robotics, so one day an idea was born in my mind: could I build one? It seemed very complicated, with many unresolved questions (puck detection?). Based on the idea of using standard RepRap 3D printer parts (NEMA17 stepper motors, drivers, an Arduino Mega, RAMPS, belts, bearings, rods, printed pieces) I started to develop the project. I bought some wood boards and slats and began to build the table.
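One of those open questions, intercepting the puck, can be illustrated with a toy calculation: given a puck position and velocity (say, from a camera), predict where it will cross the line the robot defends, folding the trajectory at the side walls. This is only a sketch under assumed table dimensions, not the project's actual firmware; all names and numbers below are illustrative.

```python
# Hypothetical sketch: predict where the puck will cross the robot's
# defense line, reflecting its path off the side walls. Dimensions and
# names are illustrative, not taken from the actual project.

TABLE_WIDTH = 600.0   # mm, distance between the side walls (assumed)
DEFENSE_Y = 60.0      # mm, y coordinate of the line the robot defends (assumed)

def predict_impact_x(x, y, vx, vy):
    """Return the x position where the puck reaches DEFENSE_Y,
    folding the trajectory at the walls, or None if it moves away."""
    if vy >= 0:                       # puck moving away from the robot
        return None
    t = (DEFENSE_Y - y) / vy          # time until the puck reaches the line
    raw_x = x + vx * t                # x position ignoring the walls
    # Fold the unbounded x back into [0, TABLE_WIDTH] (mirror reflections)
    period = 2 * TABLE_WIDTH
    folded = raw_x % period
    if folded > TABLE_WIDTH:
        folded = period - folded
    return folded

if __name__ == "__main__":
    # Puck in the middle, moving straight down: impact at the same x.
    print(predict_impact_x(300.0, 500.0, 0.0, -1000.0))   # 300.0
    # Angled shot that bounces off the right wall on its way down.
    print(predict_impact_x(300.0, 500.0, 1000.0, -1000.0))
```

The folding trick handles any number of wall bounces at once, which matters because a fast angled shot can rebound more than once before reaching the robot.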
Meanwhile I was designing the robot parts. Everything mounted and… working! (Air hockey 3D printed parts, by Jose Julio.)

How to build a ping-pong ball display

If you’ve been lusting after your own glowing display, we’re here to help by sharing some simple building techniques that will result in an interesting project like the one you see above.
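One technique worth knowing for any grid-of-LEDs build like this: a single addressable strip is usually wired back and forth across the rows ("serpentine"), so odd rows run in the opposite direction. A minimal sketch of that coordinate mapping, with illustrative names not taken from this particular build:

```python
# Hypothetical helper: many LED grid builds wire one continuous strip
# back and forth across the rows, so even rows run left-to-right and
# odd rows right-to-left. Names and dimensions are illustrative only.

def strip_index(x, y, width):
    """Map grid coordinates (x, y) to an index along a serpentine strip."""
    if y % 2 == 0:                        # even rows: left to right
        return y * width + x
    return y * width + (width - 1 - x)    # odd rows: reversed

if __name__ == "__main__":
    width = 8
    # First row maps straight through...
    print([strip_index(x, 0, width) for x in range(width)])
    # ...while the second row runs backwards along the strip.
    print([strip_index(x, 1, width) for x in range(width)])
```

With this mapping in place, drawing code can think purely in (x, y) pixels and let the helper worry about how the strip snakes through the frame.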
This is a super-accurate clock that uses ping-pong balls as diffusers for LEDs, but with a little know-how you can turn this into a full marquee display. Join me after the break where I’ll share the details of the project and give you everything you need to know to build your own.

Planning

Take some time to sit down and figure out how many pixels you need in your display. Above you can see the sketch that I drew on the back of some junk mail. I also did quite a bit of planning at this point for the electronics.

Materials

Raspberry Pi and the Camera Pi module: face recognition tutorial

An area of application of Computer Vision, one that has always fascinated people, concerns the capability of robots and computers in general to determine, recognize and interact with human counterparts.
In this article we will take advantage of the availability of cheap tools for computing and image acquisition, like the Raspberry Pi and its dedicated video camera, Camera Pi, and of open source software for image acquisition and processing, such as OpenCV and SimpleCV, which allow a high-level, and therefore quite simplified, approach to this discipline.
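The classic face detector in OpenCV (the Viola-Jones Haar cascade) stays fast on small hardware largely thanks to the integral image: once it is built, the sum of any rectangular region costs four lookups regardless of the rectangle's size. A minimal self-contained sketch of that trick, not code from this tutorial:

```python
# Sketch of the integral-image idea behind OpenCV's classic Haar-cascade
# detector: after one pass over the image, any rectangular sum is O(1).

def integral_image(img):
    """img: list of rows of pixel values. Returns an (h+1)x(w+1) summed-area table."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w-by-h region at (x, y), using four table lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

if __name__ == "__main__":
    img = [[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]]
    ii = integral_image(img)
    print(rect_sum(ii, 0, 0, 3, 3))   # 45, the whole image
    print(rect_sum(ii, 1, 1, 2, 2))   # 5 + 6 + 8 + 9 = 28
```

A cascade evaluates thousands of such rectangle features per candidate window, so making each one constant-time is what keeps the total processing volume tractable.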
In this post we present the possibility of locating, within pictures, human beings or parts of them, such as faces, eyes, noses, and so on. This functionality is available in the most advanced photo gallery applications, and it is currently being implemented in social network applications.

The Recognition Method

Despite the simplification, the processing volume is still too big to be efficient.

Lucid Dreaming with Plastic Milk Cartons

Becoming aware that you are in a dream can be difficult to accomplish.
But as [Rob] showed on his blog, monitoring the lucid experience once it happens doesn’t have to be costly. Instead, household items can be fashioned together to make a mask that senses REM sleep cycles. We were tipped off to the project by [Michael Paul Coder], who developed an algorithm to communicate inside a dream. [Rob] cut up plastic milk cartons for this ‘DreamJacker’ project and attached a webcam to produce a simple way to detect eye movements. A standard game adapter with a triangular array of white LEDs was added to the plastic cover in order to provide the illumination needed by the camera. The mask is tied to the back of the head with shoelaces and acts like an eye patch during Wake Back To Bed (WBTB) sessions. Since writing his post, [Rob] has adapted a mouse for use inside the mask cup to integrate with the LucidScribe REM FIELD-mouse plugin developed by [Michael Paul Coder].
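The core of webcam-based eye movement sensing is simple frame differencing: compare consecutive grayscale frames and flag movement when enough pixels change. A hypothetical sketch of that idea, with thresholds and names that are illustrative rather than taken from [Rob]'s setup:

```python
# Hypothetical sketch of webcam-based REM detection: difference two
# grayscale frames (lists of rows of brightness values) and flag
# movement when enough pixels change. Thresholds are illustrative.

def motion_score(prev, curr, pixel_threshold=20):
    """Fraction of pixels whose brightness changed by more than the threshold."""
    changed = total = 0
    for row_a, row_b in zip(prev, curr):
        for a, b in zip(row_a, row_b):
            total += 1
            if abs(a - b) > pixel_threshold:
                changed += 1
    return changed / total

def eyes_moving(prev, curr, motion_threshold=0.1):
    """True when more than 10% of pixels changed between the frames."""
    return motion_score(prev, curr) > motion_threshold

if __name__ == "__main__":
    still = [[100] * 4 for _ in range(4)]
    moved = [[100] * 4 for _ in range(3)] + [[180] * 4]  # bottom row changed
    print(motion_score(still, still))   # 0.0
    print(eyes_moving(still, moved))    # True: 4 of 16 pixels changed
```

In a real mask the per-pixel threshold filters out sensor noise, while the frame-level threshold separates the rapid eye movements of REM sleep from occasional small twitches.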
Open Source Marker Recognition for Augmented Reality

[Bharath] recently uploaded the source code for an OpenCV-based pattern recognition platform that can be used for augmented reality, or even robots.
It was built with C++ and utilized the OpenCV library to translate marker notations within a single frame. The program started out by focusing on one object at a time. This method was chosen to avoid creating additional arrays containing information about all of the blobs in the image, which could cause problems. Although this implementation did not track marker information across multiple frames, it did provide a nice foundation for integrating pattern recognition into computer systems.
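The "one object at a time" idea can be illustrated with connected-component labeling: given a binary image, keep only the largest blob of foreground pixels instead of building per-blob arrays for everything in the frame. This is an illustrative sketch, not [Bharath]'s code:

```python
# Illustrative sketch (not the platform's actual code): find the largest
# 4-connected blob of 1s in a binary image via breadth-first flood fill,
# so later stages can work on a single object instead of every blob.

from collections import deque

def largest_blob(img):
    """Return the set of (x, y) pixels belonging to the largest blob of 1s."""
    h, w = len(img), len(img[0])
    seen = set()
    best = set()
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] == 1 and (sx, sy) not in seen:
                blob = set()
                queue = deque([(sx, sy)])
                seen.add((sx, sy))
                while queue:                       # flood-fill one blob
                    x, y = queue.popleft()
                    blob.add((x, y))
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if 0 <= nx < w and 0 <= ny < h \
                           and img[ny][nx] == 1 and (nx, ny) not in seen:
                            seen.add((nx, ny))
                            queue.append((nx, ny))
                if len(blob) > len(best):
                    best = blob
    return best

if __name__ == "__main__":
    img = [[1, 1, 0, 0],
           [0, 1, 0, 1],
           [0, 0, 0, 1]]
    print(len(largest_blob(img)))   # 3: the L-shaped blob in the top-left
```

Restricting attention to a single component like this sidesteps the bookkeeping for every blob in the frame, which matches the design choice described above.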