OPENCV \ library
OpenCV is an open source computer vision library originally developed by Intel. It is free for commercial and research use under a BSD license. The library is cross-platform and runs on Mac OS X, Windows, and Linux. This implementation is not a complete port of OpenCV; it currently provides:
- real-time capture
- video file import
- basic image treatment (brightness, contrast, threshold, …)
- object detection (face, body, …)
- blob detection
Future versions will include more advanced functions such as motion analysis, object and color tracking, and multiple OpenCV object instances. For more information about OpenCV, visit the Open Source Computer Vision Library Intel webpage, the OpenCV Library Wiki, and the OpenCV Reference Manual (PDF). Optionally, you can download the OpenCV Processing examples or, for pure Java users, the OpenCV Java samples. The OpenCV Processing Library is a project of the Atelier hypermédia at the École Supérieure d'Art d'Aix-en-Provence.
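The basic image treatments listed above (brightness, contrast, threshold) are simple per-pixel operations. Here is a rough Python sketch of what they do, independent of the library's actual API; function names and the 0–255 grayscale assumption are illustrative only:

```python
def adjust(pixels, brightness=0, contrast=1.0):
    """Apply contrast (gain) then brightness (offset) to 0-255 gray values, clamped."""
    return [max(0, min(255, int(round(contrast * p + brightness)))) for p in pixels]

def threshold(pixels, t=128):
    """Binarize: 255 where the value is at or above t, else 0."""
    return [255 if p >= t else 0 for p in pixels]
```

A cascade of these two (adjust first, then threshold) is the usual preprocessing step before blob detection.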

BlobDetection library / v3ga
This library is aimed at doing computer vision by finding 'blobs' in an image, that is to say, areas whose brightness is above or below a particular value. It can compute blobs' edges as well as blobs' bounding boxes. However, this library does not perform blob tracking; it only tries to find all blobs in each frame it is fed. It was primarily developed for Processing but can be used in any Java program.
October 2012: Updated the examples of the library to work with the latest version of Processing (2.0b3). The examples still work with Processing 1.5.1, but see the instructions in the code.
August 2011: Wow.
September 2006: Published this new website.
December 2005: Added computation of triangles for each blob, allowing fast drawing of filled blobs.
May 2005: Released as a .jar, which can be included in Processing (BETA).
June 2004: First release of an EdgeDetection procedure.
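A minimal Python sketch of the idea (not v3ga's actual code): mark pixels whose brightness is at or above a threshold, flood-fill the connected areas, and record each blob's bounding box. The function name and 4-connectivity are assumptions for illustration:

```python
from collections import deque

def find_blobs(image, thresh=128):
    """Find 4-connected blobs of pixels with brightness >= thresh in a 2D
    grayscale image (list of rows). Returns bounding boxes as
    (min_x, min_y, max_x, max_y) tuples, in scan order of discovery."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] >= thresh and not seen[sy][sx]:
                # flood-fill this blob, expanding its bounding box as we go
                q = deque([(sx, sy)])
                seen[sy][sx] = True
                min_x = max_x = sx
                min_y = max_y = sy
                while q:
                    x, y = q.popleft()
                    min_x, max_x = min(min_x, x), max(max_x, x)
                    min_y, max_y = min(min_y, y), max(max_y, y)
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if 0 <= nx < w and 0 <= ny < h and not seen[ny][nx] \
                                and image[ny][nx] >= thresh:
                            seen[ny][nx] = True
                            q.append((nx, ny))
                boxes.append((min_x, min_y, max_x, max_y))
    return boxes
```

Note that, exactly as the text says of the library, this finds blobs independently in each frame; tracking (matching blobs across frames) would be a separate step.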

OpenCV History
The goals of the project were described as follows:
- Advance vision research by providing not only open but also optimized code for basic vision infrastructure. No more reinventing the wheel.
- Disseminate vision knowledge by providing a common infrastructure that developers could build on, so that code would be more readily readable and transferable.
- Advance vision-based commercial applications by making portable, performance-optimized code available for free, with a license that did not require applications to be open or free themselves.
The first alpha version of OpenCV was released to the public at the IEEE Conference on Computer Vision and Pattern Recognition in 2000, and five betas were released between 2001 and 2005. The second major release of OpenCV was in October 2009. In August 2012, support for OpenCV was taken over by a non-profit foundation, which maintains a developer[2] and user site.[3]

Welcome \ Benjamin Bojko
It's been quite a while since my last webcam tracking post, but some magic has happened behind the curtains, and two projects emerged during the last weeks which I don't want to leave unmentioned. The guys at Neue Digitale did a pretty sweet job with their Audi Quattro Urban Curving Online Special, which offers a game that lets you steer your way down steep roads by tilting your head left and right with the help of my tracking code. Mariusz Kreft and Matthias Gomille from argonautenG2 used my code for an awesome banner prototype as an entry for the Young Lions Award in Cannes. Let's hope they rock the contest. Here's their Home Sweet Home Cinema website. I've got a small update in the pipe for my tracking code, which both of these projects are using.

OpenCV::detect() \ language (API)
Detect object(s) in the current image depending on the current cascade description. This method finds rectangular regions in the current image that are likely to contain objects the cascade has been trained to recognize, and returns the found regions as a sequence of rectangles. The default parameters (scale=1.1, min_neighbors=3, flags=0) are tuned for accurate (but slow) object detection. For faster operation on real-time images, preferable settings are: scale=1.2, min_neighbors=2, flags=HAAR_DO_CANNY_PRUNING, min_size=
Mode of operation flags:
HAAR_SCALE_IMAGE: for each scale factor used, the function will downscale the image rather than "zoom" the feature coordinates in the classifier cascade.
HAAR_DO_CANNY_PRUNING: if set, the function uses a Canny edge detector to reject image regions that contain too few or too many edges and thus cannot contain the searched object.
HAAR_FIND_BIGGEST_OBJECT: if set, the function finds only the largest object (if any) in the image.
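One reason scale=1.2 is faster than the default 1.1: the cascade is swept over fewer window sizes between its base size and the image size. A small illustrative Python sketch (not the library's API; the 24-pixel base window is a hypothetical value):

```python
def detection_scales(image_w, image_h, window=24, scale=1.1):
    """List the successive window sizes a cascade would sweep over an image.
    Starting from the cascade's base window, each step multiplies the window
    by `scale` until it no longer fits inside the image."""
    sizes = []
    s = float(window)
    while s <= min(image_w, image_h):
        sizes.append(int(s))
        s *= scale
    return sizes
```

For a 100x100 image, scale=1.2 yields roughly half as many pyramid levels as scale=1.1, which is why the coarser factor trades some accuracy for speed.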

blobscanner - A blob detection library for Processing
Blobscanner is a small, lightweight library for the Processing programming environment which can be used for blob detection and analysis in images and videos. It also performs quite well when the signal analyzed is a live stream. Please refer to the project blog for information about the status of this project. The following image shows an example of hand tracking using Blobscanner in conjunction with a skin-detector algorithm. The hand's center of mass can be easily computed with Blobscanner. In this video, Blobscanner is used to create a simple artistic effect on a webcam video stream. NOTE: the code repository for the new version (v. 0.1-alpha) is hosted on github. If you wish, you can donate to this project with Flattr, or you can use PayPal on the blobscanner site.
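The center of mass mentioned above is just the mean of the blob's pixel coordinates. A tiny Python sketch (not Blobscanner's API; the function name is illustrative):

```python
def center_of_mass(pixels):
    """Centroid of a blob given its pixel coordinates as [(x, y), ...]."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    return cx, cy
```

For hand tracking, this centroid computed per frame over the skin-colored blob gives a single stable point to follow.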

vision_opencv
electric: Documentation generated on January 11, 2013 at 11:58 AM
fuerte: Documentation generated on December 28, 2013 at 05:43 PM
groovy: Documentation generated on March 27, 2014 at 12:20 PM
hydro: Documentation generated on March 27, 2014 at 01:33 AM
indigo: Documentation generated on March 27, 2014 at 01:22 PM
The vision_opencv stack provides packaging of the popular OpenCV library for ROS. It provides several packages:
- cv_bridge: bridge between ROS messages and OpenCV.
- image_geometry: collection of methods for dealing with image and pixel geometry.
In order to use ROS with OpenCV, please see the cv_bridge package. As of electric, OpenCV is a system dependency. To use OpenCV in your ROS code, you just need to add a dependency on opencv2 and find_package it in your CMakeLists.txt as you would for any third-party package. For issues specific to OpenCV, report them against OpenCV itself.
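The CMakeLists.txt addition described above might look like the following sketch; the target name my_vision_node is a placeholder, and the exact macro set depends on your ROS distro and build system:

```cmake
# Locate the system OpenCV installation (the opencv2 dependency above).
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})

# Link your node against OpenCV; "my_vision_node" is a placeholder target.
target_link_libraries(my_vision_node ${OpenCV_LIBS})
```

Remember to also declare the dependency in your package manifest so rosdep can resolve it.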

Exporting for iPhone using Air 2.7 and FlashDevelop - Part Two, Creating an iPhone Project | Code and Visual
Continuing on from Part One of the Exporting for iPhone using Air 2.7 and FlashDevelop tutorial, you should now have FlashDevelop primed with the Flex/Air 2.7 SDK, and you're ready to start building your amazing iPhone app. In Part Two you will see how to set up an iPhone Air project, which will allow you to export an SWF ready to be packaged up into an .ipa file (the file format for iPhone apps).
Developing for iPhone (and mobile in general)
We'll skip through actually coding your application, as this is a topic in and of itself, but for the most part it's no different from coding any other SWF; you just have a few more considerations to make, e.g. the touch interface and, importantly, memory management. I don't mean to brush over the topic, but it will have to be covered in another tutorial.
Installing an iPhone/Air Template into FlashDevelop
In order to create iPhone projects, the easiest thing to do is to install a template into FlashDevelop.

diewald_CV_kit
A library by Thomas Diewald for the Processing programming environment. Last update: 13/12/2012.
This library contains tools that are used in the field of computer vision. It is not a wrapper of OpenCV or other libraries, so some features may be missing (and may be implemented in the future). It is designed to be very fast, so it can be used for real-time applications (webcam tracking, Kinect tracking, …). It also works very well in combination with the Kinect library (dlibs.freenect). The examples that come with the library demonstrate:
- Kinect 3D/2D tracking
- simple marker tracking
- image-blob tracking

HandVu Application Documentation
This document assumes that you have successfully installed HandVu and at least one of the sample applications on your system. Please read at least the basics and common functionality. While a lot of functionality is common to all sample applications, some key shortcuts do not work in all applications, indicated with a (*).
Basics: Setup and Hand Postures
Camera: HandVu is designed for a camera that views the space in front of a sitting or standing person from a top-down view.
Hand detection: The hand is detected only in a standard posture and orientation with respect to the camera, called the closed posture.
Hand tracking: Once the hand has been detected, you can move the hand around in any posture.
Posture recognition: All of the six recognized postures can be performed at any time during tracking, and they will be recognized.
Common Functionality
GestureServer

Canny edge detector
The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986. Canny also produced a computational theory of edge detection explaining why the technique works.
Development of the Canny algorithm
Canny's aim was to discover the optimal edge detection algorithm, one satisfying three criteria:
- good detection: the algorithm should mark as many real edges in the image as possible.
- good localization: edges marked should be as close as possible to the edge in the real image.
- minimal response: a given edge in the image should only be marked once, and where possible, image noise should not create false edges.
Stages of the Canny algorithm
Noise reduction: the image is smoothed by passing a Gaussian mask across each pixel; a typical example is a 5x5 Gaussian filter with σ = 1.4. (Figure: the image after a 5x5 Gaussian mask has been passed across each pixel.)
Finding the intensity gradient of the image
Non-maximum suppression
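The noise-reduction stage can be sketched in Python: build a normalized 5x5 Gaussian mask with σ = 1.4 (the values named in the text) and convolve it across the image. This is a plain illustrative implementation, not Canny's original code; the border handling (leave edges untouched) is a simplification:

```python
import math

def gaussian_kernel(size=5, sigma=1.4):
    """Build a normalized size x size Gaussian mask (entries sum to 1)."""
    half = size // 2
    kernel = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
               for x in range(-half, half + 1)]
              for y in range(-half, half + 1)]
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]

def convolve(image, kernel):
    """Slide the kernel over a 2D grayscale image (list of rows); pixels too
    close to the border are left unchanged for simplicity."""
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(oy, h - oy):
        for x in range(ox, w - ox):
            out[y][x] = sum(kernel[j][i] * image[y + j - oy][x + i - ox]
                            for j in range(kh) for i in range(kw))
    return out
```

Because the mask is normalized and radially symmetric, flat regions are unchanged while high-frequency noise is averaged away, which is exactly what the later gradient stage needs.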