
Welcome to opencv documentation! — OpenCV v2.4.0-beta documentation


OpenCV

OpenCV is an open source computer vision library originally developed by Intel. It is free for commercial and research use under a BSD license. The library is cross-platform and runs on Mac OS X, Windows, and Linux. It focuses mainly on real-time image processing; if it finds Intel's Integrated Performance Primitives on the system, it will use these commercially optimized routines to accelerate itself. This implementation is not a complete port of OpenCV. It currently covers:

- real-time capture
- video file import
- basic image treatment (brightness, contrast, threshold, …)
- object detection (face, body, …)
- blob detection

Future versions will include more advanced functions such as motion analysis, object and color tracking, and multiple OpenCV object instances. For more information about OpenCV, visit the Open Source Computer Vision Library Intel webpage, the OpenCV Library Wiki, and the OpenCV Reference Manual (PDF). Installation instructions · Documentation · Credits
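To make the "basic image treatment (brightness, contrast)" item concrete, here is a minimal pure-Python sketch of a brightness/contrast adjustment on 8-bit pixel values. This is an illustration of the idea, not code from the library; the alpha/beta convention and the sample pixel values are assumptions.

```python
def adjust(img, alpha=1.0, beta=0):
    """Basic image treatment: out = clamp(alpha * pixel + beta).
    alpha scales contrast, beta shifts brightness; results are
    clamped to the 8-bit range 0..255."""
    return [[min(255, max(0, int(alpha * px + beta))) for px in row]
            for row in img]

dark = [[10, 60, 120]]
brighter = adjust(dark, beta=50)           # [[60, 110, 170]]
higher_contrast = adjust(dark, alpha=2.0)  # [[20, 120, 240]]
```

The clamping step matters: without it, strong contrast boosts overflow the 8-bit range and wrap around.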

OpenCV 3 Image Thresholding and Segmentation

Thresholding

Thresholding is the simplest method of image segmentation. It is a non-linear operation that converts a gray-scale image into a binary image, where one of two levels is assigned to each pixel depending on whether it is below or above the specified threshold value.

cv2.threshold(src, thresh, maxval, type[, dst])

This function applies fixed-level thresholding to a single-channel array and returns the computed threshold value and the thresholded image.

src - input array (single-channel, 8-bit or 32-bit floating point).
dst - output array of the same size and type as src.

Thresholding - code and output

Original images are available: gradient.png and circle.png

Adaptive Thresholding

Using a global threshold value may not be a good choice where the image has different lighting conditions in different areas.

cv.AdaptiveThreshold(src, dst, maxValue, adaptive_method=CV_ADAPTIVE_THRESH_MEAN_C, thresholdType=CV_THRESH_BINARY, blockSize=3, param1=5)

where:
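As a language-neutral sketch of the two operations above, here is what binary thresholding and adaptive mean thresholding do, written in pure Python without OpenCV (the toy pixel values are made up; OpenCV operates on NumPy arrays, not nested lists):

```python
def threshold(src, thresh, maxval):
    """Fixed-level binary thresholding, like cv2.threshold with
    cv2.THRESH_BINARY: pixel -> maxval if strictly above thresh, else 0."""
    return [[maxval if px > thresh else 0 for px in row] for row in src]

def adaptive_threshold_mean(src, maxval, block_size, c):
    """Adaptive mean thresholding: each pixel is compared against the
    mean of its block_size x block_size neighbourhood minus c, so the
    threshold varies with local lighting."""
    h, w = len(src), len(src[0])
    r = block_size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [src[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            if src[y][x] > sum(vals) / len(vals) - c:
                out[y][x] = maxval
    return out

gray = [[10, 50, 120],
        [200, 127, 255],
        [0, 128, 90]]
binary = threshold(gray, 127, 255)  # only 200, 255, 128 survive as 255
```

With a global threshold, a dim but locally bright region is lost; the adaptive version keeps it because the comparison is against the local mean.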

Raspberry Pi + OpenCV

OpenCV is a suite of powerful computer vision tools. Here is a quick overview of how I installed OpenCV on my Raspberry Pi with debian6-19-04-2012. The guide is based on the official OpenCV Installation Guide on Debian and Ubuntu. Before you begin, make sure you have expanded your SD card to allow for the install of OpenCV. It's a big package with lots of dependencies; you can follow my instructions here. There are some dependency issues with the order of the install, mostly with regard to libjpeg, so be sure to install in this order. Next, pull down the source files for OpenCV using wget:

wget

Once the download has finished, extract the archive, remove the no-longer-needed archive (to save space), change directory to the top of the source tree, make a directory for the build, and change into it:

tar -xvjpf OpenCV-2.3.1a.tar.bz2
rm OpenCV-2.3.1a.tar.bz2
cd OpenCV-2.3.1/
mkdir build
cd build

XSL Transformations (XSLT)

Abstract

This specification defines the syntax and semantics of XSLT, which is a language for transforming XML documents into other XML documents. XSLT is designed for use as part of XSL, which is a stylesheet language for XML. In addition to XSLT, XSL includes an XML vocabulary for specifying formatting. XSL specifies the styling of an XML document by using XSLT to describe how the document is transformed into another XML document that uses the formatting vocabulary. XSLT is also designed to be used independently of XSL.

Status of this document

This document has been reviewed by W3C Members and other interested parties and has been endorsed by the Director as a W3C Recommendation. The list of known errors in this specification is available at . Comments on this specification may be sent to xsl-editors@w3.org; archives of the comments are available. The English version of this specification is the only normative version.

Table of contents

OpenCV History

The project's goals were to:

- Advance vision research by providing not only open but also optimized code for basic vision infrastructure. No more reinventing the wheel.
- Disseminate vision knowledge by providing a common infrastructure that developers could build on, so that code would be more readily readable and transferable.
- Advance vision-based commercial applications by making portable, performance-optimized code available for free, with a license that did not require the applications themselves to be open or free.

The first alpha version of OpenCV was released to the public at the IEEE Conference on Computer Vision and Pattern Recognition in 2000, and five betas were released between 2001 and 2005. The first 1.0 version was released in 2006. The second major release of OpenCV was in October 2009. In August 2012, support for OpenCV was taken over by a non-profit foundation, OpenCV.org, which maintains a developer[2] and user site.[3]

Applications

OpenCV's application areas include:

Programming language

algorithm - Image Segmentation using Mean Shift explained

Using a webcam with the Raspberry Pi

This is a detailed post on how to get your fridge to autonomously order fruit for you when you are running low. An RPi takes a picture every day and detects whether you have fruit using my Caffe web query code. If your fridge is low on fruit, it orders fruit using Instacart, which is then delivered to your house. You can find the code with a walkthrough here: Some of my posts are things I end up using every day, and some are proofs of concept that I think are interesting.

Hacking up an Instacart API

The first thing we need is a service that picks out food and delivers it to you. Head over to instacart.com, set up an account, and log in. That string is what you need to access your Instacart account:

curl

You should get back a response that looks like this: Now we just need to figure out what the different items are. Now, your cart should be full of fruit again.

Detecting fruit in your fridge

The Forms Working Group

The Forms Working Group is chartered by the W3C to develop the next generation of forms technology for the World Wide Web. Its mission is to address the patterns of intricacy, dynamism, multi-modality, and device independence that have become prevalent in Web forms applications around the world. The technical reports of this working group have the root name XForms, due to the use of XML to express the vocabulary of the forms technology developed by the working group. The Forms Working Group comprises W3C members and invited experts. To join, ask your W3C Advisory Committee Representative to use this link to nominate you and agree to the patent policy.

- 2013-03-12: Orbeon Forms 4.0 released.
- 2012-08-07: Public working draft of XForms 2.0.
- 2011-04-26: Tutorial - An Introduction to XForms for Digital Humanists.
- 2011-02-01: betterForm 'limegreen': betterForm has a forthcoming release of their XForms system, codenamed limegreen.
- 2011-01-19: IBM Forms 4.0.
- 2010-06-30: Xfolite.

vision_opencv

- electric: Documentation generated on January 11, 2013 at 11:58 AM
- fuerte: Documentation generated on December 28, 2013 at 05:43 PM
- groovy: Documentation generated on March 27, 2014 at 12:20 PM (job status).
- hydro: Documentation generated on March 27, 2014 at 01:33 AM (job status).
- indigo: Documentation generated on March 27, 2014 at 01:22 PM (job status).

Documentation

The vision_opencv stack provides packaging of the popular OpenCV library for ROS. For information about the OpenCV library, please see the OpenCV main page, which links to complete documentation for OpenCV as well as other OpenCV resources (like the bug tracker). For OpenCV, vision_opencv provides several packages:

- cv_bridge: Bridge between ROS messages and OpenCV.
- image_geometry: Collection of methods for dealing with image and pixel geometry.

In order to use ROS with OpenCV, please see the cv_bridge package. As of electric, OpenCV is a system dependency.

Using OpenCV in your ROS code

Training Haar Cascades | memememe

For better or worse, most cell phones and digital cameras today can detect human faces, and, as seen in our previous post, it doesn't take too much effort to get simple face detection code running on an Android phone (or any other platform) using OpenCV. This is all thanks to the Viola-Jones algorithm for face detection, using Haar-based cascade classifiers. There is lots of information about this online, but a very nice explanation can be found on the OpenCV website. (Image by Greg Borenstein, shared under a CC BY-NC-SA 2.0 license.) It's basically a machine learning algorithm that uses a bunch of images of faces and non-faces to train a classifier that can later be used to detect faces in real time. The algorithm implemented in OpenCV can also be used to detect other things, as long as you have the right classifiers. Actually, that last link is for more than just iPhones. Similar to what we want, but since we have a very specific phone to detect, we decided to train our own classifier.
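The Haar-based features at the heart of Viola-Jones are just differences of pixel sums over adjacent rectangles, computed in constant time with an integral image. Here is a minimal pure-Python sketch of that core idea (the 2x2 toy image is made up; OpenCV's actual implementation is in C++ and adds the cascade of boosted classifiers on top):

```python
def integral_image(img):
    """Summed-area table with an extra zero row/column:
    ii[y][x] = sum of img over the rectangle from (0, 0) to (y-1, x-1)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of pixels in any rectangle, in O(1) via four table lookups."""
    return (ii[top + height][left + width] - ii[top][left + width]
            - ii[top + height][left] + ii[top][left])

def haar_two_rect(ii, top, left, height, width):
    """A simple two-rectangle Haar-like feature: sum of the left half
    of the window minus sum of the right half."""
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, half))

img = [[1, 2],
       [3, 4]]
ii = integral_image(img)
feature = haar_two_rect(ii, 0, 0, 2, 2)  # (1+3) - (2+4) = -2
```

Because every rectangle sum costs four lookups regardless of its size, thousands of such features can be evaluated per window, which is what makes real-time detection feasible.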

Book excerpt: Converting XML to spreadsheet, and vice versa

Often it is useful for XML data to be presented as a spreadsheet. A typical spreadsheet (for example, a Microsoft Excel spreadsheet) consists of cells represented in a grid of rows and columns, containing textual data, numeric data, or formulas. An Excel spreadsheet defines some standard functions, such as SUM and AVERAGE, that you can specify in cells. The Apache Jakarta POI project provides the HSSF API to create an Excel spreadsheet from an XML document or to go the opposite way, parsing an Excel spreadsheet and converting it to XML.

Overview

The Jakarta POI HSSF API provides classes to create an Excel workbook and add spreadsheets to the workbook.

Listing 1. incomestatements.xml

<?

Creating an Eclipse project

In this article, we create and parse an Excel spreadsheet using the Apache POI HSSF API. To compile and run the code examples, you will need an Eclipse project. Figure 2 shows the project directory structure.
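The excerpt's own examples use the Java HSSF API, which is not reproduced here. As a rough, language-neutral sketch of the same XML-to-spreadsheet idea, this pure-Python version flattens XML records into CSV rows; the element names (`statement`, `quarter`, `revenue`) are invented for illustration and are not taken from incomestatements.xml:

```python
import csv
import io
import xml.etree.ElementTree as ET

XML = """<incomestatements>
  <statement><quarter>Q1</quarter><revenue>1200</revenue></statement>
  <statement><quarter>Q2</quarter><revenue>1350</revenue></statement>
</incomestatements>"""

def xml_to_csv(xml_text):
    """Flatten each <statement> record into one spreadsheet-style row."""
    root = ET.fromstring(xml_text)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["quarter", "revenue"])  # header row
    for stmt in root.findall("statement"):
        writer.writerow([stmt.findtext("quarter"), stmt.findtext("revenue")])
    return out.getvalue()

print(xml_to_csv(XML))
```

Going the opposite way (spreadsheet to XML) is the mirror image: read rows with `csv.reader` and emit one element per row. A real Excel workbook additionally carries cell types and formulas, which is where a library like POI earns its keep.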

Canny edge detector

The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986.

Development of the Canny algorithm

Canny's aim was to discover the optimal edge detection algorithm. The criteria were:

- good detection - the algorithm should mark as many real edges in the image as possible.
- good localization - edges marked should be as close as possible to the edge in the real image.
- minimal response - a given edge in the image should only be marked once, and where possible, image noise should not create false edges.

Stages of the Canny algorithm

Noise reduction

Because the Canny edge detector is susceptible to noise present in raw unprocessed image data, it uses a filter based on a Gaussian (bell) curve: the raw image is convolved with a Gaussian filter, for example a 5x5 Gaussian mask with σ = 1.4 passed across each pixel.

Finding the intensity gradient of the image

Parameters
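The noise-reduction stage above can be sketched in a few lines of pure Python: build the normalized 5x5 Gaussian mask with σ = 1.4 and convolve it over the image. This is an illustrative sketch of that one stage only, not a full Canny implementation (real code would use NumPy and handle image borders):

```python
import math

def gaussian_kernel(size=5, sigma=1.4):
    """Normalized size x size Gaussian mask, as used in Canny's
    noise-reduction stage; all weights sum to 1."""
    r = size // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-r, r + 1)] for y in range(-r, r + 1)]
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k]

def convolve(img, kernel):
    """Valid-mode 2D convolution: the output shrinks by
    (kernel size - 1) in each dimension, so borders are skipped."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(kernel[j][i] * img[y + j][x + i]
                 for j in range(kh) for i in range(kw))
             for x in range(w)] for y in range(h)]

smoothed = convolve([[10.0] * 7 for _ in range(7)], gaussian_kernel())
```

Because the kernel weights sum to 1, smoothing a uniform region leaves it unchanged; on a noisy region, each output pixel becomes a weighted local average, which is exactly why isolated noise pixels stop producing false edges in the later gradient stage.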
