
Introduction to programming with OpenCV

Related: Python webcam image capture, Computer Vision

FaceDetector.Face Class Overview
A Face contains all the information identifying the location of a face in a bitmap.
Summary
Constants
public static final float CONFIDENCE_THRESHOLD - The minimum confidence factor of good face recognition. Constant value: 0.4
public static final int EULER_X - The x-axis Euler angle of a face. Constant value: 0 (0x00000000)
public static final int EULER_Y - The y-axis Euler angle of a face. Constant value: 1 (0x00000001)
public static final int EULER_Z - The z-axis Euler angle of a face. Constant value: 2 (0x00000002)
Public Methods
public float confidence() - Returns a confidence factor between 0 and 1.
public float eyesDistance() - Returns the distance between the eyes.
public void getMidPoint(PointF point) - Sets point to the position of the mid-point between the eyes.
public float pose(int euler) - Returns the face's pose, i.e. the Euler angle of the face for the given axis.

Cell Counting - MATLAB & Simulink Example
This example shows how to use a combination of basic morphological operators and blob analysis to extract information from a video stream. In this case, the example counts the number of E. coli bacteria in each video frame. Note that the cells are of varying brightness, which makes the task of segmentation more challenging.
Introduction
This example illustrates how to use the morphological and BlobAnalysis System objects to segment individual cells and count them.
Initialization
Use these next sections of code to initialize the required variables and objects.
    VideoSize = [432 528];
Create a System object™ to read video from an avi file.
    filename = 'ecolicells.avi';
    hvfr = vision.VideoFileReader(filename, ...
Create two morphological dilation System objects which are used to remove uneven illumination and to emphasize the boundaries between the cells.
    hdilate1 = vision.MorphologicalDilate('NeighborhoodSource', 'Property', ...
    hautoth = vision.Autothresholder( ...
    hblob = vision.BlobAnalysis( ...
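The MATLAB listing above is truncated in this excerpt. Purely as a rough sketch of the same idea in Python/OpenCV (illumination correction with a morphological operation, automatic thresholding, then counting blobs), the following could serve as a starting point; the filename is taken from the example, but the use of an opening instead of the example's dilation pair, the kernel size and the Otsu threshold are all assumptions.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture('ecolicells.avi')   # filename assumed from the example
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Estimate the uneven illumination with a large morphological opening and subtract it
        background = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
        corrected = cv2.subtract(gray, background)
        # Automatic (Otsu) threshold, then count connected blobs
        _, bw = cv2.threshold(corrected, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        num_labels, _ = cv2.connectedComponents(bw)
        print('cells in frame:', num_labels - 1)   # label 0 is the background

    cap.release()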

Face Detection and Face Recognition with Real-time Training from a Camera
To improve the recognition performance, there are MANY things that can be improved here, some of them fairly easy to implement. For example, you could add color processing, edge detection, etc. You can usually improve the face recognition accuracy by using more input images, at least 50 per person, by taking more photos of each person, particularly from different angles and lighting conditions. If you can't take more photos, there are several simple techniques you could use to obtain more training images by generating new images from your existing ones:
- You could create mirror copies of your facial images, so that you will have twice as many training images and the model won't have a bias towards left or right.
- You could translate, resize or rotate your facial images slightly to produce many alternative images for training, so that the recognizer is less sensitive to exact conditions.
- You could add image noise to get more training images, which improves the tolerance to noise.
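As a concrete illustration of those augmentation ideas, here is a minimal Python/OpenCV sketch that writes mirrored, slightly rotated, slightly shifted and noisy variants of one face image; the filename, rotation angle, shift and noise level are arbitrary assumptions rather than values from the article.

    import cv2
    import numpy as np

    face = cv2.imread('face.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input image
    h, w = face.shape

    mirrored = cv2.flip(face, 1)                           # horizontal mirror copy

    # Small rotation (5 degrees) around the image centre
    M_rot = cv2.getRotationMatrix2D((w / 2, h / 2), 5, 1.0)
    rotated = cv2.warpAffine(face, M_rot, (w, h))

    # Small translation by a few pixels
    M_shift = np.float32([[1, 0, 3], [0, 1, 2]])
    shifted = cv2.warpAffine(face, M_shift, (w, h))

    # Additive Gaussian noise
    noise = np.random.normal(0, 10, face.shape).astype(np.float32)
    noisy = np.clip(face.astype(np.float32) + noise, 0, 255).astype(np.uint8)

    for name, img in [('mirror', mirrored), ('rot', rotated), ('shift', shifted), ('noise', noisy)]:
        cv2.imwrite('face_%s.png' % name, img)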

Tutorial: OpenCV haartraining (Rapid Object Detection With A Cascade of Boosted Classifiers Based on Haar-like Features) - Naotoshi Seo
Objective
The OpenCV library provides a very interesting face detection demonstration. Furthermore, it provides the programs (or functions) that were used to train the classifiers for its face detection system, called haartraining, so that we can create our own object classifiers using these functions. However, I could not follow exactly how the OpenCV developers performed the haartraining for their face detection system, because they did not provide some information, such as which images and parameters they used for training. My working environment is Visual Studio + cygwin on Windows XP, or Linux. FYI: I recommend working on something else concurrently while running haartraining, because you have to wait many days during training (it can easily take a week). A picture from the OpenCV website
History
Data Preparation
Positive (Face) Images

algorithm - Image Segmentation using Mean Shift explained
CameraCapture
Here is a simple framework to connect to a camera and show the images in a window. Sarin Sukumar A, DSP Engineer - sarinsukumar@gmail.com. Information on controlling the camera parameters from a program: the user can control the output format of the camera (YUV2, RGB, etc.), brightness, exposure, autofocus, zoom, white balance and so on. I have done it for a USB cam and OpenCV 2.0.
    bool CvCaptureCAM_DShow::open( int _index )
    {
        bool result = false;
        long min = 0, max = 0, currentValue = 0, flags = 0, defaultValue = 0, stepAmnt = 0;
        close();
    #ifdef DEFAULT
        VI.deviceSetupWithSubtype(_index, 640, 480, _YUY2);
    #endif
    #ifdef MEDIUM
        VI.deviceSetupWithSubtype(_index, 1280, 1024, _YUY2);
    #endif
    #ifdef ABOVE_MEDIUM
        VI.deviceSetupWithSubtype(_index, 1600, 1200, _YUY2);
    #endif
    #ifdef HIGH
        VI.deviceSetupWithSubtype(_index, 2048, 1536, _YUY2);
    #endif
        //VI.showSettingsWindow(_index);
        //custom code
        if( ! getVideoSettingFilter(
Make this line of code like this
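For comparison, newer OpenCV versions expose some of these camera controls directly from Python through VideoCapture properties, so a rough modern equivalent of what the post achieves by patching the DirectShow capture code could look like the sketch below; the property values are arbitrary, and which properties actually take effect depends on the camera and backend.

    import cv2

    cap = cv2.VideoCapture(0)                  # first USB camera
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)     # request a 640x480 format
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    cap.set(cv2.CAP_PROP_BRIGHTNESS, 128)      # example values; valid ranges
    cap.set(cv2.CAP_PROP_EXPOSURE, -4)         # are driver-specific

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow('camera', frame)
        if cv2.waitKey(1) & 0xFF == 27:        # Esc quits
            break

    cap.release()
    cv2.destroyAllWindows()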

PythonInterface
Information on this page is deprecated, check the latest always-up-to-date documentation at . Starting with OpenCV release 2.2, OpenCV will have completed its new Python interface to cover all the C and C++ functions directly using numpy arrays. (The previous Python interface is described in SwigPythonInterface.) See notes on new developments at OpenCV Meeting Notes 2010-09-28 under the "Vadim" subsection. Some highlights of the new bindings:
- a single import of all of OpenCV using "import cv"
- OpenCV functions no longer have the "cv" prefix
- simple types like CvRect and CvScalar use Python tuples
- sharing of image storage, so image transport between OpenCV and other systems (e.g. numpy and ROS) is very efficient
Full documentation for the Python functions is in the Cookbook.
Convert an image:
    import cv
    cv.SaveImage("foo.png", cv.LoadImage("foo.jpg"))
Compute the Laplacian
Notes
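The "Compute the Laplacian" example itself is missing from this excerpt; a minimal sketch in the same old-style cv API might look like this (the filenames and the aperture size of 3 are assumptions):

    import cv

    src = cv.LoadImage("foo.jpg", cv.CV_LOAD_IMAGE_GRAYSCALE)
    # Laplace wants a 16-bit signed destination to avoid overflow on 8-bit input
    lap = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_16S, 1)
    cv.Laplace(src, lap, 3)
    # Convert back to 8-bit so the result can be written as a normal image
    out = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_8U, 1)
    cv.ConvertScaleAbs(lap, out)
    cv.SaveImage("foo-laplace.png", out)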

OpenCV 3 Image Thresholding and Segmentation
Thresholding
Thresholding is the simplest method of image segmentation. It is a non-linear operation that converts a gray-scale image into a binary image, where the two levels are assigned to pixels that are below or above the specified threshold value.
    cv2.threshold(src, thresh, maxval, type[, dst])
This function applies fixed-level thresholding to a single-channel array. The function returns the computed threshold value and the thresholded image.
src - input array (single-channel, 8-bit or 32-bit floating point).
dst - output array of the same size and type as src.
Thresholding - code and output
The code looks like this:
Output:
Original images are available: gradient.png and circle.png
Adaptive Thresholding
Using a global threshold value may not be a good choice where the image has different lighting conditions in different areas.
    cv.AdaptiveThreshold(src, dst, maxValue, adaptive_method=CV_ADAPTIVE_THRESH_MEAN_C, thresholdType=CV_THRESH_BINARY, blockSize=3, param1=5)
where:
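The code and output images are not included in this excerpt; a minimal cv2 sketch covering both the global and the adaptive case, using the gradient.png image mentioned above, could look like this (the threshold of 127, block size of 11 and offset of 2 are assumed values):

    import cv2

    img = cv2.imread('gradient.png', cv2.IMREAD_GRAYSCALE)

    # Global, fixed-level threshold: returns the threshold used and the binary image
    ret, global_bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

    # Adaptive threshold: the level is computed per neighbourhood (here 11x11, offset 2)
    adaptive_bw = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                        cv2.THRESH_BINARY, 11, 2)

    cv2.imwrite('global_threshold.png', global_bw)
    cv2.imwrite('adaptive_threshold.png', adaptive_bw)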

drive a webcam with python I bought a USB webcam off of eBay quite some time ago, and I decided to connect it to my telescope with a little bit of hardware hackery. I’ll have to see about posting a writeup on how I did that at a later time. Anyway, when I installed my camera software, I quickly found how horrible the program was. It gave a tiny preview of what the camera saw, and had no way of capturing images or video without waaaay too many clicks of the mouse. That’s when I decided to write my own in Python. The main libraries that I ended up using were VideoCapture, PIL, and pygame. Here’s the code: I decided to use pygame in order to build this because it can actually handle the fps that I need for video. A couple of noteworthy points: The function on line 15 is simply there to help automate displaying information on the screen. If you’re trying to write a webcam app of your own, I hope this gets you pointed in the right direction.
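The post's own code is not reproduced in this excerpt. Purely as a sketch of how VideoCapture, PIL and pygame fit together (assuming the Windows-only VideoCapture package and a 640x480 window), a capture-and-display loop might look roughly like this:

    import pygame
    from VideoCapture import Device    # Windows-only webcam library

    cam = Device()                                 # first available webcam
    pygame.init()
    screen = pygame.display.set_mode((640, 480))   # assumed capture size

    running = True
    while running:
        img = cam.getImage()                       # PIL image from the webcam
        frame = pygame.image.frombuffer(img.tobytes(), img.size, 'RGB')  # img.tostring() on old PIL
        screen.blit(frame, (0, 0))
        pygame.display.flip()
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

    pygame.quit()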

OpenVIDIA/python - OpenVIDIA
From OpenVIDIA
This article describes how to get started using Python for computer vision. Why use Python at all? Let's say you have written some GPU "core" kernel. You'd like to wrap it with webcam or video functions, or need it hooked up to other libraries, perhaps even other GPU libraries. Here a high-level scripting language comes to the rescue. Python is a 'very high level' language that we'll use to take many different types of functionality, such as file handling, web interfaces and database management, and tie these to GPU computer vision by accessing existing GPU computer vision libraries such as NPP, OpenCV and, of course, OpenVIDIA. So you can grab your favorite low-level image processing GPU library for early processing, maybe some GPU SVMs or solvers, hook in a database interface for image recognition, and mix them with your own work to make a whole pipeline without fuss.
Installation
We start by installing the following: Python 2.6 (32-bit version).
    while True:
        repeat()
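The dangling "while True: repeat()" fragment is the tail of the capture loop used in tutorials of this era; a self-contained sketch of that pattern in the old cv bindings, with an assumed window name and camera index, might be:

    import cv

    cv.NamedWindow('w1', cv.CV_WINDOW_AUTOSIZE)
    camera = cv.CaptureFromCAM(0)       # first camera

    def repeat():
        # Grab a frame, show it, and give HighGUI time to process window events
        frame = cv.QueryFrame(camera)
        cv.ShowImage('w1', frame)
        cv.WaitKey(10)

    while True:
        repeat()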

Training Haar Cascades | memememe For better or worse, most cell phones and digital cameras today can detect human faces, and, as seen in our previous post, it doesn’t take too much effort to get simple face detection code running on an Android phone (or any other platform), using OpenCV. This is all thanks to the Viola-Jones algorithm for face detection, using Haar-based cascade classifiers. There is lots of information about this online, but a very nice explanation can be found on the OpenCV website. (image by Greg Borenstein, shared under a CC BY-NC-SA 2.0 license) It’s basically a machine learning algorithm that uses a bunch of images of faces and non-faces to train a classifier that can later be used to detect faces in realtime. The algorithm implemented in OpenCV can also be used to detect other things, as long as you have the right classifiers. Actually, that last link is for more than just iPhones. Similar to what we want, but since we have a very specific phone to detect, we decided to train our own classifier. 1.
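As a reminder of what a trained classifier looks like in use, here is a minimal Python/OpenCV detection sketch; the cascade file phone_cascade.xml and the test image name are hypothetical stand-ins, and scaleFactor/minNeighbors are just typical starting values.

    import cv2

    # Load a trained Haar cascade (hypothetical file produced by training)
    cascade = cv2.CascadeClassifier('phone_cascade.xml')

    img = cv2.imread('test.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Multi-scale sliding-window detection over the grayscale image
    objects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in objects:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite('detections.jpg', img)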

Capturing frames from a webcam on Linux :: Joseph Perla
Not many people are trying to capture images from their webcam using Python under Linux and blogging about it. In fact, I could find nobody who did that. I found people capturing images using Python under Windows, people capturing images using C under Linux, and finally some people capturing images with Python under Linux but not blogging about it. I wrote this instructional post to help people who want to start processing images from a webcam using the great Python language and a stable Linux operating system. There is a very good library for capturing images on Windows called VideoCapture. It works, and a number of people have blogged about using it. There are a number of very old libraries which were meant to help with capturing images on Linux: libfg, two separate versions of pyv4l, and pyv4l2. Finally, I learned that OpenCV has an interface to V4L/V4L2, and OpenCV has very complete Python bindings. This is example utility code.
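The utility code itself is not included in this excerpt; a minimal sketch of grabbing and saving one frame through OpenCV's V4L-backed capture, using the old cv bindings of that era, might look like this (camera index and output filename are assumptions):

    import cv

    capture = cv.CaptureFromCAM(0)      # /dev/video0 via V4L/V4L2
    frame = cv.QueryFrame(capture)      # grab a single frame
    if frame is not None:
        cv.SaveImage('webcam.png', frame)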
