
GazeHawk - Webcam Eye Tracking - Ad Testing and Optimization

Related:  Computer Vision

Predict Gaze

Gender Classification with OpenCV — OpenCV v2.4.9 documentation. Introduction: Many people interested in face recognition also want to know how to perform related image classification tasks, such as gender classification (gender detection), emotion classification (emotion detection), glasses classification (glasses detection), and so on. This has become very easy with the new FaceRecognizer class. Prerequisites: For gender classification of faces, you'll first need some images of male and female faces — for example: Angelina Jolie, Arnold Schwarzenegger, Brad Pitt, Emma Watson, George Clooney, Jennifer Lopez, Johnny Depp, Justin Timberlake, Katy Perry, Keanu Reeves, Naomi Watts, Patrick Stewart, Tom Cruise. Once you have acquired some images, you'll need to read them. Let's dissect the lines of the image list: /home/philipp/facerec/data/gender/male/keanu_reeves/keanu_reeves_01.jpg;0 /home/philipp/facerec/data/gender/male/keanu_reeves/keanu_reeves_02.jpg;0 /home/philipp/facerec/data/gender/male/keanu_reeves/keanu_reeves_03.jpg;0 ... Each line gives an image path and its class label, separated by a semicolon (here 0 denotes male). All images for this example were chosen to have a frontal face perspective. Running the Demo
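The path;label lines above are easy to parse into paths and integer labels. A minimal sketch (the function name `parse_image_list` is illustrative, not from the OpenCV docs), assuming one semicolon-separated path and label per line:

```python
def parse_image_list(lines):
    """Parse 'path;label' lines into parallel (paths, labels) lists."""
    paths, labels = [], []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        path, label = line.rsplit(";", 1)
        paths.append(path)
        labels.append(int(label))
    return paths, labels

lines = [
    "/home/philipp/facerec/data/gender/male/keanu_reeves/keanu_reeves_01.jpg;0",
    "/home/philipp/facerec/data/gender/female/emma_watson/emma_watson_01.jpg;1",
]
paths, labels = parse_image_list(lines)
print(labels)  # [0, 1]
```

Each path would then be loaded (e.g. as a grayscale image) and handed to FaceRecognizer together with its label.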

blog:gender_classification — My last post was very long, and I promise to keep this one short. In this post I want to do gender classification on a set of face images and find out which specific features faces differ in. Dataset: I have used the celebrity dataset from my last post and downloaded images for Emma Watson, Naomi Watts and Jennifer Lopez. The database therefore has 8 male and 5 female subjects, each with 10 images. All images were chosen to have a frontal face perspective and have been cropped to 140x140 pixels, just like this set of images. Experiments: Get the code from github by either cloning the repository: $ git clone or downloading the latest version as a tar or zip archive. Then start up Octave and make the functions available: $ cd facerec/m $ octave octave> addpath(genpath(".")); octave> fun_fisherface = @(X,y) fisherfaces(X,y); % no parameters needed octave> fun_predict = @(model,Xtest) fisherfaces_predict(model,Xtest,1); % 1-NN Fisherfaces and Conclusion
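The fisherfaces_predict call above classifies with a 1-nearest-neighbor rule in the projected subspace. A minimal NumPy sketch of 1-NN (names are illustrative; the actual Octave implementation lives in the facerec repository):

```python
import numpy as np

def predict_1nn(train_X, train_y, x):
    """Return the label of the training sample closest to x (Euclidean)."""
    dists = np.linalg.norm(train_X - x, axis=1)
    return train_y[int(np.argmin(dists))]

train_X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
train_y = [0, 0, 1]
print(predict_1nn(train_X, train_y, np.array([4.5, 5.2])))  # 1
```

In the Fisherfaces pipeline, train_X would hold the projected training faces and x a projected query face.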

blog:fisherfaces — Some time ago I wrote a post on Linear Discriminant Analysis, a statistical method often used for dimensionality reduction and classification. It was invented by the great statistician Sir R. A. Fisher, who successfully used it for classifying flowers in his 1936 paper "The use of multiple measurements in taxonomic problems" (the famous Iris Data Set is still available at the UCI Machine Learning Repository). This was also recognized by Belhumeur, Hespanha and Kriegman, and so they applied a Discriminant Analysis to face recognition in their paper "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection". You seldom see something explained with code and examples, so I thought I'd change that. I've put all the code under a BSD License, so feel free to use it for your projects. There's also a much simpler implementation in the documents over at: You'll find all code shown in these documents in the project's github repository: Experiments
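The core of Fisher's Linear Discriminant Analysis is finding a projection that maximizes between-class scatter relative to within-class scatter. A minimal two-class NumPy sketch (variable names are illustrative, not taken from the facerec code):

```python
import numpy as np

def fisher_direction(X0, X1):
    """Return the LDA direction w ~ Sw^-1 (mu1 - mu0) for two classes."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter: sum of the per-class scatter matrices.
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 0.5, size=(50, 2))  # class 0 samples
X1 = rng.normal([3.0, 1.0], 0.5, size=(50, 2))  # class 1 samples
w = fisher_direction(X0, X1)
# Projecting onto w separates the class means as far as the
# within-class scatter allows.
```

For faces, PCA is applied first so that the within-class scatter matrix is non-singular; that combination is what the Fisherfaces method implements.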

GENDER CLASSIFICATION — AForge.NET. AForge.NET is a computer vision and artificial intelligence library originally developed by Andrew Kirillov for the .NET Framework. The source code and binaries of the project are available under the terms of the Lesser GPL and the GPL (GNU General Public License). Another (unaffiliated) Google Code project named Accord.NET extends the features of the original AForge.NET library. Features: a complete list of features is available on the features page of the project. The framework ships not only with its libraries and their sources, but also with many sample applications demonstrating its use, and with documentation help files provided in HTML Help format.

Level 3c - How To Improve Face Detection | EmguCV and C# WinForms QUESTION: "Face detection results are not up to my expectations/needs! How do I improve them?" Simple Solution: Is your problem something like: not all faces are being detected? Or that bigger, smaller, farther, or slightly tilted/rotated faces are not detected? However, we are here to learn face detection in C# using EmguCV, and the concepts that apply to OpenCV also hold true for EmguCV, so I have arranged all the necessary things from his article right here in this tutorial. 1.1- The Haar Cascade: I had mentioned that the first parameter in the call to DetectHaarCascade() is the XML file from which trained data is loaded for the Haar classifier. It's also possible to create your own custom XML file using the HaarTraining application in OpenCV's apps directory. Do NOT try training an XML file if one already exists and is available over the internet; it will waste your time and energy. 1.2- Scale Increase Rate 1.3- Minimum Neighbors Threshold
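The Minimum Neighbors threshold works by grouping overlapping raw detections and keeping only clusters with enough supporting hits; true faces attract many overlapping windows, stray false positives usually attract one. A rough pure-Python sketch of the idea (a simplification of OpenCV's actual rectangle grouping, with illustrative names):

```python
def overlap(a, b):
    """True if two (x, y, w, h) rectangles overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def filter_detections(rects, min_neighbors):
    """Keep rectangles supported by at least min_neighbors overlapping hits."""
    kept = []
    for r in rects:
        neighbors = sum(1 for other in rects if other is not r and overlap(r, other))
        if neighbors >= min_neighbors:
            kept.append(r)
    return kept

raw = [(10, 10, 50, 50), (12, 11, 50, 50), (14, 9, 50, 50), (200, 200, 40, 40)]
print(filter_detections(raw, 2))  # the lone (200, 200, 40, 40) hit is dropped
```

Raising min_neighbors rejects more false positives but can also discard weakly detected real faces, which is exactly the trade-off the tutorial's parameter controls.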

Face Tracking with CAMShift using OpenCV/SimpleCV | Paranoid Android

    import cv2
    import numpy as np

    def camshift_tracking(img1, img2, bb):
        # Build a hue histogram from the region of interest in img1,
        # then back-project it onto img2 and run CamShift.
        hsv = cv2.cvtColor(img1, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
        x0, y0, w, h = bb
        x1 = x0 + w - 1
        y1 = y0 + h - 1
        hsv_roi = hsv[y0:y1, x0:x1]
        mask_roi = mask[y0:y1, x0:x1]
        hist = cv2.calcHist([hsv_roi], [0], mask_roi, [16], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        hist_flat = hist.reshape(-1)
        prob = cv2.calcBackProject([cv2.cvtColor(img2, cv2.COLOR_BGR2HSV)], [0], hist_flat, [0, 180], 1)
        prob &= mask
        term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        new_ellipse, track_window = cv2.CamShift(prob, bb, term_crit)
        return track_window

    def face_track():
        cap = cv2.VideoCapture(0)
        ret, img = cap.read()
        bb = (125, 125, 200, 100)  # initial track window (x, y, w, h)
        while True:
            ret, img1 = cap.read()
            if not ret:
                break
            bb = camshift_tracking(img1, img, bb)
            img = img1

Facial Age Estimation Age estimation is the determination of a person's age based on biometric features. Although age estimation can be accomplished using different biometric traits, this article focuses on facial age estimation, which relies on biometric features extracted from a person's face. Problem Definition: The appearance of a human face is affected considerably by aging (see Figure 1). Figure 1: Example of aging effects for a subject. In automatic facial age estimation the aim is to use dedicated algorithms that enable the estimation of a person's age based on features derived from his/her face image. An important aspect of the age estimation problem is the formulation of suitable metrics for assessing the performance of age estimators. The facial age estimation problem shares similarities with the age progression problem. Motivation — Age-Based Access Control: In some cases age-based restrictions apply to physical or virtual access. Challenges. Typical Approaches (Lanitis et al.).
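Two metrics commonly used to assess age estimators are the mean absolute error (MAE) between estimated and true ages, and the cumulative score CS(t), the fraction of estimates within t years of the truth. A minimal sketch (function names are illustrative):

```python
def mae(true_ages, estimated_ages):
    """Mean absolute error between true and estimated ages."""
    return sum(abs(a - e) for a, e in zip(true_ages, estimated_ages)) / len(true_ages)

def cumulative_score(true_ages, estimated_ages, t):
    """Fraction of estimates whose error is at most t years."""
    hits = sum(1 for a, e in zip(true_ages, estimated_ages) if abs(a - e) <= t)
    return hits / len(true_ages)

true_ages = [25, 40, 60, 18]
estimated_ages = [27, 35, 58, 18]
print(mae(true_ages, estimated_ages))                  # 2.25
print(cumulative_score(true_ages, estimated_ages, 2))  # 0.75
```

MAE summarizes average accuracy in years, while CS(t) shows how often the estimator is "close enough" for a given tolerance, which matters for applications like age-based access control.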