Eye & Head Tracking

Raspberry Pi Face Recognition Treasure Box. Protect your treasure from prying eyes without remembering a combination or carrying a key; your face is the key to unlock this box! This project shows you how to use a Raspberry Pi and Pi camera to build a box that unlocks itself using face recognition. The software is based on algorithms provided by the OpenCV computer vision library. The Raspberry Pi is a perfect platform for this project because it has the power to run OpenCV while being small enough to fit almost anywhere. This is an intermediate-level project that requires compiling and installing software on the Raspberry Pi, and it was one of the many excellent entries in our recent Raspberry Pi project contest. A rough sketch of the OpenCV detection step appears after the next excerpt.

Optical Player Tracking | Sportvision. In partnership with Hego US and FOX Sports, Sportvision delivers a new graphics system that helps broadcasters visualize personnel changes on the field, call attention to interesting match-ups, and dissect and analyze plays in new ways.
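The treasure box relies on OpenCV for detecting and then recognizing its owner's face. The project's own code is not reproduced here; the following is a minimal sketch of just the detection step, using the frontal-face Haar cascade that ships with OpenCV and a generic webcam (index 0) standing in for the Pi camera module.

    # Minimal face-detection sketch using OpenCV's bundled Haar cascade.
    # The actual project reads frames from the Pi camera module; a generic
    # webcam (index 0) stands in for it here.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()

    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # The real project goes on to run face *recognition* on the detected
        # region before deciding whether to unlock the latch.
        print(f"found {len(faces)} face(s)")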

Especially with wide-angle shots, it can be difficult for fans to see who is lined up left or right, who is back to receive the punt, or who is lined up in the slot. Optical player tracking enables new on-air enhancements to help fans track the action in real time, much like they do today in NASCAR via RACEf/x pointers and graphics. Hego US’ optical tracking software includes two banks of eight unmanned cameras set up high in the stadium at adjacent 35-yard lines.

The cameras track all moving objects, and technicians identify and tag players by number. Once on-screen, the effects remain until removed, enabling analysts to quickly point out a particular player and follow him throughout an entire play.

Motion Tracking on the Cheap with a PIC.
The Eye Tribe • View topic - [Resolved] get the Server data.
2D Room Mapping With a Laser and a Webcam.

Getting Started | eyetribe-docs. When you download and install the EyeTribe SDK, the EyeTribe Server and EyeTribe UI are installed on your computer. This tutorial serves as a starting point for working with the EyeTribe UI for Windows. The EyeTribe UI application is started either from the icon on the desktop or from the TheEyeTribe folder located in your Start menu under All Programs. The software is installed by default in C:\Program Files (x86)\EyeTribe\, where you will find two sub-folders, Client and Server. It is important that the EyeTribe Server is running if you wish to use the EyeTribe UI or any other eye-controlled application.

The EyeTribe UI will automatically attempt to start the Server if it is not already running. Figure: EyeTribe UI (white) and EyeTribe Server (black) icons. The EyeTribe UI provides direct feedback on the current tracking state and allows you to change the default settings to suit your needs. Figure: screenshot of the EyeTribe UI. The Trackbox.

Basics | eyetribe-docs. Eye tracking, or gaze tracking, is a technology for calculating the point a user's gaze falls on as he or she looks around. A device equipped with an eye tracker lets users employ their gaze as an input modality that can be combined with other input devices such as mouse, keyboard, touch, and gestures; these are referred to as active applications. Gaze data collected with an eye tracker can also be used to improve the design of a website or a magazine cover, a use described more thoroughly later on as a passive application.
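Client applications receive this gaze data from the EyeTribe Server described above. As a minimal sketch, the snippet below requests a single gaze frame; the TCP port (6555) and the JSON message layout are assumptions based on the SDK documentation and should be checked against the current API reference.

    # Poll the EyeTribe Server once for the latest gaze frame.
    # Port 6555 and the JSON schema are assumed from the SDK docs; verify
    # against the current API reference.
    import json
    import socket

    with socket.create_connection(("localhost", 6555)) as sock:
        request = {"category": "tracker", "request": "get", "values": ["frame"]}
        sock.sendall(json.dumps(request).encode("utf-8"))
        reply = json.loads(sock.recv(65536).decode("utf-8"))
        frame = reply["values"]["frame"]
        # "avg" holds the smoothed on-screen gaze coordinate in pixels.
        print("gaze point:", frame["avg"]["x"], frame["avg"]["y"])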

Applications that can benefit from eye tracking include games, OS navigation, e-books, market research studies, and usability testing. The Eye Tribe Tracker is an eye tracking system that can calculate where a person is looking from information extracted from the person's face and eyes. Figure 1: The user needs to be located within the Tracker's trackbox.

Computers Watching Movies | benjamin grosser.

Computers Watching Movies (Exhibition Cut): computationally produced HD video with stereo audio (please play full screen). Computers Watching Movies shows what a computational system sees when it watches the same films that we do. The work illustrates this vision as a series of temporal sketches, where the sketching process is presented in synchronized time with the audio from the original clip. Viewers are provoked to ask how computer vision differs from their own human vision, and what that difference reveals about our culturally developed ways of looking. Why do we watch what we watch when we watch it?

Will a system without our sense of narrative or historical patterns of vision watch the same things? Computers Watching Movies was computationally produced using software written by the artist. The clips include 2001: A Space Odyssey, American Beauty, Inception, The Matrix, and Taxi Driver, each presented as computationally produced HD video with stereo audio.

This Is Your Computer Watching The Matrix. "For example, in the explosion part of the Inception scene, the system shifts its attention to the rapidly moving bits and pieces of the buildings as they fly across the screen," as seen above. Would the artist contrast this visually with human eye tracking? "I haven't paired it with eye-tracking software, but I definitely recommend the great work by Tim Smith and David Bordwell on eye-tracking movie watching," Grosser says. "They've done studies where they correlate a number of eye-tracked subjects, and use that data to create clips so you can see where everyone focuses their attention."

But are we watching when we’re watching the watchers? “I think Computers Watching Movies allows the viewer to do this a bit for themselves; as the viewer watches a clip they engage their memory of the scene and try to compare what they recall looking at with what the computer is looking at.” We’re literally superimposing our memories; it’s a meta sort of synesthesia.
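Grosser does not spell out how his software decides where to look, so the following is only an illustrative sketch of one way a vision system can be drawn to motion, as in the Inception example above: compute dense optical flow between consecutive frames and place "attention" on the region moving fastest. The video path is a hypothetical placeholder, and this is not the artist's algorithm.

    # Illustrative only: attend to the fastest-moving region of a clip using
    # dense optical flow. Not the artist's algorithm; "clip.mp4" is a placeholder.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("clip.mp4")
    ok, prev = cap.read()
    if not ok:
        raise SystemExit("could not open clip")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        speed = np.linalg.norm(flow, axis=2)           # per-pixel motion magnitude
        y, x = np.unravel_index(np.argmax(speed), speed.shape)
        print(f"attention drawn to pixel ({x}, {y})")  # region of greatest motion
        prev_gray = gray
    cap.release()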

1.4. Matplotlib: plotting — Scipy lecture notes. 1.4.2. Simple plot. Tip: in this section we want to draw the cosine and sine functions on the same plot. Starting from the default settings, we'll enrich the figure step by step to make it nicer. The first step is to get the data for the sine and cosine functions:

    import numpy as np
    X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
    C, S = np.cos(X), np.sin(X)

X is now a numpy array with 256 values ranging from -π to +π (included). To run the example, you can type it into an IPython interactive session; you can also download each of the examples and run it using regular Python, but you will lose interactive data manipulation. You can get the source for each step by clicking on the corresponding figure. Matplotlib comes with a set of default settings that allow customizing all kinds of properties; later steps in the tutorial adjust these, for example widening the axis limits:

    pl.xlim(X.min() * 1.1, X.max() * 1.1)
    pl.ylim(C.min() * 1.1, C.max() * 1.1)
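Putting the tutorial's pieces together, a minimal self-contained version of the plot might look like the following (using matplotlib.pyplot directly rather than the tutorial's pl alias):

    # Draw sine and cosine on one figure, with slightly widened limits.
    import numpy as np
    import matplotlib.pyplot as plt

    X = np.linspace(-np.pi, np.pi, 256, endpoint=True)   # 256 points from -pi to +pi
    C, S = np.cos(X), np.sin(X)

    plt.plot(X, C, label="cosine")
    plt.plot(X, S, label="sine")

    # Widen the limits slightly so the curves do not touch the frame.
    plt.xlim(X.min() * 1.1, X.max() * 1.1)
    plt.ylim(C.min() * 1.1, C.max() * 1.1)

    plt.legend()
    plt.show()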

RGBDToolkit - DSLR + DEPTH Filmmaking | Home.

Google launches Glass Dev Kit preview, shows off augmented reality apps. Today Google launched the Glass Development Kit (GDK) "Sneak Preview," which will finally allow developers to make real, native apps for Google Glass. While there have previously been extremely limited Glass apps that used the Mirror API, developers now have full access to the hardware. Google Glass runs a heavily skinned version of Android 4.0.4, so Glass development is very similar to Android development. The GDK is downloaded through the Android SDK Manager, and Glass is just another target device in the Eclipse plugin.

Developers have access to the Glass voice recognition within their app as an intent, but it looks like only Google can add "OK, Glass" commands to the main voice menu. Apps can be totally offline and can do all their processing on Glass. They can also support background events and have full access to the camera and other hardware. Update: Documentation for the GDK has come out, and any developer can add a voice trigger to the "Ok Glass" menu.

Why facial recognition failed. It’s a staple of TV dramas — the photograph of a suspect is plugged into a law enforcement database, and a few minutes later: presto! We have a match! Facial recognition for the win! Except the magic didn’t work in the case of the Boston bombers, according to Boston law enforcement authorities. The surveillance society did a face plant. What happened? I called up Carnegie Mellon computer scientist Alessandro Acquisti, an expert in online privacy who has conducted some provocative research involving facial recognition. In a series of experiments, Acquisti and his fellow researchers were able to use “off-the-shelf” facial recognition software to identify individuals by comparing photos from a dating site, or taken offline with camera phones, with photos uploaded to Facebook. Acquisti explained to Salon what he thinks might have been happening Monday morning. The first issue is image quality. The third issue — which I don’t think played a role here — is computational cost.

What’s going to change? Pourquoi on reconnaît ses amis dans la foule, même quand on ne voit pas leur visage. Vous êtes dans la rue, et au loin, mais vraiment loin, tellement loin que vous ne pouvez pas distinguer son visage, vous reconnaissez un(e) de vos ami(e)s. Mais, si vous n’avez pas vu sa tête, comment donc avez-vous fait pour savoir qui c’était? Grâce à sa morphologie, expliquent les psychologues de l’université de Dallas-Texas, qui publient leur étude dans Psychological Science.

To reach this result, they showed participants photos in which the faces were hard to make out. As in the game of Memory, participants had to match the images showing the same people, explains the Pacific Standard; these were images that facial recognition software had failed to identify. The study found that participants did better when they could "see the bodies of the people in the photographs," which was not the case "when they could only see their faces."

Real-Time Adaptive 3D Face Tracking and Eye Gaze Estimation.
BlackHat USA 2011: Faces Of Facebook - Or, How The Largest Real ID Database In The World Came To Be.

Hands-on with Google's latest acquisition: Flutter, a webcam gesture app. A company called Flutter has just announced that it has been purchased by Google. Flutter is a simple Windows and Mac OS X app that lets you control popular media players through a webcam. Just put your hand up to stop media playback, or point your thumb right for "next" and left for "previous." It seems that few people had heard of Flutter (yours truly included) until Google took the company under its wing, but luckily the app is still available for download, so we snagged it and gave it a quick test. The app works fantastically well, and hand gesture detection is near-instant. It works with iTunes, Spotify, Rdio, VLC, Keynote, Winamp, Windows Media Player, and, with a Chrome extension, YouTube, Netflix, Pandora, and Grooveshark.

Considering the length of that compatibility list, we suspect it's converting your hand signals into the standard media controls that adorn many keyboards. The next question is: what will Google do with a webcam gesture app?

Apple patents eye-tracking 3D technology for iPhone, iPad, or Mac | Apple. 3D is among the least-liked technologies for many geeks. Sure, it can be cool to add an extra dimension to your content, but it has been pushed so clumsily by TV and gadget manufacturers that it feels more like a forced gimmick than an exciting new way of interacting with content.

The eye strain it often induces doesn't exactly help either. With that in mind, you may want to take this with a few extra grains of salt. Apple has patented a method of presenting 3D content on its devices that wouldn't require glasses or even a stereoscopic display. It uses various sensors already in the devices to determine the device's position in space relative to your eyes, and adjusts the image accordingly. We've seen similar approaches to 3D before, but this takes it further.
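The core idea is head-coupled perspective: as the estimated eye position moves relative to the screen, on-screen layers shift in proportion to their depth. The sketch below is purely illustrative; the patent does not publish an algorithm, and the function name, parameters, and values here are invented for the example.

    # Illustrative head-coupled parallax: nearer layers shift more as the
    # viewer's eyes move off-center. Not Apple's algorithm; values invented.
    def parallax_offsets(eye_dx, eye_dy, layer_depths, strength=30.0):
        """eye_dx, eye_dy: normalized eye offset from screen center (-1..1).
        layer_depths: 0.0 = distant background, 1.0 = nearest layer.
        Returns an (x, y) pixel shift for each layer."""
        return [(-eye_dx * depth * strength, -eye_dy * depth * strength)
                for depth in layer_depths]

    # Example: eyes slightly right of and above center, three layers.
    print(parallax_offsets(0.2, -0.1, [0.0, 0.5, 1.0]))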

There's actually a demonstration app (called i3D) in the App Store that shows off a similar approach to 3D. One of the coolest parts of Apple's patent is that the 3D would be activated with a 3D gesture.

Hockey Fans to Test Facial Recognition Technology.

Adding An Eye-Tracker To An Android. Last April, Denmark-based start-up The Eye Tribe demonstrated prototype eye-tracking technology for mobile devices. Its system bounces infrared light off the user's pupils; that's not particularly new. The Eye Tribe's twist is using a device's existing processors to process the tracking data. This month, the company began taking orders for a US $99 developer kit; the company hopes that the kit will turn out to be a holiday 2013 stocking stuffer for the developer in your life.

For 2013, the kit will only be available for Windows tablets (photo above); the Android kit, the company says, will come in early 2014 (photo below). The company doesn't expect to see its infrared attachment hanging off every mobile device; rather, by getting developers working with its technology now, it plans to have a head start when device manufacturers decide to build infrared systems into their products. I tried out a prototype of the technology in April (see video above).

Why facial recognition tech failed in the Boston bombing manhunt.

In the last decade, the US government has made a big investment in facial recognition technology. The Department of Homeland Security paid out hundreds of millions of dollars in grants to state and local governments to build facial recognition databases—pulling photos from drivers' licenses and other identification to create a massive library of residents, all in the name of anti-terrorism. In New York, the Port Authority is installing a "defense grade" computer-driven surveillance system around the World Trade Center site to automatically catch potential terrorists through a network of hundreds of digital eyes.

But then an act of terror happened in Boston on April 15. Alleged perpetrators Dzhokhar and Tamerlan Tsarnaev were both in the database. Despite having an array of photos of the suspects, the system couldn't come up with a match. For people who understand how facial recognition works, this comes as no surprise. Not yet. Face detection and enhancement; matching and classification.

Download - myEye Project.
Opengazer: open-source gaze tracker for ordinary webcams.
OpenEyes - Eye tracking for the masses.

Free Eye Tracker API for Eye Tracking Integration. The S2 Eye Tracker supports an open-standard eye-gaze interface. The interface uses TCP/IP for data communication and XML for data structures. Our vision is to see this API adopted by many eye tracker developers, providing application developers a standardized interface to eye-gaze hardware.

For now, this easy-to-use, free eye tracker API provides a simple way to interface with the S2 Eye Tracker. The S2 Eye Tracker API requires no software download whatsoever. Download: Open Eye-gaze API Version 1.0. The S2 Eye Tracker API is extremely flexible; currently, we have example source code for C, C++/MFC, C++/CLI, C#, Python, and MATLAB. The S2 Eye Tracker (and its older S1 predecessor) works smoothly with MATLAB.

Inexpensive or Free Head & Eye Tracking.
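Since the vendor lists Python among its examples, a client for a TCP/XML gaze interface of this kind might look roughly like the sketch below. The host, port, request string, and XML attribute names are assumptions made for illustration, not the documented S2 protocol; consult the Open Eye-gaze API specification for the real message formats.

    # Hypothetical client for a TCP/XML eye-gaze interface; the port, the
    # request message, and the REC/FPOGX/FPOGY names are illustrative only.
    import socket
    import xml.etree.ElementTree as ET

    HOST, PORT = "127.0.0.1", 4242             # assumed tracker server address

    with socket.create_connection((HOST, PORT)) as sock:
        # Ask the server to start streaming fixation data (assumed request).
        sock.sendall(b'<SET ID="ENABLE_SEND_POG_FIX" STATE="1" />\r\n')
        buf = b""
        while True:
            data = sock.recv(4096)
            if not data:
                break                          # server closed the connection
            buf += data
            while b"\r\n" in buf:
                line, buf = buf.split(b"\r\n", 1)
                rec = ET.fromstring(line)      # e.g. <REC FPOGX="0.5" FPOGY="0.4" />
                if rec.tag == "REC":
                    x = float(rec.get("FPOGX", "0"))
                    y = float(rec.get("FPOGY", "0"))
                    print(f"gaze point: ({x:.3f}, {y:.3f})")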