Computer learning to read lips to detect emotions Open the pod bay doors, HAL. Scientists in Malaysia are teaching a computer to interpret human emotions from lip patterns. The system could improve the way we interact with computers and perhaps allow disabled people to use computer-based communication devices, such as voice synthesizers, more effectively and more efficiently, says Karthigayan Muthukaruppan of Manipal International University. The system uses a genetic algorithm that improves with each iteration to match irregular ellipse-fitting equations to the shape of a human mouth displaying different emotions. The researchers used photos of individuals from Southeast Asia and Japan to train the computer to recognize the six commonly accepted human emotions (happiness, sadness, fear, anger, disgust, surprise) and a neutral expression. The algorithm analyzes the upper and lower lips as two separate ellipses. No word on whether the system will be deployed on a manned mission to Mars.
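The article doesn't publish the researchers' code, but the core idea, evolving ellipse parameters generation by generation until they fit a set of contour points, can be sketched with a toy genetic algorithm. Everything below (function names, population size, mutation scale) is an illustrative assumption, not the published method:

```python
import math
import random

def ellipse_error(params, points):
    """Mean squared deviation of points from the ellipse
    ((x-cx)/a)^2 + ((y-cy)/b)^2 = 1."""
    cx, cy, a, b = params
    err = 0.0
    for x, y in points:
        v = ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2
        err += (v - 1.0) ** 2
    return err / len(points)

def fit_ellipse_ga(points, generations=200, pop_size=50, seed=0):
    """Evolve ellipse parameters (cx, cy, a, b) toward the point set."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1), rng.uniform(-1, 1),
            rng.uniform(0.5, 3), rng.uniform(0.5, 3)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: ellipse_error(p, points))
        survivors = pop[: pop_size // 5]  # elitism: keep the fittest 20%
        children = []
        while len(survivors) + len(children) < pop_size:
            parent = rng.choice(survivors)
            child = [g + rng.gauss(0, 0.1) for g in parent]  # mutate each gene
            child[2] = max(child[2], 0.1)  # keep both axes positive
            child[3] = max(child[3], 0.1)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: ellipse_error(p, points))

# Synthetic "lip contour": points sampled from a known ellipse (a=2, b=1)
pts = [(2 * math.cos(t), math.sin(t))
       for t in (i * 2 * math.pi / 20 for i in range(20))]
best = fit_ellipse_ga(pts)
```

In the paper's setup each lip would contribute its own point set and its own fitted ellipse; the sketch above fits just one.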
The Most Manipulative Use of Kinect Imaginable The good people of GeekWire spotted a patent application from Microsoft that envisions using Kinect to figure out your mood and target ads at you accordingly. The application, filed back when Kinect was rather new (in December of 2010), was made public this week. (It’s not the first Microsoft patent application expressing an interest in tracking users’ moods.) How exactly would it work? The idea is that Kinect’s motion and facial recognition technology could figure out whether you’re sad or happy, and serve up ads that jibe with your mood. “If the user on the videos or images from the webcams is dancing, the advertisement engine may assign a positive emotional state, such as, glad or happy, to the user…If the user on the videos or images from the computing device, e.g., Microsoft Kinect, is screaming, the advertisement engine may assign a negative emotional state, such as, upset, to the user.” Good thinking, Kinect! Then again, advertisements are an inescapable feature of modern life.
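The quoted passage reads like a two-step lookup: observed behavior maps to an inferred mood, which in turn selects an ad pool. A minimal sketch of that flow, with made-up action names and ad pools (nothing below comes from the actual patent beyond the dancing/screaming examples):

```python
# Hypothetical mapping from a detected action to an inferred mood,
# loosely following the behavior the patent excerpt describes.
ACTION_TO_MOOD = {
    "dancing": "happy",    # patent example: dancing -> positive state
    "screaming": "upset",  # patent example: screaming -> negative state
}

# Made-up ad pools keyed by mood; the patent does not specify these.
AD_POOLS = {
    "happy": ["concert tickets", "travel deals"],
    "upset": ["comfort food", "relaxation apps"],
    "neutral": ["general retail"],
}

def pick_ad(detected_action: str) -> tuple[str, str]:
    """Infer a mood from the detected action, then serve the first ad
    in that mood's pool (a stand-in for a real ranking step)."""
    mood = ACTION_TO_MOOD.get(detected_action, "neutral")
    return mood, AD_POOLS[mood][0]

print(pick_ad("dancing"))  # ('happy', 'concert tickets')
```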
Qualcomm shows off the Snapdragon S4’s potential on video Chipmaker Qualcomm has just posted two videos highlighting the performance of its Snapdragon S4 and Snapdragon S4 Prime chips. A new variant of the Snapdragon architecture, the S4 Prime is a chip dedicated to the connected-TV world, as the accompanying video illustrates. It runs through the chip’s gaming, multimedia, and social capabilities, with Kinect-style facial recognition and advanced motion detection at the center of it all. The second video gives a clearer idea of the Snapdragon S4’s power in a Windows 8 RT environment, showing a multiplayer game played across the globe between two Qualcomm test devices.
Leap Motion gesture control technology hands-on Leap Motion unveiled its new gesture control technology earlier this week, along with videos showing the system tracking ten fingers with ease and a single digit slicing and dicing a grocery store’s worth of produce in Fruit Ninja. Still, doubts persisted as to the veracity of the claim that the Leap is 200 times more accurate than existing tech. So, we decided to head up to San Francisco to talk with the men behind Leap, David Holz and Michael Buckwald, and see it for ourselves. Before diving into the more technical details of the device he created, Holz told us about the genesis of his idea to create a better way for humans to interact with their computational devices. We asked both Holz and Buckwald about the underlying technology that enables such high-fidelity controls, and were told that it’s an optical system that tracks your fingers with infrared LEDs and cameras in a way unlike any other motion control tech.