Perception


Carnegie Mellon computer searches web 24/7 to analyze images and teach itself common sense

(Credit: Carnegie Mellon University) A computer program called the Never Ending Image Learner (NEIL) is now running 24 hours a day at Carnegie Mellon University, searching the Web for images, doing its best to understand them.


And as it builds a growing visual database, it is gathering common sense on a massive scale. NEIL leverages recent advances in computer vision that enable computer programs to identify and label objects in images, to characterize scenes and to recognize attributes, such as colors, lighting and materials, all with a minimum of human supervision.

In turn, the data it generates will further enhance the ability of computers to understand the visual world. But NEIL also makes associations between these things to obtain common-sense information: cars are often found on roads, buildings tend to be vertical, and ducks look sort of like geese.

Drum (percussion) objects identified by NEIL (credit: CMU)

You can view NEIL's findings at the project website, or help train it.
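NEIL's own pipeline isn't reproduced here, but the association step it describes (noticing which labels keep showing up together across images) can be pictured as simple co-occurrence counting. A minimal sketch, assuming per-image label sets are already available; the labels and the scoring threshold below are illustrative, not NEIL's actual data or method:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-image label sets, standing in for detections NEIL produces.
image_labels = [
    {"car", "road", "building"},
    {"car", "road"},
    {"duck", "water"},
    {"goose", "water"},
]

pair_counts = Counter()
label_counts = Counter()

for labels in image_labels:
    label_counts.update(labels)
    # Count every unordered pair of labels that co-occur in one image.
    pair_counts.update(frozenset(p) for p in combinations(sorted(labels), 2))

# Report pairs that co-occur often relative to how common each label is.
for pair, n in pair_counts.items():
    a, b = tuple(pair)
    score = n / min(label_counts[a], label_counts[b])
    if score >= 0.5:
        print(f"{a} often appears with {b} (score {score:.2f})")
```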

Microsoft Research demos Project Adam machine-learning object-recognition software

(Credit: Microsoft Research) Microsoft Research introduced "Project Adam," AI machine-learning object-recognition software, at its 2014 Microsoft Research Faculty Summit. The goal of Project Adam is to enable software to visually recognize any object — an ambitious project, given the immense neural network in the human brain that makes those kinds of associations possible through trillions of connections. Project Adam generated a massive dataset of 14 million images from the Web and sites such as Flickr, made up of more than 22,000 categories drawn from user-generated tags. That data was used to train a neural network of more than two billion connections, using 30 times fewer machines than other systems. The researchers did a live demo of dog-breed detection integrated into Project Adam's technology, pointing to two dogs and asking, "Cortana, what dog breed is this?"
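Microsoft has not released Project Adam's code, but the recipe described here (a deep network trained on millions of tag-labeled web images across tens of thousands of categories) follows the general shape of a standard image-classification training loop. A hedged sketch in PyTorch; the ResNet-50 architecture, folder path and hyperparameters are stand-ins, not Project Adam's actual system:

```python
import torch
from torch import nn
from torchvision import datasets, transforms, models

# Illustrative values only; Project Adam's real dataset had ~14M images
# across ~22,000 categories drawn from user-generated tags.
NUM_CLASSES = 22_000
DATA_DIR = "web_images/"   # hypothetical folder of tag-labeled images

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder(DATA_DIR, transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=256, shuffle=True)

model = models.resnet50(num_classes=NUM_CLASSES)  # stand-in architecture
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# One pass over the data; a real system distributes this across many machines.
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```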

Could Google Glass Track Your Emotional Response to Ads?

The head-mounted device, presumably Google Glass, would communicate with a server, relaying information about pupil dilation captured by eye-tracking cameras.


The system could store "an emotional state indication associated with one or more of the identified items" in an external scene, according to the patent. Though the patent specifies that "personal identifying data may be removed from the data and provided to the advertisers as anonymous analytics" in an opt-out system, the idea is to charge advertisers, using a new pay-per-gaze metric, when users view ads online, on billboards, in magazines and newspapers, and in other types of media.

"Thus, the gaze tracking system described herein offers a mechanism to track and bill offline advertisements in the manner similar to popular online advertisement schemes," the patent states. The fees could scale depending on the duration viewed as well as the inferred emotional state. View the patent below: [Images: USPTO, Flickr user Giuseppe Costantino]

Gesture recognition

Individual recognition

First Google Glass use for real-time location of where multiple viewers are looking

“What you are seeing here is the first use of Google Glass and another device — an Android phone in this case — for real-time focus between multiple users,” CrowdOptic CEO Jon Fisher explained to KurzweilAI in an exclusive interview.


Left: two users (red dots) are looking at the Transamerica building ("22") in San Francisco. Their intersecting lines of view form an instant cluster (right) of two members, which could be displayed on their devices — Google Glass, in this example. If they are Facebook friends, both might get an implicit "like" — to share photos, for example. (Credit: CrowdOptic) “This is real-time triangulation from GPS and compass data on both devices, using a [forthcoming] app that locates the common point of focus,” he said.
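CrowdOptic has not published its algorithm, but the geometry it describes (two devices, each with a GPS position and a compass bearing, whose lines of sight cross at a shared focus point) reduces to intersecting two rays. A minimal flat-earth sketch, assuming positions have already been projected to local east/north meters:

```python
import math

def focus_point(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two lines of sight.

    p1, p2: (east, north) positions in local meters (e.g. projected from GPS).
    bearings: compass headings in degrees (0 = north, 90 = east).
    Returns the (east, north) intersection, or None if the lines are parallel.
    """
    # Convert compass bearings to unit direction vectors.
    d1 = (math.sin(math.radians(bearing1_deg)), math.cos(math.radians(bearing1_deg)))
    d2 = (math.sin(math.radians(bearing2_deg)), math.cos(math.radians(bearing2_deg)))

    # Solve p1 + t1*d1 == p2 + t2*d2 for t1 using the 2D cross product.
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(cross) < 1e-9:
        return None  # parallel lines of sight, no single focus point
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / cross
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two viewers 100 m apart, both looking toward the same landmark:
print(focus_point((0, 0), 45, (100, 0), 315))  # -> roughly (50.0, 50.0)
```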

CrowdOptic analyzes where people point their electronic devices to identify activity hot spots, engage users with contextual applications, and curate social media content.

How CrowdOptic technology works (Credit: CrowdOptic)

This Robo-Nose Can Smell Better Than You. A new digital camera that takes its cue from dragonfly eyes.

Joggobot: The Flying Quadrocopter Jogging Companion

It’s not always easy staying motivated when you train alone. This is one of the reasons why I find the Joggobot interesting. It’s kind of like a robotic coach that follows you around as you run. Based on the AR.Drone, the hovering Joggobot quadrotor will attract stares from fellow runners, but it certainly makes it easier to keep pace if your smartphone or GPS isn’t enough to keep you motivated. You just need to wear a t-shirt with a specially designed color pattern and the drone will follow you around. I hope that in the future you can equip it with some tasers so that you’ll have no choice but to keep running. [via New Scientist via DVice]
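The Joggobot's actual controller builds on the AR.Drone's onboard camera and tag tracking, but the general idea of following a colored marker can be sketched with OpenCV-style color segmentation. A rough sketch; the HSV range and the steering print below are illustrative, not the Joggobot's code:

```python
import cv2
import numpy as np

# Illustrative HSV range for a brightly colored marker (e.g. an orange patch).
LOWER = np.array([5, 120, 120])
UPPER = np.array([20, 255, 255])

def marker_offset(frame):
    """Return the horizontal offset of the colored marker from image center,
    or None if the marker is not visible."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    moments = cv2.moments(mask)
    if moments["m00"] < 1e3:              # marker not found / too small
        return None
    cx = moments["m10"] / moments["m00"]  # centroid x of the marker
    return cx - frame.shape[1] / 2        # > 0 means the marker is to the right

cap = cv2.VideoCapture(0)  # stand-in for the drone's video feed
ok, frame = cap.read()
if ok:
    offset = marker_offset(frame)
    if offset is not None:
        # A real controller would translate this offset into yaw/roll commands.
        print("steer", "right" if offset > 0 else "left", abs(offset))
cap.release()
```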

10 Sensor Innovations Driving the Digital Health Revolution

This year IBM dedicated its Five in Five series (an annual list of five technologies that are likely to advance dramatically) solely to sensors. Digital sensors for touch, sight, hearing, taste and smell, along with their potential, are all profiled by IBM. Sensor technology is going through a renaissance as companies develop smart and innovative new ways to track data with it. Sensor innovation is in part driving the Digital Health Revolution, as digital health companies find ingenious ways to integrate sensors into apps, devices and other peripherals.

Smartphones will play an increasingly important role in all of this as they go from six built-in sensors today to sixteen within the next five years. If these predictions are correct, the next five years will be half a decade of sensor proliferation, and the Digital Health Ecosystem will grow exponentially. (The ten innovations are presented as an image gallery; surviving credits include Rock Health, MC10 and First Warning Systems.)

Google Glass Advances with Superimposed Controls & More

Google's Patent Background: Wearable systems can integrate various elements, such as miniaturized computers, input devices, sensors, detectors, image displays, wireless communication devices, as well as image and audio processors, into a device that can be worn by a user.

Such devices provide a mobile and lightweight solution to communicating, computing and interacting with one's environment. With the advance of technologies associated with wearable systems and miniaturized optical elements, it has become possible to consider wearable compact optical displays that augment the wearer's experience of the real world. By placing an image display element close to the wearer's eye(s), an artificial image can be made to overlay the wearer's view of the real world.

Such image display elements are incorporated into systems also referred to as "near-eye displays," "head-mounted displays" (HMDs) or "heads-up displays" (HUDs). In a first aspect of the patent, a method is provided.

AOptix Lands DoD Contract To Turn Smartphones Into Biometric Data-Gathering Tools

Smartphones may be invading pockets and purses across the world, but AOptix may soon bring those mobile devices to some far-flung war zones. The Campbell, Calif.-based company announced earlier today that it (along with government-centric IT partner CACI) nabbed a $3 million research contract from the U.S. Department of Defense to bring its “Smart Mobile Identity” concept to fruition. The company kept coy about what that actually means in its release, but Wired has the full story — the big goal is to create an accessory capable of attaching to a commercially available smartphone that can capture high-quality biometric data — think a subject’s thumb prints, face/eye scans, and voice recordings. At first glance, it doesn’t sound like that tall an order — smartphones are substantially more powerful than they were just a few years ago, and that’s a trend that isn’t going to be bucked anytime soon.

Patent Home (design, inventions and creative business opportunities) » A safe and sturdy airship-like aircraft.

Micro Drones Now Buzzing Around Afghanistan

British soldiers are testing out tiny 4×1-inch mini surveillance drones throughout Afghanistan. “We used it to look for insurgent firing points and check out exposed areas of the ground before crossing, which is a real asset,” said Sgt. Christopher Petherbridge. The toy-looking Black Hornet Nano can fly for up to 30 minutes, has a top speed of 22 mph, and has a range of about half a mile. U.K.-based Marlborough Communications landed a £20M ($31M) contract to build a large hive of 160 buzzing helicopters. The drones were originally developed by Prox Dynamics for search and rescue missions and can be flown automatically using pre-programmed GPS coordinates. Now, I’m not a fortune teller, but something tells me a version of the Black Hornet Nano is going to be the hottest Christmas toy sometime in the near future.

Leap Motion Augmented Reality Demo. This Samsung HD DVR surveillance system records security video in high definition.

PanaCast, a camera offering HD streaming with a 200-degree field of view. Sony: a sound-based "Kinect" - 12/27/2012.

Robot Dragonfly - Gaming & Photography

Watch a short trailer at www.TechJect.com. Our prototypes have gone through multiple design cycles. We'll be offering a number of apps that users can download from Google Play and the App Store to perform pre-defined operations like indoor mapping, automated patrolling and more. If you're an entrepreneur, you can literally kickstart your own next-gen application market using our Software Development Kit (SDK). If you are a researcher or a hobbyist, this is one of the most versatile and compact platforms for your research. The Dragonflies are indistinguishable from an insect in the environment. With your help we can release these remarkable spy drones and let you experience the next level in amazing robotic applications. What is the TechJect Dragonfly? The TechJect Dragonfly is a wifi-enabled, super-small, smart and energy-efficient robotic insect; it can do amazing aerial photography, aerobatic maneuvers for gaming, autonomous patrolling for security and surveillance, and much more.

Microsoft Invents Smart Walls for Next-Gen Homes & Offices

Microsoft has been working with large-scale multi-touch displays for some time now.


Their PixelSense projects, once under the Surface branding, involve large-scale interactive tables for the home and office. According to a new patent application that we recently discovered, it now appears that Microsoft has its eye on being the first to bring touch and haptics technology to future smart homes in the form of smart walls. First-generation smart walls will allow users to interact with lighting and other controls that are virtual, meaning they'll only appear when needed, leaving the walls unblemished when not in use. These new controls will deliver higher-end haptics able to give users a true sense of touching and controlling them. Doesn't this sound like something you'd expect on a future tablet?

Video: Taking a Peek at One Aspect of Future Smart Walls. This interactive brain will answer all your questions!

Samsung Invents Air-Gesturing Controls for Tablets & Beyond

A recently published Samsung patent application has revealed a new air-gesturing invention that could supplement or eliminate the need to actually touch a display in order to control its functionality. When using Samsung's air-gesturing techniques in conjunction with a Samsung HDTV, for example, the idea is simply to eliminate the need for a physical remote control.

In the case of tablet computers, the user will have the option of turning on air-gesture functionality full time or only for specific applications. Samsung's air-gesturing capabilities are made possible by one or more specialized cameras that incorporate ultrasonic signals and/or specialized motion sensors. The signals and/or sensors are able to create a virtual screen area well above the surface of the tablet display, as illustrated in our cover graphic.

Why Apple Might Have a Hard Time Keeping Up With Google Maps.

FBI launches $1 billion face recognition project - tech - 07 September 2012

The Next Generation Identification programme will include a nationwide database of criminal faces and other biometrics. "FACE recognition is 'now'," declared Alessandro Acquisti of Carnegie Mellon University in Pittsburgh in testimony before the US Senate in July. It certainly seems that way. As part of an update to the national fingerprint database, the FBI has begun rolling out facial recognition to identify criminals.

It will form part of the bureau's long-awaited, $1 billion Next Generation Identification (NGI) programme, which will also add biometrics such as iris scans, DNA analysis and voice identification to the toolkit. A handful of states began uploading their photos as part of a pilot programme this February, and the system is expected to be rolled out nationwide by 2014. Another application would be the reverse: images of a person of interest from security cameras or public photos uploaded onto the internet could be compared against a national repository of images held by the FBI.

Keystroke dynamics, a promising biometric tool. Jobs: type on your keyboard and I'll tell you who you are. Throwable Panoramic Ball Camera // Jonas Pfeil.

Face.com Brings Facial Recognition to the Masses, Now with Age Detection: Interview With CEO

Face.com's API now returns an age estimation for faces it detects in photos, seen here with some recognizable examples.

Looking at someone’s face can tell you a lot about who they are. Running a picture through Face.com‘s systems lets you turn those instincts into cold, hard data. The Israel-based company has made a name for itself over the past few years by providing some of the best facial recognition technology available on the web. To date, developers all over the world have used its API to find nearly 41 billion faces! More than just detecting and identifying faces, the Face.com API provides all sorts of useful data: gender, presence of a smile, approximate mood, etc. Instead of trying to understand what makes someone look a certain age, Face.com simply lets its program figure it out on its own.
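Face.com's real response schema isn't reproduced here, but the per-face attributes the article lists (gender, smile, mood, estimated age) typically come back to developers as JSON. A hypothetical sketch of consuming such a payload; the field names are invented for illustration and are not Face.com's actual API:

```python
import json

# Hypothetical JSON payload for one detected face; field names are illustrative.
response = json.loads("""
{
  "faces": [
    {"gender": "male", "smiling": true, "mood": "happy",
     "age_estimate": 47, "confidence": 0.91}
  ]
}
""")

for face in response["faces"]:
    print(f"gender={face['gender']}, smiling={face['smiling']}, "
          f"mood={face['mood']}, age~{face['age_estimate']} "
          f"(confidence {face['confidence']:.0%})")
```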

Face.com does a pretty good estimation for this pic of Brad Pitt. In the end, though, the only really important question to ask about Face.com’s age detection is “does it work?”

New Surveillance System Identifies Your Face By Searching Through 36 Million Images Per Second

When it comes to surveillance, your face may now be your biggest liability. Privacy advocates, brace yourselves: the search capability of the latest surveillance technology is nightmare fuel. Hitachi Kokusai Electric recently demonstrated a surveillance camera system capable of searching through 36 million images per second to match a person’s face, whether taken from a mobile phone or captured by a surveillance camera. While the minimum resolution required for a match is 40 x 40 pixels, the facial recognition software tolerates variance in the position of the person’s head, such that someone can be turned away from the camera horizontally or vertically by up to 30 degrees and it can still make a match.
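Hitachi has not disclosed the system's internals, but real-time search over tens of millions of faces is usually done by reducing each face to a compact feature vector and comparing vectors rather than pixels. A rough sketch of that idea with NumPy, using random placeholder embeddings instead of real face features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder database: a real system would store one compact embedding per
# enrolled face; we use 100k random 128-d vectors to keep the demo small.
db = rng.standard_normal((100_000, 128)).astype(np.float32)
db /= np.linalg.norm(db, axis=1, keepdims=True)   # unit-normalize rows

def top_matches(query, k=5):
    """Return indices of the k most similar faces by cosine similarity."""
    q = query / np.linalg.norm(query)
    scores = db @ q                     # one matrix-vector product over the database
    return np.argsort(scores)[::-1][:k]

query = rng.standard_normal(128).astype(np.float32)
print(top_matches(query))
```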

Furthermore, the software identifies faces in surveillance video as it is recorded, meaning that users can immediately watch the footage recorded before and after a given point in time. The power of the search capability is in the algorithms that group similar faces together. [Media: YouTube]

Face.com: an API that detects age. This Futuristic Camera Can See Around Corners Using Lasers. Got Lazy Fingers? Now You Can Play Pong With Your Eyes. You Can Crash This RC Helicopter as Many Times as You Want.

Body state recognition

Watch This: POV Aerial Shots Taken with $12k Copter. MIT robot plane deletes the pilot. Flying Robotic Swarm of Nano Quadrotors Gets Millions of Views, New Company. Festo BionicOpter Robot Dragonfly Makes Quadcopters Look Clumsy. Seed drone Samarai swarms will dominate the skies [Video]. Quadcopters to the rescue of the police. Micro-drones gain responsiveness thanks to MIT. LA100: a drone that can automatically take photos from above.

Awareness and Omniscience

Behavior recognition