Eye Tracking Research and Human-Computer Interaction

Eye tracking has long been used to analyze user behavior and user-interface usability across a wide range of human-computer interaction (HCI) research and practice, and it also serves as an actual control medium in a human-computer dialogue.

Eye tracking to analyze user behavior and usability

When analyzing user behavior and usability, users' eye movements during interaction with a system are recorded and later analyzed. Eye movements provide objective data on the physiological and perceptual impact of interaction. Eye-tracking measures are seldom used in isolation, but together with other physiological measures and qualitative methods. Eye tracking is commonly used to test the usability of websites, software, computer games, interactive TV, digital map interfaces, mobile devices and other physical devices. Below is a heatmap from the interactive TV format The Space Trainees, which the iDTV Lab at Åbo Academy in Finland tested using eye tracking.
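Heatmaps like the one described are typically built by accumulating recorded fixations into a spatial density map. A minimal sketch of that accumulation, assuming fixation data arrives as (x, y, duration) tuples; this is an illustration, not the iDTV Lab's actual tooling:

```python
import numpy as np

def gaze_heatmap(fixations, width, height, sigma=30.0):
    """Accumulate a Gaussian 'attention' blob for each fixation.

    fixations: iterable of (x, y, duration_ms) tuples.
    Returns a (height, width) array; higher values mean more
    (and longer) fixations landed nearby.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width))
    for x, y, dur in fixations:
        # Each fixation contributes a blob weighted by how long it lasted.
        heat += dur * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return heat

# Hypothetical data: three fixations clustered on one UI element,
# plus one stray glance elsewhere on a 200x150 screen.
fix = [(50, 40, 300), (55, 42, 250), (48, 38, 400), (160, 120, 100)]
hm = gaze_heatmap(fix, width=200, height=150)
peak_y, peak_x = np.unravel_index(hm.argmax(), hm.shape)
```

The hottest cell lands near the fixation cluster, which is exactly what a usability analyst reads off such a map: where attention pooled, and where it never went.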
Invention Awards: A Real-Life Babel Fish For the Speaking Impaired

The Audeo captures electrical signals between the brain and vocal cords and synthesizes clear, spoken words.

By Lisa Katayama | Posted 05.20.2009 at 10:53 am

Photo caption: Neck Talk. Electrodes on the throat pick up electrical signals between the brain and the vocal cords. John B. Carnett

Today's featured Invention Awards winner is the Audeo, a voice synthesizer that restores the ability to speak to those with vocal-cord or neurological damage. When inventor Michael Callahan was 17, he lost his short-term memory after hitting his head in a skateboarding accident. Callahan started working on the Audeo at the University of Illinois, studying everything he could about signal processing and neuroscience. When we speak, three basic things happen: the lungs deliver air, the vocal cords vibrate to create sound, and the mouth moves. Here's how the Audeo works: three pill-size electrodes on the throat pick up electrical signals generated between the brain and the vocal cords. The technology still has room to improve.
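The article doesn't publish the Audeo's algorithms, but the first step in any such throat-electrode system is deciding when the raw trace contains intended activity at all. A minimal, hypothetical sketch: rectify and smooth a single-electrode trace into an activation envelope, then threshold it (the window size and threshold here are illustrative, not Audeo parameters):

```python
import math

def rms_envelope(signal, window=20):
    """Sliding root-mean-square: rectifies and smooths a raw
    electrode trace into an activation envelope."""
    return [
        math.sqrt(sum(s * s for s in signal[i:i + window]) / window)
        for i in range(len(signal) - window + 1)
    ]

def detect_intent(envelope, threshold=0.5):
    """True wherever activation exceeds a (per-user calibrated) threshold."""
    return [e > threshold for e in envelope]

# Synthetic trace: 100 quiet samples of low-level noise,
# then a 100-sample burst of strong activity.
quiet = [0.05 * math.sin(0.3 * t) for t in range(100)]
burst = [1.0 * math.sin(0.3 * t) for t in range(100)]
env = rms_envelope(quiet + burst)
intent = detect_intent(env)
```

In a real system this gating step would feed a classifier that maps the active segments to intended phonemes or words; here it only shows why the electrodes can distinguish "trying to speak" from silence.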
LED Lights Make Augmented Vision a Reality | Elemental LEDucation

Okay, this is just freaky. We know LED lights are versatile enough to be used for practically anything, but LED contact lenses? Really?! Once miniature green LEDs are developed (and they're in the works as of now), full-color displays will be possible. Lead researcher Babak Parvis comments, "You won't necessarily have to shift your focus to see the image generated by the contact lens"; it would just appear in front of you, and your view of the real world would be completely unobstructed when the display is turned off. Ah, the real world. Thanks to Extreme Tech for the quote and Trendhunter for the images. By the way, these freaky LED contact lenses may still be a product of the future, but a lot of cool LED products are of the present!
Brain Hacking: Scientists Extract Personal Secrets With Commercial Hardware

Chalk this up to super-creepy: scientists have discovered a way to mind-read personal secrets, such as bank PINs and personal associations, using a cheap headset. Using commercial brain-wave-reading devices, often sold for hands-free gaming, the researchers found that they could identify when subjects recognized familiar objects, faces, or locations, which helped them better guess sensitive information. Security interrogators could benefit most immediately from the new brain-hacking technique, since it would reveal when suspects are actually familiar with the face of a potential accomplice. As for bank information, the scientists could guess the first digit of a PIN only 40% of the time. Brain-wave-reading devices, which allow hands-free control of computers, have become increasingly popular for entertainment, control of prosthetics for paralyzed individuals, and military applications. With refinement, the researchers expect the brain-hacking technique to become more accurate.
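The recognition response such headsets pick up is an event-related potential: a small voltage bump a few hundred milliseconds after the subject sees something meaningful, visible only once many trials are averaged so the background noise cancels. A toy sketch of that trial-averaging logic on synthetic data, not the researchers' actual analysis:

```python
import random

random.seed(0)  # reproducible synthetic "EEG"

def simulate_trial(familiar, n_samples=100):
    """One stimulus-locked trace: Gaussian noise, plus a late
    positive deflection when the subject recognizes the stimulus."""
    trace = [random.gauss(0.0, 1.0) for _ in range(n_samples)]
    if familiar:
        for t in range(60, 80):  # roughly a P300-like latency window
            trace[t] += 2.0
    return trace

def epoch_average(epochs):
    """Average single trials sample-by-sample: random noise cancels,
    while the stimulus-locked response survives."""
    n = len(epochs)
    return [sum(trial[t] for trial in epochs) / n for t in range(len(epochs[0]))]

def recognition_score(avg, window=(60, 80)):
    """Mean amplitude in the late window; a high score suggests
    the stimulus was familiar to the subject."""
    lo, hi = window
    return sum(avg[lo:hi]) / (hi - lo)

familiar_avg = epoch_average([simulate_trial(True) for _ in range(40)])
novel_avg = epoch_average([simulate_trial(False) for _ in range(40)])
```

Averaged over 40 trials, the "familiar" condition shows a clear bump in the late window while the "novel" condition stays flat, which is how a flashed PIN digit the subject recognizes can be told apart from digits they don't.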
This Amazing 3-D Desktop Was Born at Microsoft | Wired Business

SpaceTop, a 3-D desktop environment you can reach into, was shown at the TED conference today by Jinha Lee, who developed the system during and after his internship at Microsoft Applied Science. Photo: TED/Flickr

LONG BEACH, California – The history of computer revolutions will show a logical progression from the Mac to the iPad to something like this SpaceTop 3-D desktop, if computer genius Jinha Lee has anything to say about it. The Massachusetts Institute of Technology grad student earned some notice last year for the ZeroN, a levitating 3-D ball that can record and replay how a user moves it around. Now, following an internship at Microsoft Applied Science and some time off from MIT, Lee is unveiling his latest digital 3-D environment: a three-dimensional computer interface that allows a user to "reach inside" a computer screen and grab web pages, documents, and videos like real-world objects. More advanced tasks can be triggered with hand gestures.
The Wildly Ambitious Quest to Build a Mind-Controlled Exoskeleton by 2014 | Wired Science

Photo caption: The feet of a monkey-sized prototype of the exoskeleton in Nicolelis's lab at Duke. Nick Pironio/Wired

Neuroscientist Miguel Nicolelis went on The Daily Show in 2011 and told Jon Stewart that he would develop a robotic body suit that would allow paralyzed people to walk again simply by thinking about it — and he'd do it in just three or four years. It was an audacious, some might say reckless, claim. But two years later, Nicolelis insists he's on track. And he hopes to prove it in brazen fashion in front of billions of people during one of the world's most-watched events: the World Cup. The tournament, which will be held in his native Brazil, is less than 16 months away. This may sound incredible, but in recent years research on using signals from the brain to operate machines has taken great strides. Nicolelis was brimming with confidence in January when I visited his lab at Duke University to see how his work is progressing. He thinks he can do much better.
Microsoft’s Perceptive Pixel premise: The future of touch computing isn’t stuck in your pocket

When Microsoft purchased Perceptive Pixel, maker of 55- and 82-inch touchscreen displays that go by the eponymous acronym PPI, what it intended to do with the firm wasn’t completely clear. At the time, Microsoft stated the following:

"The acquisition of PPI allows us to draw on our complementary strengths, and we’re excited to accelerate this market evolution [...] PPI’s large touch displays, when combined with hardware from our OEMs, will become powerful Windows 8-based PCs and open new possibilities for productivity and collaboration."

For more context, here’s how TNW reported Microsoft CEO Steve Ballmer’s announcement of the purchase:

"According to Ballmer, Microsoft will [utilize] its research, development and production of multi-touch technologies to further upcoming software and hardware, with the company showing off its huge 82-inch touch-enabled screen at the WPC event."

And the prices are coming down. Here’s one in action with someone you’re familiar with: Windows 8.
A sensational breakthrough: the first bionic hand that can feel - News - Gadgets & Tech

The patient is an unnamed man in his 20s living in Rome who lost the lower part of his arm in an accident, said Silvestro Micera of the Ecole Polytechnique Federale de Lausanne in Switzerland. The wiring of his new bionic hand will be connected to the patient’s nervous system in the hope that he will be able to control the hand’s movements as well as receive touch signals from its skin sensors. Dr Micera said that the hand will be attached directly to the patient’s nervous system via electrodes clipped onto two of the arm’s main nerves, the median and the ulnar. This should allow the man to control the hand with his thoughts, as well as receive sensory signals to his brain from the hand’s sensors. “This is real progress, real hope for amputees. It is clear that the more sensory feeling an amputee has, the more likely you will get full acceptance of that limb,” he told the American Association for the Advancement of Science meeting in Boston.
A New Flexible Keyboard Features Clickable Buttons A very thin keyboard that uses shape-changing polymers to replicate the feel and sound of chunky, clicking buttons could be in laptops and ultrabooks next year. Strategic Polymers Sciences, the San Francisco-based company that developed the keyboard, is working on transparent coatings that would enable this feature in touch screens. Today’s portable electronics provide rudimentary tactile feedback—many cell phones can vibrate to confirm that the user has pressed a button on a touch screen, for example. These vibrations are produced by a small motor, meaning the entire phone will move rather than just the appropriate spot on the screen where the button is, and there can be a lag in response time. “It’s amazing how fast software has grown to compensate for problems with touch screens—and sometimes you still text a word that’s the opposite of what you mean,” says Christophe Ramstein, CEO of Strategic Polymers.
Adobe announces first hardware, the Project Mighty smart stylus and Napoleon ruler

Adobe has just announced its first hardware initiative: a pressure-sensitive stylus and an electronic ruler that will tightly integrate with its software applications. The company's Project Mighty stylus and Napoleon ruler have been showcased connecting to an iPad and iPhone over Bluetooth. The pen works much like existing styli, but when working alongside Napoleon, the two tools can be used to create curved and angled shapes in a way that would be difficult to do with a third-party stylus. So far, the tools have only been demonstrated working with an unreleased app, which Adobe told us was created specifically for the hardware. Both Project Mighty and Napoleon appear to be small, simple pieces with an aesthetic reminiscent of the white and silver of early iPod models. The stylus has a single button, and the ruler is marked with a series of shapes that can be switched between to alter how the pen draws.

Cloud connection comes to hardware
How Adobe Reinvented The Pen To Draw On The Internet

This week, Adobe announced that the Creative Suite was becoming the subscription-based Creative Cloud. It didn't go so well. But amidst the bad news, we may have lost sight of Adobe's rationale for pushing the cloud beyond profits. On one hand, Project Mighty is just an aluminum stylus that can replace your finger on the iPad screen. Now you have to admit, that's at least a little bit intriguing.

How It Came Together

Project Mighty and an accompanying "short ruler" codenamed Napoleon were both designed by Ammunition (and engineered by Mindtribe). "One of the goals of this was just to make a beautiful, sweet object," Ammunition founder Robert Brunner tells Co.Design. The pen has a pressure-sensitive tip, a button to reveal onscreen menus, and a glowing tip to convey modal information (designating whether you're drawing with any particular settings), and that's it. Even still, why did the team pursue a pen and ruler at all? "It's simply because they're extremely familiar," Brunner says.
Steered by thoughts, drone flies through hoops - tech - 05 June 2013

Video: Thought-controlled quadcopter drone

It was only a matter of time before it became possible to control a drone with mere thoughts. In a gymnasium in Minneapolis, Minnesota, an AR.Drone quad-rotor helicopter made by French firm Parrot has been zooming right and left, up and down, and even through hoops as its pilot merely thinks of concepts related to those directions. Bin He and colleagues at the University of Minnesota, who developed the rig, are not trying to use mind control to launch precision drone strikes; the same brain-control techniques could instead steer devices such as wheelchairs or bionic prosthetic limbs. Drones have been piloted with low-resolution, 14-electrode gaming electroencephalography (EEG) headsets before, but the Minnesota team claim a first in using an EEG headset with 64 electrodes peppered across the pilot's scalp.

Ready for prime time

Their trick was to come up with a very distinctive thought for each desired motion.

Journal reference: Journal of Neural Engineering, DOI: 10.1088/1741-2560/10/4/046003
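Once the EEG classifier has decoded which "distinctive thought" the pilot produced, steering reduces to a lookup from decoded label to a velocity command. A toy sketch of that last stage; the label names and axis conventions are invented for illustration and are not the Minnesota team's actual protocol:

```python
# Decoded mental state to velocity step (dx, dy, dz).
# Labels and axes are hypothetical, for illustration only.
COMMANDS = {
    "imagine_left_hand":  (-1, 0, 0),   # bank left
    "imagine_right_hand": (1, 0, 0),    # bank right
    "imagine_both_hands": (0, 0, 1),    # climb
    "rest":               (0, 0, 0),    # hover
}

def fly(decoded_labels, start=(0, 0, 0)):
    """Integrate one velocity step per decoded thought; unknown
    labels are treated as 'rest' so classifier noise leaves the
    drone hovering rather than drifting."""
    x, y, z = start
    for label in decoded_labels:
        dx, dy, dz = COMMANDS.get(label, (0, 0, 0))
        x, y, z = x + dx, y + dy, z + dz
    return (x, y, z)
```

Defaulting unrecognized labels to hover is a deliberate safety choice: a brain-computer interface misclassifies often enough that "do nothing" must be the failure mode.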
Contact Lens Computer: Like Google Glass, without the Glasses For those who find Google Glass indiscreet, electronic contact lenses that outfit the user’s cornea with a display may one day provide an alternative. Built by researchers at several institutions, including two research arms of Samsung, the lenses use new nanomaterials to solve some of the problems that have made contact-lens displays less than practical. A group led by Jang-Ung Park, a chemical engineer at the Ulsan National Institute of Science and Technology, mounted a light-emitting diode on an off-the-shelf soft contact lens, using a material the researchers developed: a transparent, highly conductive, and stretchy mix of graphene and silver nanowires. A handful of companies and researchers have developed electronic contact lenses over the past five years. Park wants to make contact lenses that have all the functions of a wearable computer but remain transparent and soft.