Next-Gen Interfaces, UX & HMI
Touchscreen gestures on gadgets have become second nature to many of us — so much so that even babies have tried to swipe physical magazines and books, before realizing they don't work that way. In an era when kids become intimately familiar with tablets and smartphones at a young age, designer Gabriele Meldaikyte captured today's touchscreen gestures in analog form. As shown in the video above, Meldaikyte's mixed-media exhibit reimagines the language of smartphone communication as sculptures: there's pinching, tapping, scrolling, flicking and swiping. Meldaikyte is currently completing her MA in Design Products at London's Royal College of Art. Although touchscreen gestures are common today, there could be a shift towards more intuitive modes of control, such as voice commands (e.g. Google's Project Glass).
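For readers curious what that gesture vocabulary looks like to software, here is a minimal Python sketch (my own illustration, not anything from the exhibit) of how a device might tell a tap, a swipe, and a flick apart: a tap barely moves, a swipe travels, and a flick is a swipe completed quickly. All names and thresholds are invented for illustration.

```python
# Illustrative gesture classifier: distinguishes tap / swipe / flick
# from where a touch starts, where it ends, and how long it lasted.
# Thresholds are made up; real platforms tune these per device.
from dataclasses import dataclass
import math

@dataclass
class Touch:
    x0: float; y0: float   # touch-down position (pixels)
    x1: float; y1: float   # lift-off position (pixels)
    dt: float              # duration in seconds

def classify(t: Touch, tap_radius: float = 10.0, flick_time: float = 0.2) -> str:
    dist = math.hypot(t.x1 - t.x0, t.y1 - t.y0)
    if dist < tap_radius:
        return "tap"                                  # barely moved
    return "flick" if t.dt < flick_time else "swipe"  # fast vs. slow travel

print(classify(Touch(100, 100, 103, 98, 0.08)))   # -> tap
print(classify(Touch(100, 100, 380, 110, 0.12)))  # -> flick
print(classify(Touch(100, 100, 380, 110, 0.60)))  # -> swipe
```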
Gesture recognition will make the mouse obsolete. FORTUNE -- For more than 40 years the mouse has been a blunt tool for communicating with our computers. We grasp, we click, we awkwardly move a cursor around a screen. Then, four years ago, smartphones arrived en masse, followed by touch tablets, and the communication gap between man and machine narrowed (and very young children became savvy computer users). Touch is good: we naturally communicate with our hands, so what better human quality to translate into a stream of zeros and ones for computers to process? But touch is confined to a two-dimensional plane.
Mainstream computer interfaces are tough to get right, because they have to be everything to everyone--which is impossible. Even something as "no duh" as a touch screen is going to make someone, somewhere, gripe that it's not quite right for them. Jay Silver and Eric Rosenbaum of the MIT Media Lab have come up with a solution to this problem that's so weird it just might be perfect: MaKey MaKey, a kit that lets you turn any object--food, toys, clothes, whatever--into an ultra-personalized UI.
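Part of what makes the kit so hackable is that MaKey MaKey presents itself to the computer as an ordinary USB keyboard and mouse, so no driver or SDK is required: any program that listens for key events can respond to a banana or a doodle. Here is a minimal sketch using the third-party pynput library; the key-to-action mapping is invented for illustration.

```python
# Minimal sketch: react to MaKey MaKey "key presses" exactly like ordinary
# keyboard input. Requires the third-party pynput package (pip install pynput).
# The key-to-action mapping below is invented for illustration.
from pynput import keyboard

ACTIONS = {"w": "jump", "a": "step left", "d": "step right", "s": "drum hit"}

def on_press(key):
    ch = getattr(key, "char", None)   # special keys (shift, etc.) have no char
    if ch in ACTIONS:
        print(f"object touched -> {ACTIONS[ch]}")

# Blocks and reports key presses until the process is stopped.
with keyboard.Listener(on_press=on_press) as listener:
    listener.join()
```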
By now, many of us are aware of the Leap Motion, a small, $70 gesture control system that simply plugs into any computer and, apparently, just works. If you’ve seen the gesture interfaces in Minority Report, you know what it does. More importantly, if you’re familiar with the touch modality – and at this point, most of us are – the interface is entirely intuitive. It’s touch, except it happens in the space in front of the screen, so you don’t have to cover your window into your tech with all those unsightly smudges.
Most of us have adjusted to life with touchscreens. They lack tactile feedback, the rubber nubs that enable thoughtless use of our television remotes, but touchscreens create dynamic virtual buttons and open up vital screen real estate. They’re worth the thumb-numbing tradeoff. But what if we could have both, a dynamic touchscreen with real buttons? Impossible? Not at all.
Nowadays, our gadgets meld seamlessly into our lives straight out of the box. Once charged up, we can make sense of them after a few minutes of exploratory button-pushing. The horror of VCR programming seems like a faint memory, thanks in large part to Steve Jobs and Apple, whose intuitive user interfaces (UIs) have informed everything from thermostats to social media sites. And of course, the iPad has spawned half a million apps, the more outstanding of which are included in this year's list.
Ever since I was very young, I've been fascinated by the tools that people use in science fiction movies. I was obsessed with the design of starships and their controls: the Millennium Falcon's cockpit, how exactly a tricorder worked, and what exactly Spock was looking at in that hood thing on the science station. I even had Michael Okuda's Technical Manual, explaining in intricate detail how the Enterprise NCC-1701-D was constructed. So, when I came across Jayse Hansen's post about his work on the user interfaces in the movie The Avengers, I knew I had to reach out to talk with him a bit about how he does what he does. Jayse is a freelance visual artist working remotely from Las Vegas, NV, for film and television.
On Friday, 30 March 2012, I had the esteemed privilege of sharing my knowledge of Mobile User Experience at the UX Masterclass Conference, alongside a panel of UX (user experience) experts from around the globe, including delegates from Australia, the UK, the USA, France, Switzerland, Ireland, Italy, Russia and Canada. For me the event proved to be an affirmation of what Prezence Digital believes. The presentation that I delivered can be summarised as follows: there is a distinct difference between Web UX and Mobile UX.
LAS VEGAS – Don't trash your keyboard and mouse just yet. But three companies at the International Consumer Electronics Show demonstrated depth-sensing cameras that let you control your computer by moving your hands or body. Microsoft's Kinect add-on for the Xbox 360 console has already popularized these cameras for gaming.
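The core software idea behind these demos is simple: map a tracked hand position from camera space to screen space, with smoothing so the cursor doesn't jitter. The sketch below is illustrative only; get_hand_position() is a stand-in for whatever a real depth-camera SDK would provide, and the smoothing factor is an assumption.

```python
# Illustrative sketch of camera-based cursor control: scale a hand position
# from camera coordinates to screen coordinates, then smooth it so natural
# hand tremor doesn't make the cursor shake.
import random

SCREEN_W, SCREEN_H = 1920, 1080
CAM_W, CAM_H = 640, 480

def get_hand_position():
    """Stand-in for a depth-camera SDK call; returns (x, y) in camera pixels."""
    return random.uniform(0, CAM_W), random.uniform(0, CAM_H)

def to_screen(cx, cy):
    return cx / CAM_W * SCREEN_W, cy / CAM_H * SCREEN_H

alpha = 0.3                          # lower = steadier but laggier cursor
sx, sy = SCREEN_W / 2, SCREEN_H / 2  # start the cursor at screen center
for _ in range(5):
    hx, hy = get_hand_position()
    tx, ty = to_screen(hx, hy)
    sx += alpha * (tx - sx)          # exponential smoothing toward the target
    sy += alpha * (ty - sy)
    print(f"cursor -> ({sx:.0f}, {sy:.0f})")
```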
The Google team has been busy with a host of new design changes, including the implementation of a large, clickable Google button—called the Google bar—that now appears in the top left corner of any Google page. When you click the button, you're presented with a drop-down menu that gives you a variety of quick links to various Google sites: Google+, search, images, maps, YouTube, docs and more.
I don't know if you saw The evolution of Google search video, which Google published a few days ago. You should; it's a cool movie, portraying the history of search and Google's vision of its future. But something went wrong. One of the punchlines of the video was a story from one of the engineers, who said that next-generation search engines will be able to answer complex questions such as the following:
Starting with the handheld controllers introduced by the Nintendo Wii console in 2006, gamers have been able to control computers by making gestures in the air rather than with joysticks, game pads, or keyboards. Microsoft brought the technology to the next level in 2010 with the release of the Kinect, allowing Xbox consoles to be operated without any controllers at all: arm and body motions suffice. Now gestural interfaces are beginning to spread to other areas. In particular, they have the potential to change the way consumers interact with their televisions. The first demonstrations of what gestural interfaces could offer beyond gaming came from enterprising hackers, who used a Wii controller to steer a Roomba robotic vacuum, and from academic researchers, like those in Microsoft's labs who adapted the Kinect to create a 3-D model of a user's whole body.
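At the software level, a living-room gesture like "swipe to change the channel" can reduce to very little code: watch one skeleton joint and test how far and how fast it moved. The sketch below is illustrative only; the joint data and thresholds are invented, though skeleton-tracking SDKs do deliver comparable per-joint (x, y, z) streams at roughly 30 frames per second.

```python
# Illustrative swipe detector over skeleton-tracking output: a swipe is a
# wrist movement that travels far enough, fast enough, in one direction.
def detect_swipe(wrist_xs, fps=30, min_speed=1.0, min_travel=0.4):
    """wrist_xs: wrist x-positions in meters, one sample per frame."""
    travel = wrist_xs[-1] - wrist_xs[0]
    speed = abs(travel) * fps / max(len(wrist_xs) - 1, 1)  # meters per second
    if abs(travel) >= min_travel and speed >= min_speed:
        return "swipe right" if travel > 0 else "swipe left"
    return None  # too slow or too short to count as a swipe

frames = [0.00, 0.08, 0.18, 0.30, 0.45]   # wrist drifting right over 5 frames
print(detect_swipe(frames))               # -> swipe right
```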
Come on, smartphone! Let's do the twist! At the Nokia World Conference in London, where the Finnish company unveiled its newest line of handsets running the Windows Phone OS, there is a booth called "Nokia Future Technology."
We think of movies as linear progressions. It's generally a story with a beginning, middle, and end--and it's always something we consume from start to finish. Timo Arnall of Berg shows us all just how dated this view of video has become. In a project for Bonnier and Mag+, which I've dubbed "cinema glass," he turns a movie into a swipeable, interactive entity on a tablet. And I don't just mean that you can pause it or fast forward in some clever way.
One group of people has traditionally been left out of our modern tablet revolution: the visually impaired. Our slick, button-less touchscreens are essentially useless to those who rely on touch to navigate around a computer interface, unless voice-control features are built into the device and its OS. But a Stanford team of three has helped change that.
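The clever inversion in projects like this one is calibration: rather than forcing blind users to find fixed keys on featureless glass, the keys form wherever the fingers happen to land, Perkins-brailler style. A rough sketch of that idea follows; the coordinates, labels, and two-space-key layout are all my own assumptions, not the Stanford team's actual code.

```python
# Illustrative calibration for a touchscreen braille writer: assign braille
# dot keys to the eight points where the user's fingers first touch down,
# ordering fingers left-to-right across the screen. Layout is an assumption.
def calibrate(touch_points):
    """touch_points: eight (x, y) touchdowns, one per typing finger."""
    pts = sorted(touch_points)           # sort by x: left-to-right on screen
    labels = ["dot3", "dot2", "dot1", "space-L",   # left hand: ring/middle/index/thumb
              "space-R", "dot4", "dot5", "dot6"]   # right hand: thumb/index/middle/ring
    return dict(zip(labels, pts))

touches = [(80, 300), (150, 260), (220, 240), (300, 320),
           (500, 320), (580, 240), (650, 260), (720, 300)]
for key, pos in calibrate(touches).items():
    print(key, pos)   # each braille key now lives under a real finger
```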