
This Gizmo Lets You Draw A UI On Paper, Then Turns It Into A Touch Screen

You know those huge multichannel mixers, the massive boards that audio engineers use during concerts to control everything from sound to lights? It's the sort of highly specialized hardware that the average person would never come into contact with, because why would they? But what if you could just draw one? That's the idea behind SketchSynth, by Carnegie Mellon student Billy Keyes. "Ever since I was little, I've been fascinated by control panels," Keyes explains on his blog. His approach balances boundless childhood imagination against practical human factors: he designed three distinct controls that anyone can draw. A simple webcam picks up the shapes and sends them to a computer; a projector then overlays extra information on top of the drawing, like virtual nubs to control the sliders. And while this demo is clearly pretty basic, the principle could easily scale, adding all sorts of complex music visualizers to a user's basic control diagram.
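The recognition step can be sketched in code. The following is an illustrative guess at how drawn controls might be classified from their outlines, not Keyes's actual implementation: the function names and the three heuristics (aspect ratio for sliders, bounding-box fill for knobs versus buttons) are assumptions for the sake of the example.

```python
# Hypothetical sketch of SketchSynth-style control recognition:
# classify a traced outline as a button, slider, or knob from its
# bounding-box geometry. Not the project's actual code.

def _polygon_area(points):
    """Shoelace formula for the area of a closed outline."""
    n = len(points)
    total = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

def classify_control(points):
    """points: list of (x, y) outline coordinates from the webcam."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    aspect = width / height if height else float("inf")
    if aspect > 3 or aspect < 1 / 3:
        return "slider"  # long, thin rectangle
    # Roughly square: distinguish a knob (round) from a button (square)
    # by how much of the bounding box the outline's area fills.
    area = _polygon_area(points)
    fill = area / (width * height) if width and height else 0.0
    return "knob" if fill < 0.85 else "button"
```

In a real pipeline these outlines would come from a contour detector running on the webcam feed; the classifier above only captures the final labeling step.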

Leap Aims to Make Motion Control Ubiquitous -- and Awesome You know that bit in The Avengers where Tony Stark spreads his fingers apart in mid-air and the stuff on the screen in front of him instantly appears on displays throughout the room? A company called Leap Motion wants to make that kind of gesture control a reality, and it hopes to take the first step with a new type of motion controller. The Leap is a simple motion controller that plugs into any USB port on your computer. Once it's plugged in and the Leap software is installed, it turns the eight cubic feet of air in front of it into a "3D interaction space": it tracks any and all motion within that space, letting you use your hands to do whatever you could do with a mouse. How is that different from Microsoft's Kinect? Precision. The company claims the Leap is 200 times more sensitive than current gesture-based tech, able to track movements down to a hundredth of a millimeter.

Leap 3D motion control system is 100 times more accurate than Kinect, will cost $69.99 Motion control startup Leap Motion has demoed its Leap 3D motion control system, which can track motion to around 0.01mm accuracy — 100 times more accurate than the Kinect. Rather than taking Microsoft's approach, Leap Motion creates a personal 3D workspace of about four cubic feet. The Leap consists of a small USB device with industry-standard sensors and cameras that, in tandem with the company's software, can track multiple objects and recognize gestures. In a demo given to CNET, Leap's designers showed off OS navigation and web browsing using a single finger, writing, pinch-to-zoom, precision drawing, 3D modeling, and gaming. Although Leap Motion is a startup, it has significant funding behind it, and the system is scheduled to launch early next year at $69.99.
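To give a sense of what tracking at 0.01mm precision enables, here is a minimal sketch of the kind of gesture logic an application could build on fingertip positions, such as the pinch-to-zoom shown in the demo. The frame format, threshold value, and function names are invented for illustration; the actual Leap SDK exposes its own API.

```python
# Hypothetical gesture logic on top of Leap-style fingertip tracking:
# detect a pinch by thresholding the distance between two tracked
# fingertips, and derive a zoom factor from successive spreads.

import math

PINCH_THRESHOLD_MM = 15.0  # fingertips closer than this count as pinched

def distance_mm(a, b):
    """Euclidean distance between two (x, y, z) points in millimetres."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def is_pinching(thumb_tip, index_tip):
    """True when thumb and index fingertips are close enough to pinch."""
    return distance_mm(thumb_tip, index_tip) < PINCH_THRESHOLD_MM

def pinch_zoom_factor(prev_spread, curr_spread):
    """Scale factor for pinch-to-zoom from successive fingertip spreads."""
    return curr_spread / prev_spread if prev_spread else 1.0
```

Sub-millimeter tracking matters here because a coarse sensor would make the spread measurement jitter, turning smooth zooming into stutter.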

Kinect for Windows 1.5 released with '10-joint' skeletal tracking and Kinect Studio Microsoft is improving the functionality of its Kinect for Windows hardware this week with the release of version 1.5 of its Software Development Kit (SDK). The latest Kinect for Windows 1.5 software includes support for a new "seated" 10-joint skeletal tracking system, enabling developers to track the head, neck, and arms of seated and standing users in default and near mode. Kinect for Windows 1.5 also introduces a Kinect Studio developer tool. Kinect developers can use the tool to record and play back Kinect data in their applications to assist with debugging and app improvements. Microsoft is also supporting additional languages with its 1.5 release, including French, Spanish, Italian, and Japanese speech recognition.
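The record-and-replay idea behind Kinect Studio is a general debugging pattern: capture timestamped sensor frames once, then feed the identical stream back into the app so a bug reproduces every run. The sketch below is purely illustrative (the class and function names are invented); the real tool records Kinect depth, color, and skeleton streams.

```python
# Minimal sketch of a Kinect Studio-style record/playback harness:
# the same handler code runs against live frames or a recording,
# so intermittent tracking bugs become reproducible.

class FrameRecorder:
    def __init__(self):
        self._frames = []

    def record(self, timestamp, frame):
        """Capture one timestamped sensor frame."""
        self._frames.append((timestamp, frame))

    def playback(self):
        """Yield frames in capture order, as a live sensor would."""
        for timestamp, frame in self._frames:
            yield timestamp, frame

def run_pipeline(frames, handler):
    """Apply the same app logic to live or recorded frames."""
    return [handler(frame) for _, frame in frames]
```

The key design point is that the application never knows whether frames came from the sensor or the recording, so debugging exercises exactly the code that ships.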

Beyond Kinect: Gestural computer spells keyboard death - tech - 15 May 2012 The advent of multi-touch screens and novel gaming interfaces means the days of the traditional mouse and keyboard are well and truly numbered. With Humantenna and SoundWave, you won't even have to touch a computer to control it; gesturing in its direction will be enough. These two technologies are the latest offerings from Microsoft, which gave us the Kinect controller. As the name suggests, Humantenna uses the human body as an antenna to pick up the electromagnetic fields, generated by power lines and electrical appliances, that fill indoor and outdoor spaces. By studying how the signal changes as users move through the electromagnetic fields, the team was able to identify gestures such as a punching motion or a swipe of the hand. One version of the system, presented this week at the Conference on Human Factors in Computing Systems in Austin, Texas, runs off a sensor that sits in a small bag. All sorts of applications would open up if Humantenna can be commercialised.
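Recognizing gestures from how the body perturbs ambient electromagnetic fields is, at its core, a signal-classification problem. A minimal sketch of one plausible approach, nearest-neighbour matching against labelled examples, is shown below; the feature vectors, labels, and function names are all invented for illustration and do not reflect Microsoft's actual Humantenna implementation.

```python
# Illustrative sketch of Humantenna-style gesture recognition:
# compare a feature vector extracted from the body-as-antenna signal
# against labelled training examples with a nearest-neighbour rule.

import math

def _dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_gesture(sample, training_set):
    """Return the label of the closest training example.

    training_set: list of (feature_vector, label) pairs.
    """
    return min(training_set, key=lambda ex: _dist(sample, ex[0]))[1]

# Invented feature vectors, e.g. summaries of how the received EM
# signal changed during the movement.
TRAINING = [
    ((0.9, 0.1, 0.3), "punch"),
    ((0.2, 0.8, 0.5), "swipe"),
    ((0.1, 0.1, 0.1), "rest"),
]
```

A production system would extract richer features from the raw signal and likely use a trained classifier, but the pipeline shape (featurize, then match against labelled gestures) is the same.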

SoftKinetic announces pocket-sized DepthSense 325 gesture recognition camera kit It seems like just yesterday that we were sampling SoftKinetic's gesture control systems at CES, but the company is back today with its latest developer-targeted hardware. The DepthSense 325 is a pocket-sized camera (SoftKinetic claims it's the smallest such device in the world) that can "see" in high-definition 2D and also analyze 3D depth. Whereas the DS311 we played with in January could detect finger movement from as close as 15 centimeters, the company has managed to shrink that distance even further to just 10cm. That's something Microsoft can't say about its current Kinect hardware, though we'll need to see SoftKinetic's technology in action to get a sense of how accurate tracking is. Dual microphones are also integrated within the DS325 for audio-based interaction. As for how the DS325 might be used in the real world, the company cites theoretical examples that include video games, "edutainment" applications, video conferencing, online shopping, and social media implementations.

SoftKinetic's ten-finger virtual puppet show demo video walkthrough Microsoft's Kinect for Windows wasn't the only gesture control system to tout "near mode" this CES. SoftKinetic offers an alternate solution with an eye towards OEMs, and this week it released a public alpha of new firmware for its DepthSense 311 that it claims will detect finger movement from as close as 15cm (vs. Kinect's 50cm) and as far away as about three feet. Those numbers seem about right; SoftKinetic let us try out the firmware first-hand in two applications — a barebones tech demo that showed exactly what the software was detecting, and a "puppet show" app that let you control two cartoon puppets with ragdoll arms — and detection seemed to work fine within the stated range. The puppets could move to and fro, nod their heads, twist around, and open their mouths when you un-balled a fist. We also had a rather brief look at the upcoming DS320, with a higher-resolution QVGA depth sensor, HD video, and a wider 73-degree field of view.

Implanted User Interface Gives Patients New Options - Healthcare - Clinical Information Systems Placed just under the skin, implanted UIs could accept touch input, giving users of implanted medical equipment such as pacemakers more control over their device's operation. Pacemakers and other implanted medical devices have become commonplace, but being able to interact directly with implants, via user interfaces that are implanted as well, still might strike some as science fiction a la the Terminator. Researchers who are testing implanted user interfaces say the devices will enable people who have implanted medical equipment such as pacemakers to recharge and reprogram it without the use of wireless transmissions, which are considered vulnerable to hacking. "So far, people have only been able to get those implants checked by making a trip to a physician or by interacting with wireless technologies such as Bluetooth."

Laser System Paints Information on the Road Ahead On the road: A mock-up shows a driver's view of Microvision's head-up display. Head-up displays, which project visual data onto the windshield and into the driver's view of the road, are debuting in a growing number of car models. But more vibrant, compact, and efficient displays being developed by Microvision, a company based in Redmond, Washington, could help the technology become much more common. Japan's Pioneer Corporation plans to release its first head-up display product based on Microvision's novel technology this year. Microvision's head-up display is already in some concept cars but has so far been too costly for commercial models, says Evans. Most existing head-up displays generate images using LCDs; Microvision's system instead uses a set of three lasers — red, green, and blue — and a single, millimeter-wide silicon mirror that tilts on two axes. The final cost of Microvision's product will hinge on the price of advanced green lasers. "Green lasers alone are $200 each now," he says.

Founders of Leap Motion: Our Amazing 3D Tracking Will Be Everywhere In the short time since its debut, the Leap Motion has inspired zombie-like devotion in many gadget lovers, but can the device live up to the hype? (Yes, yes it can.) In the past few weeks the Leap Motion device has sent shudders of delight through gadget lovers and computer designers alike by promising a new kind of ultra-accurate, and very cheap, optical 3D tracking for your desktop or laptop computer. Forget the Kinect: Leap Motion is cheaper ($70), more precise (down to 0.01 mm), and much smaller (think "pack of gum" proportions). For those who missed earlier coverage of the Leap Motion's debut, here's the official promo video for the 3D tracking device. During a visit to the company's headquarters, Singularity Hub captured a lot of raw footage of the device in action. Anyone who can watch the Leap Motion demonstration without their jaw dropping probably doesn't understand how extraordinary this technology really is. A big part of that experience is the 3D motion tracking you see in the demos.

How New 'Mood Ring' Glasses Let You See Emotions Evolution has tailored the human eye for detecting red, green, blue, and yellow in a person's skin, which reveals areas where that person's blood is oxygenated, deoxygenated, pooled below the surface, or drained. We subconsciously read these skin color cues to perceive each other's emotions and states of health. Rosy cheeks can suggest good health, for example, while a yellowish hue hints at fear. Now, researchers have created new glasses, called O2Amps, which they say amplify the wearer's perception of blood physiology, augmenting millions of years of eye evolution. "Our eyes have been optimized to sense spectral changes in skin color," said Mark Changizi, an evolutionary anthropologist and director of human cognition at 2AI Labs in Boise, Idaho. Based on Changizi's color perception research, he and his colleagues have designed three versions of O2Amps, which are currently being sold to medical distributors and will hit wider markets in 2013. "If you're angry, you get red."

Samsung's flexible, 'unbreakable' AMOLED displays to be called 'Youm' Samsung's upcoming flexible AMOLED displays now have a name: "Youm." The branding comes from a new page on the Korean Samsung Mobile Display website, which offers little else other than the new name. The company has entered an application to the US Patent and Trademark Office for Youm, and the logo below is included in the filing. Other than the name, Samsung's site offers a quick comparison between the new technology, LCD, and OLED.

These Contact Lenses Give You Super-Human Vision Contact lens wearers can get a new perspective on their environment with the latest vision technology: contact lenses that let the wearer focus on two fields of view at once. Wearers can keep an eye on a projected image while still seeing their surroundings, resulting in a kind of superhuman vision. The human eye on its own can focus on only one distance at a time; these contact lenses, however, let two images be viewed at the same time. And while most of the world is fascinated with Google Glass, the Pentagon is focused on getting a supply of these contacts, called iOptik. In addition to contact lenses, the company also makes glasses with projected images in the lenses, which it showed off at CES 2012. Futurist tech has been used to address a number of health concerns, as well as to expand on the human body's natural capabilities.
