Kinect project

WPF Audio Player

Download source code - 378 KB

Introduction

When writing a desktop application, it sometimes becomes necessary to play audio files. .NET/WPF ships with two classes for this purpose: SoundPlayer and MediaPlayer. Unfortunately, both come with some (severe) limitations that make them hard to use under certain (not so uncommon) circumstances. This article provides a replacement for both classes, and also goes into more detail on the limitations and problems associated with them.

Using the code

Before going into more detail, let's jump ahead and take a look at the final class:

    AudioPlayer myAudioPlayer = new AudioPlayer(...);
    myAudioPlayer.Play();

This simply plays the audio file. An AudioPlayer instance is created like this:

    AudioPlayer myAudioPlayer = new AudioPlayer(Assembly.GetExecutingAssembly(),
        "MyRootNamespace", "myfolder/myfile.mp3");

Besides Play(), AudioPlayer contains a lot of other useful functionality.

Behind the scenes

Let's get started.
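The class's actual implementation ships with the article's download. Purely as a hypothetical sketch of how the constructor signature above could work (not the article's code; all class and member names below are invented), a wrapper might copy the embedded resource to a temporary file and play it through the Windows MCI API in winmm.dll, which sidesteps SoundPlayer's WAV-only restriction and MediaPlayer's dispatcher affinity:

    using System;
    using System.IO;
    using System.Reflection;
    using System.Runtime.InteropServices;
    using System.Text;

    // Hypothetical sketch only -- not the article's implementation.
    public class AudioPlayerSketch : IDisposable
    {
        [DllImport("winmm.dll", CharSet = CharSet.Unicode)]
        private static extern int mciSendString(
            string command, StringBuilder buffer, int bufferSize, IntPtr callback);

        private readonly string _tempFile;

        public AudioPlayerSketch(Assembly assembly, string rootNamespace, string relativePath)
        {
            // Manifest resource names use '.' where the project tree uses '/'.
            string resourceName = rootNamespace + "." + relativePath.Replace('/', '.');
            _tempFile = Path.Combine(Path.GetTempPath(), Path.GetFileName(relativePath));

            // Extract the embedded resource to a temp file on disk.
            using (Stream input = assembly.GetManifestResourceStream(resourceName))
            {
                if (input == null)
                    throw new FileNotFoundException("Resource not found: " + resourceName);
                using (FileStream output = File.Create(_tempFile))
                    input.CopyTo(output);
            }
            // MCI's mpegvideo device handles MP3 playback.
            mciSendString("open \"" + _tempFile + "\" type mpegvideo alias snd",
                null, 0, IntPtr.Zero);
        }

        public void Play()
        {
            mciSendString("play snd", null, 0, IntPtr.Zero);
        }

        public void Dispose()
        {
            mciSendString("close snd", null, 0, IntPtr.Zero);
            File.Delete(_tempFile);
        }
    }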

Switching from Kinect to Asus Xtion Pro Live | Autoponics

We got our new structured-light 3D sensor, the Xtion Pro Live by Asus! Just like the Kinect, but with better specs. When Microsoft's Xbox Kinect was adopted by the hobbyist community, companies took notice. Microsoft released its Kinect for Windows platform, which includes an impressive SDK that exposes high-level object and body recognition to developers. Asus has now jumped on the bandwagon, updating its Xtion Pro depth sensor with the Live edition, a sensor nearly identical to the Kinect. The two sensors are almost the same, but here are some important differences.

Also, the Asus is about half the size and weight of the Kinect. All of these differences are significant for our research and led us to switch. So there you have it. For more info on the Asus, take a look at I Heart Robotics' unboxing and partial teardown.

Human echolocation

Human echolocation is the ability of humans to detect objects in their environment by sensing echoes from those objects. By actively creating sounds – for example, by tapping their canes, lightly stomping a foot, snapping their fingers, or making clicking noises with their mouths – people trained to orient by echolocation can interpret the sound waves reflected by nearby objects, accurately identifying their location and size.

This ability is used by some blind people for acoustic wayfinding, or navigating within their environment using auditory rather than visual cues. It is similar in principle to active sonar and to animal echolocation, which is employed by bats, dolphins, and toothed whales.

Mechanics

By understanding the interrelationships of these qualities, much can be perceived about the nature of an object or multiple objects. For example, an object that is tall and narrow may be recognized quickly as a pole.

Blind people echolocate with visual part of brain - Technology & Science

Blind people who navigate using clicks and echoes, like bats and dolphins do, recruit the part of the brain used by sighted people to see, a new study has found. While few blind people use echolocation — emitting a sound and then listening for the echo to get information about objects in the surroundings — some who do are so good at it that they can use the ability to hike, mountain bike and play basketball, said Melvyn Goodale, one of the co-authors of the study published Wednesday in PLoS One.

About the echolocators

Daniel Kish, 43, went blind at the age of 13 months from retinoblastoma, the same eye cancer that affected the late Canadian musician Jeff Healey. Melvyn Goodale says Kish can't remember a time when he didn't echolocate, and seems to have taught himself at a very young age. "His parents say that when he was about 18 months old, they noticed he was making these clicking noises." "They can tell a flat thing from convex."

No special activity in hearing part of brain

The Blind Man Who Taught Himself to See

The first thing Daniel Kish does, when I pull up to his tidy gray bungalow in Long Beach, California, is make fun of my driving. "You're going to leave it that far from the curb?" he asks. He's standing on his stoop, a good 10 paces from my car. I glance behind me as I walk up to him. I am, indeed, parked about a foot and a half from the curb. The second thing Kish does, in his living room a few minutes later, is remove his prosthetic eyeballs. He does this casually, like a person taking off a smudged pair of glasses. Kish was born with an aggressive form of cancer called retinoblastoma, which attacks the retinas. He knew my car was poorly parked because he produced a brief, sharp click with his tongue. But not silent. Bats, of course, use echolocation. This is not enough for him. Kish preaches complete and unfettered independence, even if the result produces the occasional bloody gash or broken bone.

HRTF Measurements of a KEMAR Dummy-Head Microphone

Bill Gardner and Keith Martin, MIT Media Lab

Abstract: An extensive set of head-related transfer function (HRTF) measurements of a KEMAR dummy-head microphone was completed in May 1994. The measurements consist of the left and right ear impulse responses from a Realistic Optimus Pro 7 loudspeaker mounted 1.4 meters from the KEMAR. Maximum length (ML) pseudo-random binary sequences were used to obtain the impulse responses at a sampling rate of 44.1 kHz.

Keith, KEMAR, and Bill in MIT's anechoic chamber.

The ftp and html archive is maintained by Bill Gardner, billg@media.mit.edu, and Keith Martin, kdm@media.mit.edu. The archive contains:

README (1K) - A brief description of the archive
KEMAR-FAQ.txt (25K) - Frequently Asked Questions - read this
compact.tar.Z (200K) - Compact HRTF measurements
compact.zip (226K) - Compact HRTF measurements, Windows zip file
full.tar.Z (1.3M) - Complete set of HRTF measurements, speaker and headphone responses
hrtfdoc.txt (18K) - Text-only version of hrtfdoc.ps
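Putting the measurements to use: spatializing a mono source at a given direction amounts to convolving the signal with that direction's left-ear and right-ear impulse responses. The sketch below is a minimal illustration, not code from the archive; it assumes the two impulse responses have already been decoded from the data set into float arrays.

    // Minimal HRTF rendering sketch (illustrative only): convolve a mono
    // signal with a measured left/right impulse-response pair.
    public static class HrtfSketch
    {
        public static void Spatialize(
            float[] mono, float[] hrirLeft, float[] hrirRight,
            out float[] left, out float[] right)
        {
            left = Convolve(mono, hrirLeft);
            right = Convolve(mono, hrirRight);
        }

        // Direct O(n*m) FIR convolution -- fine offline; real-time code
        // would typically use an FFT-based (overlap-add) method instead.
        private static float[] Convolve(float[] signal, float[] impulse)
        {
            var output = new float[signal.Length + impulse.Length - 1];
            for (int i = 0; i < signal.Length; i++)
                for (int j = 0; j < impulse.Length; j++)
                    output[i + j] += signal[i] * impulse[j];
            return output;
        }
    }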

www.cs.cmu.edu/~robust/Papers/SternWangBrownChapter.pdf

interface.cipic.ucdavis.edu/data/doc/CIPIC_HRTF_Database.pdf

Virtual Barbershop

Bypassing the ear: New headphones send sound in through the skin

Facts: The consumer electronics trade show CES (Consumer Electronics Show) in Las Vegas, Nevada, in the southwestern USA (officially 150,000 visitors, 2013) and the consumer electronics trade show IFA (Internationale Funkausstellung Berlin, 240,000 visitors, 2012) in Berlin, Germany, compete for the title of the world's largest consumer electronics trade show.

But it is hard to compare the importance of the two, for while the Las Vegas show, held in the first week of January, is exclusively for industry professionals and the media and thus closed to the public, the IFA show in Berlin in the first week of September each year is a public crowd-puller that draws, among others, many Danish visitors in addition to people from the industry and the media. In terms of content there is little difference between the two shows, but in general the new products at the Las Vegas show have a longer road to the shelves of Danish stores than the new products at IFA in Berlin. One example is B&O and, to some extent, the Dutch company Philips, whose strategy is to launch its new products first in Europe, at IFA.

www.psych.ucsb.edu/~loomis/loomisgolledgeklatzky01.pdf

Visual impairment and blindness

A Very Simple Example of Data Binding in WPF

Download MyWPFdataBinding - 58.61 KB

Introduction

The importance of data binding in Windows Presentation Foundation (WPF) application development is undeniable. Yet most articles are so complex that understanding the basics of this technology can be daunting. This complexity arises from simple beginnings that quickly become complex. This article focuses on data binding between controls, and that's it.

Background

WPF is an important step forward in developing effective graphical user interfaces.

What does it do?

As the user types in the textbox control, the data is displayed in a label control right next to it automatically.

Figure 0: The user interface where the label control on the right gets its data from the textbox control on the left.

Using the code

The code in Figure 2 represents two controls, a textbox and a label, displayed in a Window:

    Content="{Binding ElementName=textBox1,Path=Text}"

'ElementName' is a reserved word, as 'Binding' is.
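The same element-to-element binding can also be created in C# code-behind rather than in XAML. The following is a minimal sketch, not part of the article's download; the control names textBox1 and label1 are assumed to match the XAML.

    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Data;

    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();

            // Equivalent of Content="{Binding ElementName=textBox1, Path=Text}":
            // bind the label's Content to the textbox's Text property.
            var binding = new Binding("Text") { ElementName = "textBox1" };
            label1.SetBinding(ContentControl.ContentProperty, binding);
        }
    }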

Points of Interest

Next Step

www.nbdtech.com/Free/WpfBinding.pdf

Installing and Using the Kinect Sensor | Kinect for Windows Quickstart Series

Kinect Developer Downloads | Kinect for Windows

How To: Kinect for Windows SDK Face Recognition Series | Coding4Fun Kinect Projects

Today's How To series is from the one and only elbruno and walks us through the Kinect for Windows SDK facial recognition capability, from concept to code...

Hi everyone. One of the innovations incorporated into the Kinect SDK 1.5 is the ability to detect points of the face. Be careful: some people think this gives us the ability to perform facial recognition, but it does not. What it does let us do is identify more than 80 points of a face and then do whatever we want with them. A reference model looks like this:

If we are lucky enough to have Eastern ancestry, our face tracking points will look similar to the previous photo.

If, on the other hand, we are ugly like me, we will see something similar to the following:

And what about facial recognition?

Part II: Today we will see how to use this functionality in a WPF project.

Part III: Today we will see how to modify the "mesh" that is drawn on our face, using the example from the previous post.
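For a feel of what the face tracking loop looks like, here is a rough outline against the SDK 1.5 Face Tracking toolkit (Microsoft.Kinect.Toolkit.FaceTracking). This is an illustrative sketch, not elbruno's code: stream setup, buffer management, and error handling are omitted, and the variable names are assumed.

    using System;
    using System.Linq;
    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit.FaceTracking;

    // Illustrative face tracking outline (not the article's code). Assumes
    // the sensor's color, depth, and skeleton streams are enabled and that
    // colorPixels, depthPixels, and skeleton are refilled for each frame.
    static class FaceTrackingSketch
    {
        public static void TrackOnce(KinectSensor sensor,
                                     byte[] colorPixels,
                                     short[] depthPixels,
                                     Skeleton skeleton)
        {
            // A real application would create one FaceTracker and reuse it
            // across frames; construction is expensive.
            using (var tracker = new FaceTracker(sensor))
            {
                FaceTrackFrame frame = tracker.Track(
                    sensor.ColorStream.Format, colorPixels,
                    sensor.DepthStream.Format, depthPixels,
                    skeleton);

                if (frame.TrackSuccessful)
                {
                    // The projected shape carries the 80+ 2D feature
                    // points mentioned above.
                    var points = frame.GetProjected3DShape();
                    Console.WriteLine("Tracked {0} face points", points.Count());
                }
            }
        }
    }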