
Artificial senses


Machine olfaction. Machine olfaction is the automated simulation of the sense of smell. It is an emerging application of modern engineering where robots or other automated systems are needed to measure the existence of a particular chemical concentration in air. Such an apparatus is often called an electronic nose or e-nose. Machine olfaction is complicated by the fact that e-nose devices to date have had a limited number of elements, whereas each odor is produced by its own unique set of (potentially numerous) odorant compounds.[1] The technology is still in the early stages of development, but it promises many applications.[2] Pattern analysis constitutes a critical building block in the development of gas sensor array instruments capable of detecting, identifying, and measuring volatile compounds, a technology that has been proposed as an artificial substitute for the human olfactory system.
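As an illustration of the pattern-analysis step, the sketch below classifies odor samples from the response pattern of a small sensor array. It is a minimal example, not any particular e-nose product's method: the four-element array, the odor labels, and the training readings are all invented for the demonstration, and a simple nearest-centroid rule stands in for the richer statistical or neural models used in practice.

```python
import numpy as np

# Hypothetical training data: each row is one sniff, i.e. the responses of a
# 4-element gas sensor array (arbitrary units), labelled with the odor source.
readings = np.array([
    [0.9, 0.1, 0.3, 0.2],   # coffee
    [0.8, 0.2, 0.4, 0.1],   # coffee
    [0.1, 0.9, 0.2, 0.7],   # banana
    [0.2, 0.8, 0.1, 0.8],   # banana
    [0.3, 0.3, 0.9, 0.9],   # solvent
    [0.2, 0.4, 0.8, 0.9],   # solvent
])
labels = ["coffee", "coffee", "banana", "banana", "solvent", "solvent"]

# Normalise each response pattern so classification depends on the shape of
# the pattern across sensors rather than on overall concentration.
def normalise(x):
    return x / np.linalg.norm(x)

# Nearest-centroid "pattern analysis": average the normalised patterns per odor.
centroids = {
    odor: normalise(readings[[i for i, l in enumerate(labels) if l == odor]].mean(axis=0))
    for odor in set(labels)
}

def classify(sample):
    """Return the odor whose centroid is closest to the normalised sample."""
    s = normalise(np.asarray(sample, dtype=float))
    return min(centroids, key=lambda odor: np.linalg.norm(s - centroids[odor]))

print(classify([0.85, 0.15, 0.35, 0.15]))  # expected: coffee
```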

Detection. There are three basic detection techniques in use.

The app that scans food and tells you everything you are eating. Do the market stalls make your mouth water? But are the fruit, vegetables, eggs, and fish on display really as fresh and wholesome as the vendor claims? Tellspec, your new shopping companion, will answer that question before the seller has even had time to sing the praises of his produce. Developed by Stephen Watson and Isabel Hoffman, the device works like a Shazam for food: simply point it at a food item or a dish, press a button, and wait for its chime to signal the results of the analysis. Presence of gluten or traces of peanuts, presence of chemicals, vitamin content and calorie balance: everything that goes into what you are about to eat is laid out on your phone's screen.

[Initiative spotted by Julie Rivoire, scout for Soon Soon Soon]

Industrial SENSORS and Process CONTROLS exhibition by EXPO21XX. MARF -- Modular Audio Recognition Framework and its Applications for Speech, Voice, and NLP Processing. Leap Motion | Mac & PC Motion Controller for Games, Design, & More. Modular Audio Recognition Framework.

Computer Vision. Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images and, in general, high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the forms of decisions.[1][2][3][4] A theme in the development of this field has been to duplicate the abilities of human vision by electronically perceiving and understanding an image.[5] This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.[6] Computer vision has also been described as the enterprise of automating and integrating a wide range of processes and representations for vision perception.[7] As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images.
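To make the idea of extracting symbolic information from image data concrete, here is a minimal sketch using the OpenCV library: it loads an image, runs a standard pre-trained Haar-cascade face detector, and reports the bounding boxes it finds. The file name is a placeholder, and the cascade is just one classical detection technique among many; modern systems typically rely on learned convolutional models instead.

```python
import cv2

# Placeholder input image; replace with a real file path.
image = cv2.imread("photo.jpg")
if image is None:
    raise SystemExit("Could not read photo.jpg")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# A classical, pre-trained Haar-cascade detector that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# Each detection is an (x, y, width, height) box: numeric information
# extracted from raw pixel data.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    print(f"face at x={x}, y={y}, size={w}x{h}")
```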

Crowd Vision home. Category: Artificial vision (Vision artificielle).

Speech recognition. Speech recognition is usually processed in middleware; the results are transmitted to the user applications. In computer science and electrical engineering, speech recognition (SR) is the translation of spoken words into text. It is also known as "automatic speech recognition" (ASR), "computer speech recognition", or just "speech to text" (STT). Some SR systems use "speaker independent speech recognition"[1] while others use "training", where an individual speaker reads sections of text into the SR system.

These systems analyze the person's specific voice and use it to fine-tune the recognition of that person's speech, resulting in more accurate transcription. Systems that do not use training are called "speaker independent" systems. Speech recognition applications include voice user interfaces such as voice dialling. The term voice recognition[2][3][4] or speaker identification[5][6] refers to finding the identity of "who" is speaking, rather than what they are saying.
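As a small, concrete taste of speech-to-text, the sketch below uses the widely available Python speech_recognition package to transcribe an audio file through a cloud recognizer. The file name is a placeholder, and this is only one convenient wrapper around one engine; it is not the method used by any of the systems discussed here.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

# Placeholder audio file; replace with a real WAV/AIFF/FLAC recording.
with sr.AudioFile("utterance.wav") as source:
    audio = recognizer.record(source)  # read the entire file into memory

try:
    # Send the audio to Google's free web recognizer and print the transcript.
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Speech was unintelligible to the recognizer.")
except sr.RequestError as err:
    print(f"Could not reach the recognition service: {err}")
```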

A Speech-Recognition Program Called Scribe Could Use Human Labor to Improve the Work of Automated Services Like Siri or Dragon. Computer scientist Jeffrey Bigham has created a speech-recognition program that combines the best talents of machines and people.

Though voice recognition programs like Apple’s Siri and Nuance’s Dragon are quite good at hearing familiar voices and clearly dictated words, the technology still can’t reliably caption events that present new speakers, accents, phrases, and background noises. People are pretty good at understanding words in such situations, but most of us aren’t fast enough to transcribe the text in real time (that’s why professional stenographers can charge more than $100 an hour). So Bigham’s program Scribe augments fast computers with accurate humans in hopes of churning out captions and transcripts quickly.

This rapid-fire crowd-computing experiment could be a big help for deaf and hearing-impaired people. It could also provide new ways to enhance voice recognition applications like Siri in areas where they struggle.

With Emotion Recognition Algorithms, Computers Know What You’re Thinking. Back when Google was first getting started, there were plenty of skeptics who didn’t think a list of links could ever turn a profit. That was before advertising came along and gave Google a way to pay its bills, and then some, as it turned out. Thanks in part to that fortuitous accident, in today’s Internet market, advertising isn’t just an also-ran with new technologies: marketers are bending innovation to their needs as startups chase prospective revenue streams.

A handful of companies are developing algorithms that can read the human emotions behind nuanced and fleeting facial expressions to maximize advertising and market research campaigns. Major corporations including Procter & Gamble, PepsiCo, Unilever, Nokia and eBay have already used these services. The companies have all developed the ability to identify emotions by taking massive data sets (videos of people reacting to content) and putting them through machine learning systems.
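For a sense of what such a pipeline looks like, here is a minimal sketch of training a facial-expression classifier with scikit-learn. Everything in it is illustrative: the features are assumed to be pre-extracted face descriptors (real systems work on video frames with far richer features and far more data), the arrays are tiny random stand-ins, and a logistic-regression classifier takes the place of the proprietary models these companies use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data: each row is a feature vector describing one face image
# (e.g. distances between landmarks); each label is the expressed emotion.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))
labels = rng.choice(["happy", "neutral", "surprised"], size=200)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

# A simple multinomial classifier in place of a production emotion model.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# With random stand-in data the score will hover around chance;
# a real labelled dataset is needed for meaningful accuracy.
print("held-out accuracy:", model.score(X_test, y_test))
print("predicted emotion:", model.predict(X_test[:1])[0])
```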

A smart-object recognition algorithm that doesn’t need humans. BYU engineer Dah-Jye Lee has created an algorithm that can accurately identify objects in images or video sequences, without human calibration. “In most cases, people are in charge of deciding what features to focus on and they then write the algorithm based off that,” said Lee, a professor of electrical and computer engineering.

“With our algorithm, we give it a set of images and let the computer decide which features are important.”

Humans need not apply. Not only is Lee’s genetic algorithm able to set its own parameters, but it also doesn’t need to be reset each time a new object is to be recognized: it learns new objects on its own. Lee likens the idea to teaching a child the difference between dogs and cats.

Comparison with other object-recognition algorithms. In a study published in the December issue of the academic journal Pattern Recognition, Lee and his students demonstrate both the independent ability and accuracy of their “ECO features” genetic algorithm.
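The article does not spell out the ECO features method, but the flavor of a genetic algorithm that chooses its own image features can be shown with a toy sketch. Below, candidate "features" are simply subsets of pixel positions, and evolution keeps the subsets that best separate two synthetic classes; the real ECO features work evolves far richer transforms and pairs them with trained classifiers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "images": class 0 is bright on the left half, class 1 on the right.
n, pixels = 120, 64
images = rng.normal(size=(n, pixels))
labels = rng.integers(0, 2, size=n)
images[labels == 0, :32] += 2.0
images[labels == 1, 32:] += 2.0

def fitness(feature_idx):
    """Score a pixel subset by how well a threshold on its mean separates the classes."""
    scores = images[:, feature_idx].mean(axis=1)
    threshold = scores.mean()
    predictions = (scores > threshold).astype(int)
    return max((predictions == labels).mean(), (predictions != labels).mean())

# Initial population: random subsets of 6 pixel positions each.
population = [rng.choice(pixels, size=6, replace=False) for _ in range(30)]

for generation in range(20):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # selection
    children = []
    for _ in range(20):
        a, b = rng.choice(10, size=2, replace=False)  # crossover of two parents
        mix = np.union1d(survivors[a], survivors[b])
        child = rng.choice(mix, size=6, replace=False)
        if rng.random() < 0.3:                        # mutation: swap in a random pixel
            child[rng.integers(6)] = rng.integers(pixels)
        children.append(child)
    population = survivors + children

best = max(population, key=fitness)
print("best pixel subset:", sorted(best), "fitness:", round(fitness(best), 3))
```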

Free Online OCR - convert scanned PDF and images to Word, JPEG to Word.

Optical character recognition. Optical Character Recognition, usually abbreviated to OCR, is the mechanical or electronic conversion of scanned or photographed images of typewritten or printed text into machine-encoded, computer-readable text. It is widely used as a form of data entry from some sort of original paper data source, whether passport documents, invoices, bank statements, receipts, business cards, mail, or any number of printed records. It is a common method of digitizing printed texts so that they can be electronically edited, searched, stored more compactly, displayed on-line, and used in machine processes such as machine translation, text-to-speech, key data extraction and text mining.
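A minimal way to try OCR from code is the pytesseract wrapper around the Tesseract engine mentioned below. The sketch assumes Tesseract is installed on the system and uses a placeholder file name.

```python
from PIL import Image
import pytesseract

# Placeholder scan; replace with a real image of printed text.
# Requires the Tesseract engine to be installed and on the PATH.
page = Image.open("scanned_page.png")

text = pytesseract.image_to_string(page)  # machine-readable text from the image
print(text)
```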

OCR is a field of research in pattern recognition, artificial intelligence and computer vision. Early versions needed to be programmed with images of each character, and worked on one font at a time. "Intelligent" systems with a high degree of recognition accuracy for most fonts are now common.

Comparison of optical character recognition software. This comparison of optical character recognition software includes: OCR engines, which do the actual character identification; layout analysis software, which divides scanned documents into zones suitable for OCR; graphical interfaces to one or more OCR engines; and software development kits that are used to add OCR capabilities to other software (e.g. forms processing applications, document imaging management systems, e-discovery systems, records management solutions).

Tesseract-ocr - An OCR Engine that was developed at HP Labs between 1985 and 1995... and now at Google.

About Optical Character Recognition in Google Drive - Drive Help. Optical Character Recognition in a nutshell: Optical Character Recognition (OCR) lets you convert images with text into text documents using automated computer algorithms.

Images can be processed individually (.jpg, .png, and .gif files) or in multi-page PDF documents (.pdf). These are some of the types of files suitable for OCR: image or PDF files obtained using flatbed scanners, and photos taken with digital cameras or mobile phones.

Using OCR in Google Drive: in Google Drive, we take your uploaded images or PDF files, scan the file, and use computer algorithms to convert the file into a Google document.
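The same conversion can also be scripted against the Google Drive API rather than the web interface. The sketch below, using the google-api-python-client library, uploads a placeholder image and asks Drive to import it as a Google document, which triggers the OCR pass; building the authenticated credentials (OAuth flow and scopes) is assumed and omitted here.

```python
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

def ocr_upload(credentials, path="scan.jpg"):
    """Upload an image and ask Drive to convert it to a Google Doc via OCR."""
    service = build("drive", "v3", credentials=credentials)

    metadata = {
        "name": "scan (OCR)",
        # Requesting a Google Docs target MIME type triggers conversion/OCR.
        "mimeType": "application/vnd.google-apps.document",
    }
    media = MediaFileUpload(path, mimetype="image/jpeg")

    created = (
        service.files()
        .create(body=metadata, media_body=media, ocrLanguage="en", fields="id,name")
        .execute()
    )
    return created["id"]
```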

For best results, the image or PDF files need to meet certain requirements. Resolution: high-resolution files work best. File size limitations: the maximum size for images (.jpg, .gif, .png) and PDF files (.pdf) is 2 MB. Preservation of text formatting: when processing your document, we attempt to preserve basic text formatting such as bold and italic text, font size and type, and line breaks.

List of sensors. This is a list of sensors sorted by sensor type. The categories include: acoustic, sound, vibration; automotive, transportation; chemical; electric current, electric potential, magnetic, radio; environment, weather, moisture, humidity; flow, fluid velocity; ionizing radiation, subatomic particles; position, angle, displacement, distance, speed, acceleration; optical, light, imaging, photon; pressure; force, density, level; thermal, heat, temperature; proximity, presence; sensor technology; and other sensors and sensor-related properties and concepts.

Category:Sensors.