
Consciousness and Omniscience


How Do Humans Sketch Objects?

Abstract: Humans have used sketching to depict our visual world since prehistoric times. Even today, sketching is possibly the only rendering technique readily available to all humans. This paper is the first large-scale exploration of human sketches. We analyze the distribution of non-expert sketches of everyday objects such as 'teapot' or 'car'. We ask humans to sketch objects of a given category and gather 20,000 unique sketches evenly distributed over 250 object categories. With this dataset we perform a perceptual study and find that humans can correctly identify the object category of a sketch 73% of the time.

Computational recognition: classification results on the test dataset using the best-performing SVM model described in the paper.

t-SNE layouts: for each category, we apply dimensionality reduction (down to two dimensions) on the sketch feature space described in the paper.

Representative sketches.
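To make those two steps concrete, here is a minimal sketch using scikit-learn. Everything in it is an illustrative assumption rather than the paper's actual pipeline: random vectors stand in for the real sketch features, and the SVM parameters are placeholders.

```python
# Minimal sketch: SVM category classification plus a 2-D t-SNE layout
# over per-sketch feature vectors. Random vectors stand in for the
# paper's real features so the snippet is self-contained and runnable.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_sketches, n_features, n_categories = 2000, 500, 250
X = rng.normal(size=(n_sketches, n_features))        # stand-in features
y = rng.integers(0, n_categories, size=n_sketches)   # category labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # illustrative parameters
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))  # near chance on random data

# Reduce the feature space down to two dimensions for a layout.
layout = TSNE(n_components=2, random_state=0).fit_transform(X)
print("layout shape:", layout.shape)  # (n_sketches, 2)
```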

Sketch-Scanning Software Can Decipher Your Crappy Drawing.

Google Puts Its Virtual Brain Technology to Work.

This summer Google set a new landmark in the field of artificial intelligence with software that learned how to recognize cats, people, and other things simply by watching YouTube videos (see "Self-Taught Software"). That technology, modeled on how brain cells operate, is now being put to work making Google's products smarter, with speech recognition being the first service to benefit. Google's learning software is based on simulating groups of connected brain cells that communicate and influence one another. When such a neural network, as it's called, is exposed to data, the relationships between different neurons can change. That causes the network to develop the ability to react in certain ways to incoming data of a particular kind, and the network is said to have learned something.
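As a toy illustration of that learning process, here is a minimal plain-NumPy sketch: a tiny two-layer network whose connection weights shift as it is repeatedly exposed to data (the XOR problem). Every detail below is an illustrative assumption, nothing like the scale of Google's actual system.

```python
# A toy two-layer neural network: exposing it to data nudges the
# "strength" of its connections (the weights) until it reacts
# correctly to inputs of a particular kind (here, the XOR problem).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network's current response
    # Backpropagation: each weight shifts in proportion to how much
    # it contributed to the error on this batch of data.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# After training, the responses should settle near [0, 1, 1, 0].
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2).ravel())
```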

Neural networks have been used for decades in areas where machine learning is applied, such as chess-playing software or face detection. The neural networks that come out of Google's new training process are more flexible.

System Automatically Recognizes Baked Goods Without Labels or RFID.

July 25th, 2012 by Paul Strauss

In the not-too-distant future, technology might let you check out your purchases without any need to scan tags, enter prices, or even read RFID tags. Thanks to visual recognition technology, items being purchased could be automatically identified just by the way they look.

A trial is underway at a bakery in Tokyo using Brain Corporation's object recognition technology to automatically ring up items for purchase simply by setting them onto a tray. A camera grabs an image of the items and checks a database to match the baked goods with their pricing. While I like the general concept, I could see problems with the system when multiple items look the same on the outside but differ on the inside (e.g., different memory configurations of an iPhone, or in this case a cherry croissant vs. a chocolate one).

[via DigInfo TV]
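A rough sketch of how such a checkout match could work, assuming a hypothetical feature extractor and a tiny made-up price database; nearest-neighbor matching stands in here for whatever proprietary recognition the real system uses.

```python
# Toy sketch: ring up tray items by matching image feature vectors
# against a price database via nearest-neighbor search. The embed()
# step is a stand-in for a real recognition model.
import numpy as np

PRICE_DB = {  # item name -> (reference feature vector, price in yen)
    "croissant": (np.array([0.9, 0.1, 0.2]), 180),
    "melon pan": (np.array([0.2, 0.8, 0.3]), 150),
    "anpan":     (np.array([0.1, 0.3, 0.9]), 160),
}

def embed(image):
    """Stand-in for a real feature extractor (CNN, bag-of-words, ...)."""
    return np.asarray(image, dtype=float)

def ring_up(tray_images):
    total = 0
    for img in tray_images:
        feat = embed(img)
        # Nearest neighbor in feature space picks the closest known item.
        name, (_, price) = min(
            PRICE_DB.items(),
            key=lambda kv: np.linalg.norm(kv[1][0] - feat),
        )
        print(f"{name}: {price} yen")
        total += price
    return total

print("total:", ring_up([[0.85, 0.15, 0.25], [0.15, 0.25, 0.95]]), "yen")
```

Note the failure mode called out above: two visually identical items map to nearly the same feature vector, so appearance alone cannot separate them.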

Bicycle Created from Recycled Cardboard for $9.

Computers and smartphones will soon have five senses, IBM predicts.

Thanks to sensors that detect gases and certain molecules in the air, a smartphone will be able to "smell" our breath and determine, for example, whether we are suffering from a cold. (Image credit: IBM)

Computers and smartphones endowed with the five human senses? Science-fiction books and films have imagined it, and IBM says it is coming soon, within five years. For the seventh consecutive year, the American giant has published its IBM 5 in 5 report, which lists five innovations expected to transform how we use computers and mobile phones over the next five years. These predictions are not pulled out of a hat; they come from "emerging technologies out of IBM's research and development laboratories around the world that make these transformations possible."

The report covers reproducing the sense of touch with a computer, interpreting the meaning of an image, good taste, and a keen nose.

Seeing around corners: laser system reconstructs objects hidden from sight.

Seeing around corners (credit: R. Ramesh)

Researchers from MIT, Harvard University, the University of Wisconsin, and Rice University combined bouncing photons with advanced optics to "see" what's hidden around a corner using "time-of-flight imaging."

This technique may one day prove invaluable in disaster recovery situations, as well as in noninvasive biomedical imaging applications. "Imagine photons as particles bouncing right off the walls and down a corridor and around a corner — the ones that hit an object are reflected back. When this happens, we can use the data about the time they take to move around and bounce back to get information about geometry," explains Otkrist Gupta, an MIT graduate student and lead author of the Optics Express paper.

The laser beam (red) is split to provide a synchronization signal for the camera (dotted red) and an attenuated reference pulse (orange) to compensate for synchronization drifts and laser intensity fluctuations.
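A back-of-the-envelope sketch of that idea: each measured travel time constrains the hidden point to an ellipse whose foci are the laser spot and the wall patch the camera observes, and intersecting several such ellipses localizes it. The geometry, tolerances, and grid below are illustrative assumptions, far simpler than the paper's reconstruction algorithm.

```python
# Toy backprojection for "looking around corners": a photon's travel
# time says the hidden point lies on an ellipse whose foci are the
# laser spot and the wall patch seen by the camera. Intersecting the
# ellipses from several laser positions localizes the hidden point.
import numpy as np

C = 3e8                                 # speed of light, m/s
hidden = np.array([0.3, 0.7])           # ground truth (to be recovered)
camera_spot = np.array([0.5, 0.0])      # wall patch imaged by the camera
laser_spots = [np.array([x, 0.0]) for x in (0.0, 0.25, 0.75)]

xs, ys = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0.01, 1, 201))
votes = np.zeros_like(xs)

for L in laser_spots:
    # Simulated measurement: wall (laser spot) -> hidden point -> wall.
    t = (np.linalg.norm(hidden - L) + np.linalg.norm(hidden - camera_spot)) / C
    # Vote for every candidate grid cell consistent with that time.
    d = (np.hypot(xs - L[0], ys - L[1])
         + np.hypot(xs - camera_spot[0], ys - camera_spot[1]))
    votes += np.abs(d / C - t) < 3e-11   # ~9 mm path-length tolerance

iy, ix = np.unravel_index(votes.argmax(), votes.shape)
print("estimated hidden point:", (xs[iy, ix], ys[iy, ix]))  # near (0.3, 0.7)
```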

CORNAR: Looking Around Corners - Camera Culture Group, MIT Media Lab

Team:
Ramesh Raskar, Associate Professor, MIT Media Lab; Project Director (raskar(at)mit.edu)
Moungi G. Bawendi, Professor, Dept. of Chemistry, MIT
Andreas Velten, Postdoctoral Associate, MIT Media Lab; Lead Author (velten(at)mit.edu)
Christopher Barsi, Postdoctoral Associate, MIT Media Lab
Everett Lawson, MIT Media Lab
Nikhil Naik, Research Assistant, MIT Media Lab
Otkrist Gupta, Research Assistant, MIT Media Lab
Thomas Willwacher, Harvard University
Ashok Veeraraghavan, Rice University
Amy Fritz, MIT Media Lab
Chinmaya Joshi, MIT Media Lab and COE-Pune

Current Collaborators:
Diego Gutierrez, Universidad de Zaragoza
Di Wu, MIT Media Lab and Tsinghua U.
Matt O'Toole, MIT Media Lab and U. of Toronto
Belen Masia, MIT Media Lab and Universidad de Zaragoza
Kavita Bala, Cornell U.
Shuang Zhao, Cornell U.

Paper: A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, "Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging," Nature Communications 3, 745 (2012).

Earlier Work: Time-Resolved Imaging.

Autonomous MIT Plane Turns a Parking Garage Into Its Own Personal Slalom Course.