
This Amazing App Sees Objects For Blind People.

Standing with my eyes closed in the bathroom, I aim my phone in the air. It vibrates, more and more, until it’s buzzing with excitement. "Toilet," it announces in a female robot voice. "Shower," it adds a few moments later. My phone is seeing for me, and it’s scary-remarkable, like the first time Dragon Dictate understood my speech, or Facebook picked my head out of a crowd. I’m using a free Android app called BlindTool. "A few years ago I worked with a blind programmer, and it really drew my attention to the needs of visually impaired people," says its creator, Cohen. Today, our computers have become absurdly good at identifying objects.
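The vibrate-then-announce behavior described above can be sketched as a simple confidence mapping. This is a hypothetical reconstruction for illustration, not BlindTool's actual code: stronger classifier confidence means stronger vibration, and only a sufficiently confident label gets spoken aloud.

```python
# Hypothetical sketch of BlindTool-style feedback. The image classifier
# itself is assumed to exist elsewhere and return (label, confidence).

def feedback(label, confidence, threshold=0.3):
    """Return (vibration_intensity, announcement) for one camera frame.

    vibration_intensity is scaled 0-255 (as in Android's Vibrator API);
    announcement is the label to speak aloud, or None if the classifier
    is not confident enough to announce anything.
    """
    intensity = int(min(confidence, 1.0) * 255)  # stronger buzz = more confident
    announcement = label if confidence >= threshold else None
    return intensity, announcement

# A weak guess only vibrates faintly; a confident one is also spoken.
print(feedback("toilet", 0.12))
print(feedback("toilet", 0.85))
```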

BlindTool, on the other hand, fits on a smartphone and runs as a completely self-contained app. As Cohen explains, this compromise allows BlindTool to run fast; in turn, however, it can be wrong. "There are a lot of specific things it’s trained on, like dog breeds, but Christmas tree is not on there." Indeed. [via Prosthetic Knowledge]

The best Twitter bots of 2015.

Every year around this time, Americans engage in a cultural battle. Some of us lament that the true meaning of Christmas has been subsumed by consumerism. Others rail that commercial products aren’t Christmas-y enough: that Starbucks’ holiday-themed cups didn’t bother to say “Merry Christmas” was enough to inspire viral internet outrage. These complainants fear either that religious diversity and political correctness, or that unchecked capitalism, have corrupted the pure, Christian roots of the holiday.

But both sides are wrong. The Christmas holiday has always been less about celebrating the birth of Jesus than about commerce, religious dominance, civic power, and heavy drinking. For centuries, in Europe and stateside, the Christmas spirit was actually about imbibing spirits. In fact, Christmas as we know it today is only a couple of centuries old, a byproduct of the industrial revolution and shifting social classes. Don’t be disgusted by that. December is a fine time for a party.

Home - Toronto Deep Learning Demos.

Microsoft’s new tech demo can look at a photograph of a face and tell you the emotions it shows. You can upload your own photos, and it will scan them, detect the faces, and tell you what your friends were really feeling when you tripped the camera shutter. Microsoft’s Project Oxford photo-research division has come up with all kinds of startling tools in the past, from an app that could tell your age to the amazing Photosynth, which can grab your vacation photos and arrange them into a 3-D model of the places you visited.

But emotion detection is a whole—perhaps spooky—new level. The system uses machine learning to process existing images, and builds its predictions on what it sees. And as it is fed more and more images, its suggestions improve. After processing, a face is broken down into its emotional constituents, with a score card presented for each person recognized in the photograph.
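The per-face scorecard idea can be illustrated in a few lines of Python. The JSON-like shape of `faces` below is an assumption for illustration only; the actual Project Oxford response format differs.

```python
# Hypothetical per-face emotion scores, as a detection service might
# return them after processing one photograph.
faces = [
    {"face_id": 0,
     "scores": {"happiness": 0.91, "neutral": 0.06, "surprise": 0.02,
                "sadness": 0.01}},
    {"face_id": 1,
     "scores": {"neutral": 0.55, "sadness": 0.40, "happiness": 0.05}},
]

def scorecard(face):
    """Rank a face's emotional constituents, strongest first."""
    ranked = sorted(face["scores"].items(), key=lambda kv: kv[1], reverse=True)
    top_emotion = ranked[0][0]
    return top_emotion, ranked

# Print one scorecard line per person recognized in the photo.
for face in faces:
    top, ranked = scorecard(face)
    print(f"face {face['face_id']}: {top}", ranked)
```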

The tech might also be deployed to process CCTV video footage.

TV networks seize the second screen with shoppable content.

TV executives know that when viewers are watching, they’re also scrolling on their smartphones: 87 percent of consumers use a second screen as they flip channels, according to an Accenture report. For Bravo Media, whose viewership is 66 percent female and 55 percent aged 25 to 54, the solution to engaging second-screen content was making its series shoppable. So it launched The Lookbook this week, a microsite with fashion and beauty content tied to the looks in its series “The Girlfriend’s Guide to Divorce.” “We get a lot of inquiries asking about the fashion and styles of what they see on air,” said Aimee Viles, senior vp of emerging media at Bravo parent NBC. “So we saw a great opportunity, and with how fast mobile is growing, now was the right time to build this destination.”

While viewers are watching an episode, the show will prompt them to check out the Lookbook on Bravo’s site, which sees about 6.1 million monthly uniques, a 57 percent year-over-year increase.

In a Big Network of Computers, Evidence of Machine Learning.

MOUNTAIN VIEW, Calif. — Inside Google’s secretive X laboratory, known for inventing self-driving cars and augmented-reality glasses, a small group of researchers began working several years ago on a simulation of the human brain.

There, Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own. Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: it looked for cats. The neural network taught itself to recognize cats, which is actually no frivolous activity. The research is representative of a new generation of computer science that is exploiting the falling cost of computing and the availability of huge clusters of computers in giant data centers.
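The key idea, learning structure from unlabeled data, can be shown at toy scale. The sketch below is a minimal linear autoencoder in NumPy, not Google's system: it compresses 16-pixel "images" to 4 features and learns to reconstruct them, with no human ever labeling an input.

```python
# Toy unsupervised learning: a one-hidden-layer linear autoencoder.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 16))            # 200 unlabeled "images" of 16 pixels

W1 = rng.normal(0, 0.1, (16, 4))     # encoder: 16 pixels -> 4 features
W2 = rng.normal(0, 0.1, (4, 16))     # decoder: 4 features -> 16 pixels

def loss(X, W1, W2):
    """Mean squared reconstruction error."""
    return np.mean((X @ W1 @ W2 - X) ** 2)

initial = loss(X, W1, W2)
lr = 0.02
for _ in range(1000):                 # plain gradient descent
    H = X @ W1                        # hidden features
    E = H @ W2 - X                    # reconstruction error
    gW2 = 2 * H.T @ E / len(X)        # gradient w.r.t. decoder weights
    gW1 = 2 * X.T @ (E @ W2.T) / len(X)  # gradient w.r.t. encoder weights
    W1 -= lr * gW1
    W2 -= lr * gW2

# Reconstruction error shrinks as the features are learned, without labels.
print(initial, loss(X, W1, W2))
```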

And then, of course, there are the cats. The Google research team that found them was led by the computer scientist Andrew Y. Ng.

Researchers Announce Advance in Image-Recognition Software.

MOUNTAIN VIEW, Calif. — Two groups of scientists, working independently, have created artificial-intelligence software capable of recognizing and describing the content of photographs and videos with far greater accuracy than ever before, sometimes even mimicking human levels of understanding. Until now, so-called computer vision has largely been limited to recognizing individual objects. The new software, described on Monday by researchers at Google and at Stanford, teaches itself to identify entire scenes: a group of young men playing Frisbee, for example, or a herd of elephants marching on a grassy plain.

The software then writes a caption in English describing the picture. Compared with human observations, the researchers found, the computer-written descriptions are surprisingly accurate. The advances may make it possible to better catalog and search the billions of images and hours of video available online, which are often poorly described and archived.

Neural network takes a stroll, describes what it sees in real time | The Daily Buzz.

An American artist and coder decided to test an image-recognition system, known as a neural network, by having it annotate a walk through Amsterdam in real time. Naturally, he published the results online. Using a tweaked program built by researchers from Stanford and Google, Kyle McDonald set out for a meander, which he filmed live from the webcam of his laptop.

His computer analyzed the footage it captured in real time, displaying its descriptions in a text scroll on the upper left-hand side of the screen. McDonald used an open-source program called NeuralTalk, which was introduced last year. While much of the annotation is accurate (a boat is sitting on the water near a dock), the system often makes mistakes.
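The overall loop is easy to picture: capture a frame, run it through the captioning network, and append the result to a scrolling log over the video. The sketch below stubs out the model with a hypothetical `describe()` function; in McDonald's setup, NeuralTalk supplies the real captions.

```python
# Rough sketch of a real-time captioning loop with a stubbed-out model.
from collections import deque

def describe(frame):
    """Stand-in for a NeuralTalk-style captioner (hypothetical)."""
    return f"a boat sitting on the water near a dock (frame {frame})"

def run(frames, scroll_len=3):
    """Caption each frame, keeping only the last few lines of the scroll."""
    scroll = deque(maxlen=scroll_len)   # newest caption last
    for frame in frames:
        scroll.append(describe(frame))  # in reality: overlay on the video
    return list(scroll)

print(run(range(5)))
```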