AI & Machine-learning

Your smartphone’s AI algorithms could tell if you are depressed. Depression is a huge problem for millions of people, and it is often compounded by poor mental-health support and stigma. Early diagnosis can help, but many mental disorders are difficult to detect. The machine-learning algorithms that let smartphones identify faces or respond to our voices could help provide a universal and low-cost way of spotting the early signs and getting treatment where it’s needed. In a study carried out by a team at Stanford University, scientists found that face and speech software can identify signals of depression with reasonable accuracy. The researchers fed video footage of depressed and non-depressed people into a machine-learning model that was trained to learn from a combination of signals: facial expressions, voice tone, and spoken words.
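The study's exact model isn't described in this excerpt; as a minimal sketch of the fusion idea (the feature extractors, dimensions, and data below are hypothetical stand-ins), per-modality feature vectors for face, voice, and transcript can be concatenated and fed to a single classifier.

    # Minimal sketch of multimodal "early fusion" for depression screening.
    # All features here are synthetic stand-ins for real extractor outputs.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 200                                  # interview clips
    face = rng.normal(size=(n, 32))          # e.g. facial-expression statistics
    voice = rng.normal(size=(n, 16))         # e.g. voice-tone / prosody features
    words = rng.normal(size=(n, 64))         # e.g. transcript text features
    y = rng.integers(0, 2, size=n)           # depressed / not-depressed labels

    X = np.hstack([face, voice, words])      # combine the three signals
    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, X, y, cv=5).mean())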

The data was collected from interviews in which a patient spoke to an avatar controlled by a physician. In testing, the model detected whether someone was depressed more than 80% of the time.

If Google Assistant or Siri aren't smart enough for you, you can build your own AI. Google CEO Sundar Pichai believes that we are moving to an "AI-first" world, one in which we will interact with personal digital assistants on a range of platforms, including Google's new intelligent speaker, Google Home, and other Google-powered devices.

Google's latest personal digital assistant, Google Assistant, joins a group of similar technologies from Apple, Amazon and Microsoft. Apple's Siri has been around for nearly five years. In that time, Siri has learned to do more things on a wider range of platforms, and it is now available on all of Apple's platforms. After five years, though, is Siri any smarter? Unfortunately, Siri hasn't actually got that much smarter.

Google's new assistant has arguably improved on Siri in its ability to handle queries that tie into search in general and into software like Maps. To take a very simple example, Google Assistant can answer "What will the temperature be tomorrow?" by drawing directly on Google's search and weather data.

There's a way to turn almost any object into a computer – and it could cause shockwaves in AI. The latest chip in the iPhone 7 has 3.3 billion transistors packed into a piece of silicon around the size of a small coin. But the trend for smaller, increasingly powerful computers could be coming to an end.

Silicon-based chips are rapidly reaching the point at which the laws of physics prevent them being made any smaller. There are also some important limitations to what silicon-based devices can do, which makes a strong argument for looking at other ways to power computers. Perhaps the best-known alternative researchers are exploring is the quantum computer, which manipulates the quantum-mechanical properties of matter rather than switching transistors the way traditional digital machines do.

But there is also the possibility of using alternative materials – potentially any material or physical system – as computers that perform calculations without the need to shuttle electrons around the way silicon chips do. And it turns out these could be even better for developing artificial intelligence than existing computers.

First demonstration of brain-inspired device to power artificial systems. New research, led by the University of Southampton, has demonstrated that a nanoscale device called a memristor could be used to power artificial systems that mimic the human brain. Artificial neural networks (ANNs) exhibit learning abilities and can perform tasks that are difficult for conventional computing systems, such as pattern recognition, online learning and classification. Practical ANN implementations are currently hampered by the lack of efficient hardware synapses, a key component that every ANN requires in large numbers.

In the study, published in Nature Communications, the Southampton team experimentally demonstrated an ANN that used memristor synapses supporting sophisticated learning rules to carry out reversible learning of noisy input data. Acting like synapses in the brain, the metal-oxide memristor array was able to learn and re-learn input patterns in an unsupervised manner within a probabilistic winner-take-all (WTA) network.
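The paper's device-level details aren't reproduced here; as a rough software illustration of unsupervised winner-take-all learning (the learning rate, sizes, and data are hypothetical stand-ins, and the real work implements the synapses in metal-oxide memristors), the sketch below lets the neuron with the strongest response claim each input and nudges only that winner's weights toward it.

    # Sketch of unsupervised winner-take-all (WTA) learning - illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    n_inputs, n_neurons, lr = 8, 3, 0.1
    W = rng.random((n_neurons, n_inputs))          # "synaptic" weights
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # normalise each neuron

    for _ in range(500):
        x = rng.random(n_inputs)                   # noisy input pattern
        winner = np.argmax(W @ x)                  # strongest-responding neuron wins
        W[winner] += lr * (x - W[winner])          # only the winner's synapses adapt
        W[winner] /= np.linalg.norm(W[winner])

Over many presentations, each neuron's weight vector drifts toward a cluster of the inputs, which is the kind of unsupervised pattern learning the study demonstrates in hardware.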

RoboVote helps groups make decisions using AI-driven methods. A contentious presidential election can raise questions about whether the voting system produces the best possible candidates. While nothing is going to change the way Americans vote, a new online service, RoboVote.org, enables anyone to use state-of-the-art voting methods to make optimal group decisions. RoboVote, a project of researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Carnegie Mellon University, doesn't just tabulate votes, as any number of online survey tools already do. Rather, the site is driven by artificial intelligence and draws on decades, if not centuries, of social choice research into how opinions, preferences and interests can best be combined to reach a collective decision.

"We're leveraging the latest work in optimization and AI to help people make decisions in their daily lives," said Ariel Proccacia, assistant professor of computer science at Carnegie Mellon. AI researchers to see if they can push some boundaries with StarCraft II. (Tech Xplore)—Google's artificial intelligence group DeepMind is teaming up with the makers of the StarCraft video game.

AI researchers look to push some boundaries with StarCraft II. (Tech Xplore)—Google's artificial intelligence group DeepMind is teaming up with the makers of the StarCraft video game. Scientists working on artificial intelligence systems stand to gain a very challenging playground. "For almost 20 years, the StarCraft game series has been widely recognized as the pinnacle of 1v1 competitive video games, and among the best PC games of all time," said Oriol Vinyals, research scientist, in the DeepMind blog. The blog announced Friday that StarCraft II will be released as an AI research environment.

DeepMind and Blizzard are the two parties behind this release. Jane Wakefield, BBC, described the game: "StarCraft II, made by developer Blizzard, is a real-time strategy game in which players control one of three warring factions - humans, the insect-like Zerg, or aliens known as the Protoss. Players' actions are governed by the in-game economy, and minerals and gas must be gathered in order to produce new buildings and units."

Understanding the four types of AI, from reactive robots to self-aware beings. The common, and recurring, view of the latest breakthroughs in artificial intelligence research is that sentient and intelligent machines are just on the horizon. Machines understand verbal commands, distinguish pictures, drive cars and play games better than we do.

How much longer can it be before they walk among us? The new White House report on artificial intelligence takes an appropriately skeptical view of that dream. It says the next 20 years likely won't see machines "exhibit broadly-applicable intelligence comparable to or exceeding that of humans," though it does go on to say that in the coming years, "machines will reach and exceed human performance on more and more tasks." But its assumptions about how those capabilities will develop missed some important points. As an AI researcher, I'll admit it was nice to have my own field highlighted at the highest level of American government, but the report focused almost exclusively on what I call "the boring kind of AI."

Cow goes moo: Artificial intelligence-based system associates images with sounds.

The cow goes "moo." The pig goes "oink." A child can learn from a picture book to associate images with sounds, but building a computer vision system that can train itself isn't as simple. Using artificial intelligence techniques, however, researchers at Disney Research and ETH Zurich have designed a system that can automatically learn the association between images and the sounds they could plausibly make. Given a picture of a car, for instance, their system can automatically return the sound of a car engine.
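The Disney/ETH system's architecture isn't given in this excerpt; one common way to realize image-to-sound association (a sketch under that assumption, with all embeddings as random stand-ins) is to map images and audio clips into a shared embedding space and return the nearest sound.

    # Sketch of cross-modal retrieval: return the sound whose embedding is
    # closest to the image's embedding. (Embeddings here are stand-ins; real
    # systems learn them jointly, e.g. from video soundtracks.)
    import numpy as np

    rng = np.random.default_rng(2)
    sound_names = ["car engine", "dog bark", "breaking dish"]
    sound_embs = rng.normal(size=(3, 128))                  # pretend audio embeddings
    image_emb = sound_embs[0] + 0.1 * rng.normal(size=128)  # a "car" image embedding

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    best = max(range(len(sound_names)),
               key=lambda i: cosine(image_emb, sound_embs[i]))
    print(sound_names[best])  # -> "car engine"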

A system that knows the sound of a car, a splintering dish, or a slamming door might be used in a number of applications, such as adding sound effects to films or giving audio feedback to people with visual disabilities, noted Jean-Charles Bazin, associate research scientist at Disney Research. To solve this challenging task, the research team leveraged data from collections of videos.

NTechLab focusing on AI facial recognition capabilities. (Tech Xplore)—How far have technology experts gone in achieving software for facial recognition? Moscow-based NTechLab, a group that focuses on AI algorithms, has gone far. The company is made up of a team of experts in machine learning and deep learning. They have been at work on a facial recognition mission that has attracted great interest; their tool is effective, but it also raises concerns about privacy if it were abused.

Their algorithm can extract characteristic facial features, and it has become a hot topic. Luke Dormehl, a UK-based tech writer at Digital Trends, said in June that NTechLab may have stumbled upon one of the best facial recognition systems around. NTechLab was founded last year by Artem Kuharenko. The team uses techniques from artificial neural networks and machine learning to develop software products. "A face recognition system already developed by our lab has proved to be among the most accurate ones throughout the world," they stated.
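NTechLab's pipeline is proprietary and not described here; as a generic sketch of how embedding-based face recognition commonly works (the embeddings and threshold below are hypothetical), two photos are declared a match when their feature vectors are sufficiently close.

    # Sketch of embedding-based face matching - illustrative only.
    import numpy as np

    rng = np.random.default_rng(3)
    enrolled = rng.normal(size=128)                 # stored embedding of a known face
    probe = enrolled + 0.05 * rng.normal(size=128)  # embedding of a new photo

    THRESHOLD = 1.0                                 # hypothetical decision threshold
    distance = np.linalg.norm(enrolled - probe)
    print("match" if distance < THRESHOLD else "no match")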

Technique reveals the basis for machine-learning systems' decisions. In recent years, the best-performing systems in artificial-intelligence research have come courtesy of neural networks, which look for patterns in training data that yield useful predictions or classifications. A neural net might, for instance, be trained to recognize certain objects in digital images or to infer the topics of texts. But neural nets are black boxes: after training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it's sometimes possible to automate experiments that determine which visual features a neural net is responding to.

But text-processing systems tend to be more opaque. "In real-world applications, sometimes people really want to know why the model makes the predictions it does," says Tao Lei, an MIT graduate student in electrical engineering and computer science and first author on the new paper. "One major reason that doctors don't trust machine-learning methods is that there's no evidence."
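The MIT technique itself isn't detailed in this excerpt; as a simpler, generic way to probe why a text model predicts what it does (the toy model and words below are hypothetical), one can delete each word in turn and watch how the prediction moves.

    # Toy occlusion test: drop each word and measure the score change. Words
    # whose removal shifts the score most are the model's "evidence."
    # (Illustrative only; the MIT work learns to extract rationales directly.)

    def toy_score(words):
        positive = {"effective", "safe"}   # hypothetical sentiment lexicon
        negative = {"nausea", "risk"}
        return sum((w in positive) - (w in negative) for w in words)

    review = "drug was effective but caused nausea".split()
    base = toy_score(review)
    for i, word in enumerate(review):
        delta = base - toy_score(review[:i] + review[i + 1:])
        print(f"{word:>10}  influence {delta:+d}")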

Researcher uses internet robot to investigate creativity. Tom White, senior lecturer in Victoria's School of Design, has created Smilevector—a bot that examines images of people, then adds or removes smiles from their faces. "It has examined hundreds of thousands of faces to learn the difference between images, by finding relations and reapplying them," says Mr White. "When the computer finds an image, it looks to identify if the person is smiling or not. If there isn't a smile, it adds one, but if there is a smile then it takes it away. It represents these changes as an animation, which moves parts of the face around, including crinkling and widening the eyes." The bot can be used as a form of puppetry, says Mr White. "These systems are domain independent, meaning you can do it with anything—from manipulating images of faces to shoes to chairs." The creation of the bot was sparked by Mr White's research into creative intelligence.
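The article doesn't spell out Smilevector's internals; one plausible reading of "finding relations and reapplying them" (a sketch under that assumption, with encoder, decoder, and vectors as stand-ins) is attribute-vector arithmetic in a learned latent space: encode the face, move along a "smile" direction, and decode each step into an animation frame.

    # Sketch of latent attribute arithmetic - illustrative only; the encoder
    # and decoder below are stand-ins for trained image models.
    import numpy as np

    rng = np.random.default_rng(4)
    smile_direction = rng.normal(size=64)   # e.g. mean(smiling) - mean(neutral)

    def encode(image):                      # stand-in: image -> latent vector
        return rng.normal(size=64)

    def decode(latent):                     # stand-in: latent -> rendered image
        return latent

    z = encode("face.jpg")
    frames = [decode(z + t * smile_direction) for t in np.linspace(0, 1, 8)]
    # Stepping along the smile direction yields the frames of the animation.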

"Machine learning and artificial intelligence are starting to have implications for people in creative industries. DeepMind researchers boost AI learning speed with UNREAL agent. Preserving variety in subsets of unmanageably large data sets to aid machine learning. When data sets get too big, sometimes the only way to do anything useful with them is to extract much smaller subsets and analyze those instead. Those subsets have to preserve certain properties of the full sets, however, and one property that's useful in a wide range of applications is diversity. If, for instance, you're using your data to train a machine-learning system, you want to make sure that the subset you select represents the full range of cases that the system will have to confront. Last week at the Conference on Neural Information Processing Systems, researchers from MIT's Computer Science and Artificial Intelligence Laboratory and its Laboratory for Information and Decision Systems presented a new algorithm that makes the selection of diverse subsets much more practical.

"The other application where we actually use this thing is in large-scale learning. Thinking small The MIT researchers' algorithm begins, instead, with a small subset of the data, chosen at random. SAMIM. Gene Kogan. IXDS | Pre-Work Talk Berlin. At this Berlin Pre-Work Talk, we have the honor of hearing from two great speakers who are pushing the boundaries in their explorations of machine learning, and will open our eyes to new ways of applying it to design and creative fields. Visiting Berlin from New York for only a few weeks, Gene Kogan is an artist and programmer who is interested in generative systems, emerging technology and artificial intelligence. In his talk he’ll broadly present recent advancements in the field of machine learning, focusing on applications to art, design, and other creative disciplines.

IXDS | Pre-Work Talk Berlin. At this Berlin Pre-Work Talk, we have the honor of hearing from two great speakers who are pushing the boundaries in their explorations of machine learning, and who will open our eyes to new ways of applying it to design and creative fields. Visiting Berlin from New York for only a few weeks, Gene Kogan is an artist and programmer interested in generative systems, emerging technology and artificial intelligence; he writes code for live music, performance, and visual art. In his talk he'll broadly present recent advancements in the field of machine learning, focusing on applications to art, design, and other creative disciplines. He'll share recent works and explain their underlying science using a selection of interactive demos and educational resources from ml4a (the free book he's developing about machine learning for artists). Samim A.

Machine Learning for Artists.