
Meet the algorithm that can learn “everything about anything”

The most recent advances in artificial intelligence research are pretty staggering, thanks in part to the abundance of data available on the web. We’ve covered how deep learning is helping create self-teaching and highly accurate systems for tasks such as sentiment analysis and facial recognition, but there are also models that can solve geometry and algebra problems, predict whether a stack of dishes is likely to fall over and (from the team behind Google’s word2vec) understand entire paragraphs of text. (Hat tip to frequent commenter Oneasum for pointing out all these projects.) One of the more interesting projects is a system called LEVAN, short for Learn EVerything about ANything, created by researchers at the Allen Institute for Artificial Intelligence and the University of Washington. What that means, essentially, is that LEVAN uses the web to learn everything it needs to know.
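
To make the paragraph-vector idea mentioned above a bit more concrete, here is a minimal sketch using gensim’s Doc2Vec, an open-source implementation of the paragraph-vector technique from the word2vec team’s follow-up work. This is not the LEVAN system itself, and the toy corpus and hyperparameters are purely illustrative assumptions.

```python
# Minimal, illustrative sketch of paragraph vectors ("understanding
# entire paragraphs of text") using gensim's Doc2Vec. Not the LEVAN
# system; corpus and hyperparameters are placeholder assumptions.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus: each paragraph becomes one tagged training document.
paragraphs = [
    "deep learning systems learn features from raw data",
    "web scale image collections teach models visual concepts",
    "word embeddings capture semantic similarity between terms",
]
corpus = [
    TaggedDocument(words=text.split(), tags=[i])
    for i, text in enumerate(paragraphs)
]

# Train a small model; vector_size and epochs are arbitrary for the demo.
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

# Infer a vector for an unseen paragraph, then find the closest training one.
vec = model.infer_vector("models learn concepts from web images".split())
print(model.dv.most_similar([vec], topn=1))
```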

Twitter Data Analytics, published by Springer. Shamanth Kumar, Fred Morstatter, and Huan Liu, Data Mining and Machine Learning Lab, School of Computing, Informatics, and Decision Systems Engineering, Arizona State University. Social media has become a major platform for information sharing.

Artificial Intelligence and Machine Learning

A Gaussian Mixture Model Layer Jointly Optimized with Discriminative Features within a Deep Neural Network Architecture. Ehsan Variani, Erik McDermott, Georg Heigold. ICASSP, IEEE (2015).

Adaptation Algorithm and Theory Based on Generalized Discrepancy. Corinna Cortes, Mehryar Mohri, Andrés Muñoz Medina. Proceedings of the 21st ACM Conference on Knowledge Discovery and Data Mining (KDD 2015).

Adding Third-Party Authentication to Open edX: A Case Study. John Cox, Pavel Simakov. Proceedings of the Second (2015) ACM Conference on Learning @ Scale, ACM, New York, NY, USA, pp. 277-280.

An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections. Yu Cheng, Felix X.

The Future of Machine Intelligence

Ben Goertzel, March 20, 2009

In early March 2009, 100 intellectual adventurers journeyed from various corners of Europe, Asia, America and Australasia to the Crowne Plaza Hotel in Arlington, Virginia, to take part in the Second Conference on Artificial General Intelligence, AGI-09: a conference aimed explicitly at the grand goal of the AI field, the creation of thinking machines with general intelligence at the human level and ultimately beyond. While the majority of the crowd hailed from academic institutions, major firms like Google, GE, AT&T and Autodesk were also represented, along with a substantial contingent of entrepreneurs involved with AI startups and independent researchers. Since I chaired the conference and played a large role in its organization, along with a number of extremely competent and passionate colleagues, my opinion must be considered rather subjective ... but, be that as it may, my strong feeling is that the conference was an unqualified success!

Machine Learning (Theory)

Face mask detection with ML/AI on Cisco industrial hardware - Cisco Blogs

Imagine you’ve been asked to create an architecture that can apply analytics to very voluminous data such as video streams generated from cameras. Given the volume and sensitivity of the data, you don’t want to send it off-premises for analysis, and the anticipated cost of centralizing the data might invalidate the business value assessment for your use case. You could apply machine learning (ML) or artificial intelligence (AI) at the edge, but only if you can make it work with the available compute resources. This is the exact challenge I recently tackled with the help of my colleague, Michael Wielpuetz. It’s not always easy, or even possible, to change or scale the available compute resources in a typical edge scenario. To provide food for thought, to incubate new ideas, and to prove what is possible, Michael Wielpuetz and I started in our free time to minimize the resource requirements of an exemplary setup, comparing the standard Docker images against our slimmed-down base image.
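
As a sketch of what resource-light inference at the edge can look like, the snippet below uses the interpreter-only tflite_runtime package instead of full TensorFlow, which keeps the container footprint small. The model file name, input handling, and label meanings are hypothetical assumptions, not the authors’ actual Cisco setup.

```python
# Sketch of resource-conscious edge inference, in the spirit of the
# setup described above. tflite_runtime is a small interpreter-only
# package, chosen here to keep the image footprint down. The model
# file and label set are hypothetical placeholders.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="mask_detector.tflite")  # hypothetical model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(frame: np.ndarray) -> int:
    """Run one frame, already resized/normalized to the model's input shape."""
    interpreter.set_tensor(inp["index"], frame[np.newaxis, ...].astype(np.float32))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(scores))  # e.g. 0 = mask, 1 = no mask (assumed labels)
```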

Perspectives on Self-serve Machine Learning for Rapid Insights in Healthcare

BigML users keep inspiring us with their creativity every day. Many take on Machine Learning with little to no background or formal education in the field. Why? Because they have access to relevant data, and they are smart professionals with the kind of intuition that only years of study in a given field can bring about. We spoke with long-time BigML user Dr. … Can you please tell us about your background and how you first developed an interest in Machine Learning? We hope you enjoyed this interview and found something useful to apply directly to your projects.

Rewriting the rules of machine-generated art | MIT News

Horses don’t normally wear hats, and deep generative models, or GANs, don’t normally follow rules laid out by human programmers. But a new tool developed at MIT lets anyone go into a GAN and tell the model, like a coder, to put hats on the heads of the horses it draws. In a new study appearing at the European Conference on Computer Vision this month, researchers show that the deep layers of neural networks can be edited, like so many lines of code, to generate surprising images no one has seen before. Generative adversarial networks pit two neural networks against each other to create hyper-realistic images and sounds, but the new study suggests that big datasets are not essential to changing what they produce. “GANs are incredible artists, but they’re confined to imitating the data they see,” says the study’s lead author, David Bau, a PhD student at MIT. “We’re like prisoners to our training data.” Editing a model’s layers directly can override the rules it has internalized; probing one such constraint, Bau recalls, “It had some rule that seemed to say, ‘doors don’t go there.’”
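
To illustrate the general idea of editing a trained generator’s layers directly, rather than retraining it, here is a toy PyTorch sketch. It is not the paper’s actual rewriting method (which solves for a targeted weight update from user-selected regions); the miniature generator and the specific edits are made-up assumptions.

```python
# Toy illustration of editing a trained generator's layers in place,
# without retraining. NOT the paper's actual "model rewriting" method;
# the tiny generator and the edits below are made up for the demo.
import torch
import torch.nn as nn

gen = nn.Sequential(               # stand-in for a pretrained generator
    nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1),
    nn.ReLU(),
    nn.ConvTranspose2d(8, 3, 4, stride=2, padding=1),
    nn.Tanh(),
)

z = torch.randn(1, 16, 8, 8)       # fixed latent input
before = gen(z)

# "Edit" a deep layer: overwrite part of its weight tensor so a chosen
# channel now contributes differently. The paper derives such updates
# from user-specified context/target examples instead of by hand.
with torch.no_grad():
    gen[2].weight[0].mul_(0.0)     # erase one input channel's influence
    gen[2].weight[1].add_(0.1)     # nudge another channel's contribution

after = gen(z)
print((before - after).abs().max())  # same z, different output: the rule changed
```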

Machine learning and deep learning optimize enterprise data

Invisibly, artificial intelligence and its subfields, machine learning and deep learning, are improving the performance of consumer applications (selfies, voice recognition) and professional software (image and text analysis). According to a McKinsey study, many companies are seeing measurable benefits from their AI deployments; some professionals even report revenue increases of more than 10%. As subfields of AI, machine learning and deep learning hold great promise. One such solution is used by EasyJet to optimize the sale of products on board its aircraft, especially drinks and meals. Deep learning, for its part, goes further than machine learning in recognizing complex objects such as images, handwriting, speech and language. “ML and DL are spread across a multitude of applications.”

What is Machine Learning?

Still confusing to many people, machine learning is a modern science of discovering repetitions (patterns) in one or more streams of data and deriving predictions from them based on statistics. In plain terms, machine learning builds on data mining, using pattern recognition to deliver predictive analytics. The earliest machine learning algorithms are nothing new: some were designed as early as the 1950s, the best known among them being the Perceptron. Machine learning reveals its full potential in situations where insights (trends) must be spotted across the vast and varied data sets known as Big Data. For analyzing such volumes of data, machine learning proves far more effective, in both speed and accuracy, than traditional methods.
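
As a minimal illustration of that learn-patterns-then-predict loop, the sketch below trains scikit-learn’s Perceptron, the classic algorithm named above, on toy data; the data set and train/test split are illustrative assumptions.

```python
# Minimal sketch of the pattern-finding-for-prediction loop described
# above, using scikit-learn's Perceptron. The toy data is an assumption.
import numpy as np
from sklearn.linear_model import Perceptron

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                # 200 points, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # a simple linear "pattern"

clf = Perceptron().fit(X[:150], y[:150])     # learn the pattern from data
print("held-out accuracy:", clf.score(X[150:], y[150:]))  # predict on new data
```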
