Machine Learning (Theory)
This post is a (near) transcript of a talk that I gave at the ICML 2013 Workshop on Peer Review and Publishing Models. Although there’s a PDF available on my website, I’ve chosen to post a slightly modified version here as well in order to better facilitate discussion.

Disclaimers and Context

I want to start with a couple of disclaimers and some context. First, I want to point out that although I’ve read a lot about double-blind review, this isn’t my research area and the research discussed in this post is not my own. As a result, I probably can’t answer super detailed questions about these studies.

http://hunch.net/

Artificial Intelligence and Machine Learning

A Gaussian Mixture Model Layer Jointly Optimized with Discriminative Features within A Deep Neural Network Architecture. Ehsan Variani, Erik McDermott, Georg Heigold. ICASSP, IEEE (2015).

Adaptation algorithm and theory based on generalized discrepancy. Corinna Cortes, Mehryar Mohri, Andrés Muñoz Medina. Proceedings of the 21st ACM Conference on Knowledge Discovery and Data Mining (KDD 2015).

Adding Third-Party Authentication to Open edX: A Case Study. John Cox, Pavel Simakov. Proceedings of the Second (2015) ACM Conference on Learning @ Scale, ACM, New York, NY, USA, pp. 277-280.

An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections. Yu Cheng, Felix X.

A new digital ecology is evolving, and humans are being left behind

This is an excellent point. You mean something similar to Valve’s fee on Steam’s marketplace? They take a 10% cut of every transaction, no matter how big or small. Unfortunately, it (the Tobin tax mentioned in the article) is being resisted by some very powerful people. Not really, that’s more akin to the capital gains tax already in place.

As Machines Get Smarter, Evidence They Learn Like Us

The brain performs its canonical task, learning, by tweaking its myriad connections according to a secret set of rules. To unlock these secrets, scientists 30 years ago began developing computer models that try to replicate the learning process. Now, a growing number of experiments are revealing that these models behave strikingly similarly to actual brains when performing certain tasks. Researchers say the similarities suggest a basic correspondence between the brains’ and the computers’ underlying learning algorithms. The algorithm used by a computer model called the Boltzmann machine, invented by Geoffrey Hinton and Terry Sejnowski in 1983, appears particularly promising as a simple theoretical explanation of a number of brain processes, including development, memory formation, object and sound recognition, and the sleep-wake cycle. Multilayer neural networks consist of layers of artificial neurons with weighted connections between them.
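To make the kind of learning rule the excerpt describes concrete, here is a minimal sketch in Python of a Boltzmann machine’s stochastic unit update and its contrastive Hebbian weight update. This is not code from the article or from Hinton and Sejnowski; the network size, temperature, and learning rate are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)

n = 8                                    # number of units (arbitrary for this sketch)
W = rng.normal(0.0, 0.1, (n, n))
W = (W + W.T) / 2                        # Boltzmann machines use symmetric weights
np.fill_diagonal(W, 0.0)                 # and no self-connections
s = rng.integers(0, 2, n).astype(float)  # binary unit states

def gibbs_step(s, W, T=1.0):
    # Each unit turns on with probability sigmoid(weighted input / temperature).
    for i in range(len(s)):
        activation = W[i] @ s
        p_on = 1.0 / (1.0 + np.exp(-activation / T))
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

def weight_update(W, s_wake, s_sleep, lr=0.01):
    # Strengthen connections between units that are co-active while data is
    # clamped (the "wake" phase) and weaken those co-active while the network
    # runs freely (the "sleep" phase); learning stops when the phases agree.
    W += lr * (np.outer(s_wake, s_wake) - np.outer(s_sleep, s_sleep))
    np.fill_diagonal(W, 0.0)
    return W

The wake/sleep asymmetry in this update rule may be what the article alludes to when it connects the model to the sleep-wake cycle.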

QUANTICOL

QUANTICOL is a European research initiative involving the University of Edinburgh, Scotland; the Istituto di Scienza e Tecnologie dell’Informazione “A. Faedo”, Italy; Ludwig-Maximilians-Universität München, Germany; École Polytechnique Fédérale de Lausanne, Switzerland; IMT Lucca, Italy; and the University of Southampton. The QUANTICOL project is a member of Fundamentals of Collective Adaptive Systems (FOCAS), a Future and Emerging Technologies Proactive Initiative funded by the European Commission under FP7. The main objective of the QUANTICOL project is the development of an innovative formal design framework that provides a specification language for collective adaptive systems (CAS) and a large variety of tool-supported, scalable analysis and verification techniques.

The Future of Machine Intelligence
Ben Goertzel, March 20, 2009

In early March 2009, 100 intellectual adventurers journeyed from various corners of Europe, Asia, America and Australasia to the Crowne Plaza Hotel in Arlington, Virginia, to take part in the Second Conference on Artificial General Intelligence, AGI-09: a conference aimed explicitly at the grand goal of the AI field, the creation of thinking machines with general intelligence at the human level and ultimately beyond. While the majority of the crowd hailed from academic institutions, major firms like Google, GE, AT&T and Autodesk were also represented, along with a substantial contingent of entrepreneurs involved with AI startups, and independent researchers. Since I was the chair of the conference and played a large role in its organization – along with a number of extremely competent and passionate colleagues – my opinion must be considered rather subjective ... but, be that as it may, my strong feeling is that the conference was an unqualified success!

UN climate chief says the science is clear: there is no space for new coal

The UN climate chief, Christiana Figueres, said there was “no space” for new coal developments and stressed the benefits of ambitious renewable energy targets after a meeting with representatives from seven Australian governments. At the meeting in Adelaide, organised by the South Australian government, federal, state and territory administrations agreed to work more closely to drive an uptake in renewable energy, coordinate energy-efficiency schemes and help communities adapt to climate change. Figueres, the executive secretary of the United Nations framework convention on climate change, urged the states and territories to work with the federal government to help deliver a “strong” global agreement at key climate talks in Paris in December.

Baidu says its massive deep-learning system is nearly complete

Chinese search engine company Baidu is working on a massive computing cluster for deep learning that will be 100 times larger than the cat-recognizing system Google famously built in 2012 and that should be complete in six months, Baidu Chief Scientist and machine learning expert Andrew Ng told Bloomberg News in an article published on Wednesday. The size Ng is referring to is in terms of neural connections, not sheer server or node count, and will be accomplished via heavy use of graphics processing units, or GPUs. That Baidu is at work on such a system is hardly surprising: Ng actually helped build that system at Google (as part of a project dubbed Google Brain) and has been one of the leading voices in the deep learning community for years. He joined Baidu in May, working out of the company’s Silicon Valley office, in order to help advance its capabilities in artificial intelligence.

[Photo caption: Andrew Ng shows off some of Baidu’s deep learning applications during a July robotics conference.]
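The article’s point that “size” means neural connections rather than servers is easy to make concrete: in a fully connected feedforward network, the connection count is the product of adjacent layer widths, summed over layers. A small Python sketch; the layer widths below are invented for illustration and are not Baidu’s architecture.

def count_connections(layer_sizes):
    # Number of weights in a fully connected feedforward network: each
    # adjacent pair of layers contributes width_i * width_{i+1} edges.
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

print(count_connections([1000, 4000, 4000, 1000]))  # 24000000

Even modest layer widths multiply into tens of millions of connections, which is why connection counts grow much faster than machine counts.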

Google preps wave of machine learning apps

Google is preparing to unleash a wave of apps that get intelligence from its mammoth machine learning models. The apps will all rely on the neural networks Google has been developing internally to allow its systems to automatically classify information that has traditionally been tough for computers to parse.

The Python Tutorial

Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python’s elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms. (A short code example follows after these excerpts.)

How to Train Your Brain to Stay Focused

As an entrepreneur, you have a lot on your plate. Staying focused can be tough with a constant stream of employees, clients, emails, and phone calls demanding your attention. Amid the noise, understanding your brain’s limitations and working around them can improve your focus and increase your productivity.
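Returning to the Python Tutorial excerpt: here is a small illustration of the high-level data structures and object-oriented style it mentions. The class and sample text are invented for this sketch, not taken from the tutorial.

from collections import Counter

class WordTally:
    # A tiny class built on Counter, a dict subclass for counting hashables.
    def __init__(self):
        self.counts = Counter()

    def add(self, text):
        # Split on whitespace and tally each lowercased word.
        self.counts.update(text.lower().split())

    def top(self, n=3):
        return self.counts.most_common(n)

tally = WordTally()
tally.add("machine learning models learn like brains")
tally.add("learning rules shape machine behavior")
print(tally.top())  # [('machine', 2), ('learning', 2), ('learn', 1)]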

Fixed vs. Growth: The Two Basic Mindsets That Shape Our Lives

“If you imagine less, less will be what you undoubtedly deserve,” Debbie Millman counseled in one of the best commencement speeches ever given, urging: “Do what you love, and don’t stop until you get what you love. Work as hard as you can, imagine immensities…” Far from Pollyanna platitude, this advice actually reflects what modern psychology knows about how belief systems about our own abilities and potential fuel our behavior and predict our success. Much of that understanding stems from the work of Stanford psychologist Carol Dweck, synthesized in her remarkably insightful Mindset: The New Psychology of Success (public library) — an inquiry into the power of our beliefs, both conscious and unconscious, and how changing even the simplest of them can have profound impact on nearly every aspect of our lives. One of the most basic beliefs we carry about ourselves, Dweck found in her research, has to do with how we view and inhabit what we consider to be our personality.

Gandhi's 10 Rules for Changing the World, by Henrik Edberg

“You must not lose faith in humanity. Humanity is an ocean; if a few drops of the ocean are dirty, the ocean does not become dirty.”

“The difference between what we do and what we are capable of doing would suffice to solve most of the world’s problems.”

“If I had no sense of humor, I would long ago have committed suicide.”
