
The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe

In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. One by one, the abilities and techniques that humans once imagined were uniquely our own have begun to fall to the onslaught of ever more powerful machines. Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They’ve mastered the ancient game of Go and thrashed the best human players. But there is a problem: nobody has a good explanation for why these networks work as well as they do. Today that changes thanks to the work of Henry Lin at Harvard University and Max Tegmark at MIT. First, let’s set up the problem using the example of classifying a megabit grayscale image to determine whether it shows a cat or a dog. Such an image consists of a million pixels that can each take one of 256 grayscale values. In the language of mathematics, neural networks work by approximating complex mathematical functions with simpler ones, and Lin and Tegmark say they’ve worked out why that approximation succeeds.
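To get a feel for the scale of that classification problem, here is a quick back-of-the-envelope calculation. The pixel count and grayscale levels come from the excerpt above; the arithmetic is ours:

```python
import math

# A one-megapixel grayscale image whose pixels each take one of 256 values,
# as in the cat-vs-dog example above.
pixels = 1_000_000
levels = 256

# The number of distinct possible images is levels ** pixels, far too large
# to write out, so we report how many decimal digits it has instead.
digits = math.floor(pixels * math.log10(levels)) + 1
print(f"256^1,000,000 has {digits:,} decimal digits")
```

Any cat-vs-dog classifier is a function over this astronomically large input space, which is why it is surprising that comparatively small networks approximate it so well.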

The Case of Mistaken Identity – A Data Detective Analysis - insightfulaccountant.com For those of you who have read Insightful Accountant over the past years, you will remember my ‘Data Detective’ stories, which followed the pattern of a Sherlock Holmes-style mystery. This article takes a different approach by looking at the ‘detecting’ aspects of the problem rather than ‘the story.’ Instead of starting with our own mystery, we begin today by looking at one of Agatha Christie’s stories, brought to film one more time in 2017. I recently watched the remake of Murder on the Orient Express, the one in which Kenneth Branagh plays the lead character. This was certainly not the only case of mistaken identity in this detective story, as the majority of the train passengers had in one way or another masked their monikers. Once Poirot had figured out the true identities of all the passengers, it became quite simple to conclude, with the other available evidence, that the murder was indeed committed by 11 of the passengers plus the train’s conductor.

Machines That Learn Language More Like Kids Do Children learn language by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. Among other things, this helps children establish their language’s word order, such as where subjects and verbs fall in a sentence. In computing, learning language is the task of syntactic and semantic parsers. These systems are trained on sentences annotated by humans that describe the structure and meaning behind words. Parsers are becoming increasingly important for web searches, natural-language database querying, and voice-recognition systems such as Alexa and Siri. But gathering the annotation data can be time-consuming and difficult for less common languages. A new parser instead takes a “weakly supervised” approach — meaning it requires limited training data — that mimics how children can observe the world around them and learn language, without anyone providing direct context. The new parser is the first to be trained using video, Ross says.
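To make the annotation burden concrete, here is a toy example of the kind of human-annotated parse a fully supervised parser trains on. The sentence, the constituency labels (S, NP, VP), and the tree structure are illustrative conventions, not data from the MIT work:

```python
# A human-annotated constituency parse, written as nested tuples:
# (S (NP the cat) (VP chased (NP the mouse)))
tree = ("S",
        ("NP", "the", "cat"),
        ("VP", "chased", ("NP", "the", "mouse")))

def leaves(node):
    """Recover the raw sentence by walking the tree left to right."""
    if isinstance(node, str):
        return [node]
    label, *children = node
    return [word for child in children for word in leaves(child)]

print(" ".join(leaves(tree)))  # the cat chased the mouse
```

Every training sentence needs a tree like this from a human annotator, which is exactly the cost the weakly supervised approach tries to avoid.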

Java: The Legend Get the free ebook The road from Java's first public alpha of 1.0 to today has been long—and full of technical advances, innovative solutions, and interesting complications. Along the way, Java has flourished and is now one of the world's most important and widely used programming environments. Benjamin Evans, the Java editor for InfoQ and author of Java in a Nutshell, 6th edition, takes us on a journey through time:
- How Java has benefitted from early design decisions, including "Write Once, Run Anywhere" and an insistence on backward compatibility
- The impact of open source
- The enormous success and continued importance of the Java Virtual Machine and platform
- The rise of Enterprise Java
- The evolution of the Java developer community and ecosystem
- Java's continuing influence on new programming languages
- Java's greatest triumphs and most heroic failures
- The future of Java, including Java 9, Project Panama, Project Valhalla, and the Internet of Things
Ben Evans

Machine Learning Explained with Chocolate – N26 Magazine - French Edition A short intro: machine learning is a discipline in its own right. Using algorithms, computers can analyze very large quantities of data and learn to predict behaviors, outcomes, or trends, all in order to support decision-making. The problem to solve: in the N26 offices there are not just baskets of fresh fruit. Employees can also treat themselves to sweets, stocked in the kitchens on every floor of our Berlin offices. The choice is wide: chocolate bars with caramel, peanuts, cookie flavor, or coconut. The difficulty is predicting how many packs to order each week, and how to split them between the floors, so that the cupboards are neither left empty nor overflowing. The method: what will they do? The result: suppose that on Thursday, June 15, 2017, it was 20 degrees.

Too Poor To Succeed? WhatsApp Story - Infographic It was 1992 in Ukraine, the worst of times for the economy. Jan Koum was 16. By 18, Jan knew he wanted to learn to program. Koum's mother died of cancer in 2000. In 2007, Koum and Acton left Yahoo. Over tea in his Russian friend's kitchen, Koum explained his idea: show status updates next to people's phone numbers in the address book. A month later, demoing the app to his friends, Koum was taking notes on the bugs and fixes it needed. In hindsight, founding stories like this can seem almost inevitable.

Best of arXiv.org for AI, Machine Learning, and Deep Learning – July 2018 In this recurring monthly feature, we filter recent research papers appearing on the arXiv.org preprint server for compelling subjects relating to AI, machine learning and deep learning – from disciplines including statistics, mathematics and computer science – and provide you with a useful “best of” list for the past month. Researchers from all over the world contribute to this repository as a prelude to the peer review process for publication in traditional journals. arXiv contains a veritable treasure trove of learning methods you may use one day in the solution of data science problems. We hope to save you some time by picking out articles that represent the most promise for the typical data scientist. The articles listed below represent a fraction of all articles appearing on the preprint server. They are listed in no particular order with a link to each paper along with a brief overview. Especially relevant articles are marked with a “thumbs up” icon.

Paul Ford: What Is Code? | Bloomberg A computer is a clock with benefits. They all work the same, doing second-grade math, one step at a time: Tick, take a number and put it in box one. Tick, take another number, put it in box two. Tick, operate (an operation might be addition or subtraction) on those two numbers and put the resulting number in box one. You, using a pen and paper, can do anything a computer can; you just can’t do those things billions of times per second. Apple has always made computers; Microsoft used to make only software (and occasional accessory hardware, such as mice and keyboards), but now it’s in the hardware business, with Xbox game consoles, Surface tablets, and Lumia phones. So many things are computers, or will be. When you “batch” process a thousand images in Photoshop or sum numbers in Excel, you’re programming, at least a little. You can make computers do wonderful things, but you need to understand their limits.
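Ford's tick-by-tick description can be made literal in a few lines. The box names and the particular numbers are our illustration, not his:

```python
# The "clock with benefits": two boxes (registers), and one tick per step.
boxes = {"one": 0, "two": 0}

boxes["one"] = 3                              # Tick: a number into box one.
boxes["two"] = 4                              # Tick: another into box two.
boxes["one"] = boxes["one"] + boxes["two"]    # Tick: operate (here, addition)
                                              # and put the result in box one.

print(boxes["one"])  # 7
```

A real CPU does exactly this kind of move-and-operate step, only billions of times per second, which is the whole difference between it and you with a pen and paper.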

Machine learning will change jobs—impact on economy could surpass that of previous AI applications Machine learning computer systems, which get better with experience, are poised to transform the economy much as steam engines and electricity have in the past. They can outperform people in a number of tasks, though they are unlikely to replace people in all jobs. So say Carnegie Mellon University's Tom Mitchell and MIT's Erik Brynjolfsson in a Policy Forum commentary to be published in the Dec. 22 edition of the journal Science. Mitchell, who founded the world's first Machine Learning Department at CMU, and Brynjolfsson, director of the MIT Initiative on the Digital Economy in the Sloan School of Management, describe 21 criteria to evaluate whether a task or a job is amenable to machine learning (ML). "Although the economic effects of ML are relatively limited today, and we are not facing the imminent 'end of work' as is sometimes proclaimed, the implications for the economy and the workforce going forward are profound," they write.
