
Bookmarks


Implementing a CNN for Text Classification in TensorFlow – WildML

In this post we will implement a model similar to Kim Yoon’s Convolutional Neural Networks for Sentence Classification. The full code is available on Github. The model presented in the paper achieves good classification performance across a range of text classification tasks (like Sentiment Analysis) and has since become a standard baseline for new text classification architectures. I’m assuming that you are already familiar with the basics of Convolutional Neural Networks applied to NLP. If not, I recommend first reading Understanding Convolutional Neural Networks for NLP to get the necessary background.

Data and Preprocessing: The dataset we’ll use in this post is the Movie Review data from Rotten Tomatoes – one of the datasets also used in the original paper.
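As a rough orientation, the architecture can be sketched in a few lines of tf.keras. This is a minimal sketch, not the post’s actual code (which builds the graph in lower-level TensorFlow), and all sizes below are illustrative placeholders:

```python
# Minimal sketch of a Kim-style CNN for sentence classification (NOT the
# post's actual code): parallel convolutions with several filter sizes over
# word embeddings, max-pooled, concatenated, then a softmax classifier.
import tensorflow as tf

num_words, embed_dim, seq_len, num_classes = 20000, 128, 56, 2  # illustrative

inputs = tf.keras.Input(shape=(seq_len,), dtype="int32")
x = tf.keras.layers.Embedding(num_words, embed_dim)(inputs)
pooled = []
for size in (3, 4, 5):  # one branch per filter size
    c = tf.keras.layers.Conv1D(100, size, activation="relu")(x)
    pooled.append(tf.keras.layers.GlobalMaxPooling1D()(c))
h = tf.keras.layers.Concatenate()(pooled)
h = tf.keras.layers.Dropout(0.5)(h)
outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(h)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```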

Understanding Convolutional Neural Networks for NLP – WildML

When we hear about Convolutional Neural Networks (CNNs), we typically think of Computer Vision. CNNs were responsible for major breakthroughs in Image Classification and are the core of most Computer Vision systems today, from Facebook’s automated photo tagging to self-driving cars. More recently we’ve also started to apply CNNs to problems in Natural Language Processing and gotten some interesting results. In this post I’ll try to summarize what CNNs are, and how they’re used in NLP. The intuitions behind CNNs are somewhat easier to understand for the Computer Vision use case, so I’ll start there, and then slowly move towards NLP.

Recurrent Neural Networks Tutorial, Part 1 – Introduction to RNNs – WildML

Recurrent Neural Networks (RNNs) are popular models that have shown great promise in many NLP tasks. But despite their recent popularity I’ve only found a limited number of resources that thoroughly explain how RNNs work and how to implement them.
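To make “how RNNs work” concrete, here is a minimal numpy sketch of a vanilla RNN forward pass, using U, W, V for the input, recurrent, and output weight matrices. The sizes are illustrative and this is not the tutorial’s own code:

```python
# Minimal sketch of a vanilla RNN forward pass (illustrative, not the
# tutorial's code):  h_t = tanh(U x_t + W h_{t-1});  y_t = softmax(V h_t)
import numpy as np

vocab_size, hidden_size = 8000, 100  # illustrative sizes
rng = np.random.default_rng(0)
U = rng.normal(0, 0.01, (hidden_size, vocab_size))   # input weights
W = rng.normal(0, 0.01, (hidden_size, hidden_size))  # recurrent weights
V = rng.normal(0, 0.01, (vocab_size, hidden_size))   # output weights

def forward(x_indices):
    """x_indices: list of word indices; returns per-step output distributions."""
    h = np.zeros(hidden_size)
    outputs = []
    for idx in x_indices:
        x = np.zeros(vocab_size)
        x[idx] = 1.0                     # one-hot encoding of the current word
        h = np.tanh(U @ x + W @ h)       # hidden state carries the history
        z = V @ h
        outputs.append(np.exp(z - z.max()) / np.exp(z - z.max()).sum())  # softmax
    return outputs
```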

Hacker's guide to Neural Networks

Hi there, I’m a CS PhD student at Stanford. I’ve worked on Deep Learning for a few years as part of my research, and among several of my related pet projects is ConvNetJS – a Javascript library for training Neural Networks. Javascript allows one to nicely visualize what’s going on and to play around with the various hyperparameter settings, but I still regularly hear from people who ask for a more thorough treatment of the topic.

This article (which I plan to slowly expand out to lengths of a few book chapters) is my humble attempt. It’s on web instead of PDF because all books should be, and eventually it will hopefully include animations/demos etc. My personal experience with Neural Networks is that everything became much clearer when I started ignoring full-page, dense derivations of backpropagation equations and just started writing code.

“…everything became much clearer when I started writing code.”

Chapter 1: Real-valued Circuits. Base Case: Single Gate in the Circuit, f(x,y) = xy.
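The base case is small enough to show in full. Here is a sketch in Python rather than the article’s Javascript, with example inputs x = -2, y = 3 and a numerical check of the analytic gradient (df/dx = y, df/dy = x):

```python
# The guide's base case, f(x, y) = x * y, in Python rather than the
# article's Javascript. A centered finite difference checks the analytic
# gradient df/dx = y, df/dy = x.
def forward_multiply_gate(x, y):
    return x * y

x, y = -2.0, 3.0
out = forward_multiply_gate(x, y)  # -6.0

h = 1e-4
dx_num = (forward_multiply_gate(x + h, y) - forward_multiply_gate(x - h, y)) / (2 * h)
dy_num = (forward_multiply_gate(x, y + h) - forward_multiply_gate(x, y - h)) / (2 * h)
print(out, dx_num, dy_num)  # -6.0, ~3.0 (= y), ~-2.0 (= x)
```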

dnngraph by ajtulloch

It consists of several parts:

- A DSL for specifying the model. This uses the lens library for elegant, composable constructions, and the fgl graph library for specifying the network layout.
- A set of optimization passes that run over the graph representation to improve the performance of the model. For example, we can take advantage of the fact that several layer types (ReLU, Dropout) can operate in-place.
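dnngraph itself is Haskell, so the following is only a hypothetical Python sketch of what such an in-place pass does (none of these names come from dnngraph): walk an ordered layer list and let element-wise layers reuse their producer’s buffer instead of allocating a new one.

```python
# Hypothetical sketch of an in-place optimization pass (not dnngraph's API):
# element-wise layers such as ReLU and Dropout may overwrite their input
# buffer, so they can share the buffer of the layer that feeds them.
IN_PLACE_OK = {"ReLU", "Dropout"}

def assign_buffers(layers):
    """layers: ordered list of layer-type names; returns a buffer id per layer."""
    buffers, next_id = [], 0
    for kind in layers:
        if kind in IN_PLACE_OK and buffers:
            buffers.append(buffers[-1])  # reuse the previous layer's buffer
        else:
            buffers.append(next_id)      # allocate a fresh buffer
            next_id += 1
    return buffers

print(assign_buffers(["Conv", "ReLU", "Dropout", "Conv", "ReLU"]))
# [0, 0, 0, 1, 1] – ReLU/Dropout run in place on their producer's buffer
```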

Home - colah's blog

A path through a NOSQL Summer Reading

Michael Nielsen

In September of 2012, a team of scientists released a photograph showing the most distant parts of the Universe ever seen by any human being. They obtained the photograph by pointing the Hubble Space Telescope at a single tiny patch of sky, gradually building up an image over a total of 23 days of observation. It’s an otherwise undistinguished patch of sky, within the little-known constellation Fornax. It’s less than one hundredth the size of the full moon, and appears totally empty to the naked eye. Here’s what’s seen with the Hubble Telescope: an image known as the Hubble Extreme Deep Field. One of the many striking things about the Hubble Extreme Deep Field is that it’s beautiful. It’s not a typical action-packed online video. Water in Suspense reveals a hidden world. Although I’m not an artist or an art critic, I find Super-realist art fascinating.

DDI

On my first day of physics graduate school, the professor in my class on electromagnetism began by stepping to the board, and wordlessly writing four equations. He stepped back, turned around, and said something like [1]: “These are Maxwell’s equations. Just four compact equations. With a little work it’s easy to understand the basic elements of the equations – what all the symbols mean, how we can compute all the relevant quantities, and so on.”
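The four equations appeared as an image in the original post and did not survive extraction. In standard differential (SI) form, they read:

```latex
% Maxwell's equations in differential SI form (a standard rendering;
% not necessarily the exact form the professor wrote on the board).
\begin{align}
  \nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} \\
  \nabla \cdot \mathbf{B} &= 0 \\
  \nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} \\
  \nabla \times \mathbf{B} &= \mu_0 \mathbf{J}
    + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\end{align}
```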

Bayesian Methods for Hackers

An intro to Bayesian methods and probabilistic programming from a computation/understanding-first, mathematics-second point of view.

Prologue: The Bayesian method is the natural approach to inference, yet it is hidden from readers behind chapters of slow, mathematical analysis. The typical text on Bayesian inference involves two to three chapters on probability theory, and only then turns to what Bayesian inference is. Unfortunately, due to the mathematical intractability of most Bayesian models, the reader is only shown simple, artificial examples. This can leave the user with a “so what?” feeling about Bayesian inference. After some recent success of Bayesian methods in machine-learning competitions, I decided to investigate the subject again. If Bayesian inference is the destination, then mathematical analysis is a particular path towards it.
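As a taste of the computation-first approach, here is a minimal sketch of an exact Bayesian update: a Beta prior on a coin’s bias, which is conjugate to the Bernoulli likelihood. The example is illustrative only and is not taken from the book, which works in PyMC:

```python
# Illustrative Beta-Bernoulli update (not from the book): a Beta prior on a
# coin's bias p, updated exactly after observing heads/tails, since Beta is
# conjugate to the Bernoulli likelihood.
from scipy import stats

a, b = 1.0, 1.0                            # Beta(1, 1) = uniform prior over p
observations = [1, 0, 1, 1, 0, 1, 1, 1]    # 1 = heads, 0 = tails

heads = sum(observations)
tails = len(observations) - heads
posterior = stats.beta(a + heads, b + tails)  # exact posterior: Beta(7, 3)

print(posterior.mean())           # ~0.70, posterior mean of the bias p
print(posterior.interval(0.95))   # 95% credible interval for p
```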

Neural networks and deep learning

Cam Davidson Pilon