Neural Networks

GitHub - aymericdamien/TensorFlow-Examples: TensorFlow Tutorial and Examples for beginners.

Gradient Descent Derivation · Chris McCormick

Andrew Ng’s course on Machine Learning at Coursera provides an excellent explanation of gradient descent for linear regression. To really get a strong grasp on it, I decided to work through some of the derivations and some simple examples here. This material assumes some familiarity with linear regression and is primarily intended to provide additional insight into the gradient descent technique, not linear regression in general. I am making use of the same notation as the Coursera course, so it will be most helpful for students of that course.
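
As a hedged companion to that derivation (this is not code from the post), here is minimal batch gradient descent for one-variable linear regression in numpy, written in the course's h(x) = theta0 + theta1 * x notation; the data and learning rate are invented for illustration:

```python
import numpy as np

# Toy data (made up for illustration): y is roughly 2x + 1 plus noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

theta0, theta1 = 0.0, 0.0   # parameters of h(x) = theta0 + theta1 * x
alpha = 0.05                # learning rate
m = len(x)                  # number of training examples

for _ in range(1000):
    h = theta0 + theta1 * x              # predictions for all examples
    # Partial derivatives of the cost J = (1/2m) * sum((h - y)^2)
    grad0 = (1.0 / m) * np.sum(h - y)
    grad1 = (1.0 / m) * np.sum((h - y) * x)
    # Simultaneous update, as in the Coursera course
    theta0 -= alpha * grad0
    theta1 -= alpha * grad1

print(theta0, theta1)  # should approach roughly 1 and 2
```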

An overview of gradient descent optimization algorithms

Note: If you are looking for a review paper, this blog post is also available as an article on arXiv. Gradient descent is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks. At the same time, every state-of-the-art Deep Learning library contains implementations of various algorithms to optimize gradient descent (e.g. lasagne's, caffe's, and keras' documentation).
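
The article walks through each of these optimizers in turn. Purely as a hedged illustration (none of this is code from the post), here is how three of the update rules it covers look in numpy: vanilla SGD, momentum, and Adam, with the conventionally cited default hyperparameters:

```python
import numpy as np

def sgd(theta, grad, lr=0.01):
    # Vanilla gradient descent step.
    return theta - lr * grad

def momentum(theta, grad, v, lr=0.01, gamma=0.9):
    # Momentum accumulates a velocity vector across steps.
    v = gamma * v + lr * grad
    return theta - v, v

def adam(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Adam keeps running averages of the gradient and its square,
    # with bias correction for the early steps (t counts from 1).
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

Momentum and Adam carry per-parameter state across steps, which is why that state is returned alongside the updated parameters.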

Yes you should understand backprop – Andrej Karpathy – Medium

When we offered CS231n (Deep Learning class) at Stanford, we intentionally designed the programming assignments to include explicit calculations involved in backpropagation on the lowest level. The students had to implement the forward and the backward pass of each layer in raw numpy. Inevitably, some students complained on the class message boards.

CS231n Convolutional Neural Networks for Visual Recognition.
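
To give a flavor of the CS231n-style exercise Karpathy describes, here is a minimal, hedged sketch (not the course's actual starter code) of the forward and backward pass of a fully-connected sigmoid layer in raw numpy; all shapes and data are made up:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W, b):
    # Affine transform followed by a sigmoid nonlinearity.
    z = x @ W + b
    a = sigmoid(z)
    cache = (x, W, a)
    return a, cache

def backward(da, cache):
    # da is the gradient of the loss w.r.t. the layer's output.
    x, W, a = cache
    dz = da * a * (1 - a)     # sigmoid'(z) = a * (1 - a)
    dW = x.T @ dz             # gradient w.r.t. the weights
    db = dz.sum(axis=0)       # gradient w.r.t. the bias
    dx = dz @ W.T             # gradient passed to the previous layer
    return dx, dW, db

# Smoke test on random data (assumed shapes, for illustration):
x = np.random.randn(4, 3); W = np.random.randn(3, 2); b = np.zeros(2)
a, cache = forward(x, W, b)
dx, dW, db = backward(np.ones_like(a), cache)
```

Writing the `a * (1 - a)` line yourself is how you notice the kind of issue the post warns about: for saturated sigmoids (a near 0 or 1) this factor vanishes and the gradient dies.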

Open Learning

As a person who does a lot of autonomous learning, I find that the Internet these days offers a huge number of possibilities to read and learn about any topic you might think of. The bigger problem may be filtering the useful, good content out of a nearly infinite number of sources. Inspired by a colleague, I will try to keep a record of whatever I read or watched and can recommend on specific topics. I will also try to add links that I have already studied in the past but that may help any interested reader (or serve as a lookup for myself). Most of it will be about machine learning in general, and more specifically about computer vision and image classification, as my master's thesis is related to these topics. But from time to time I might also add some more fun topics.

TensorFlow Deep Learning Library.

GitHub - TensorBox/TensorBox: Object detection in TensorFlow.

Neural networks and deep learning.

GitHub - tflearn/tflearn: Deep learning library featuring a higher-level API for TensorFlow.

Tensorflow-tutorial/README.md at master · alrojo/tensorflow-tutorial.

The Neural Network Zoo - The Asimov Institute

With new neural network architectures popping up every now and then, it’s hard to keep track of them all. Knowing all the abbreviations being thrown around (DCIGN, BiLSTM, DCGAN, anyone?) can be a bit overwhelming at first. So I decided to compose a cheat sheet containing many of those architectures. Most of these are neural networks; some are completely different beasts.

Understanding LSTM Networks

Posted on August 27, 2015. Recurrent Neural Networks. Humans don’t start their thinking from scratch every second. As you read this essay, you understand each word based on your understanding of previous words.

A Neural Network Playground.
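
The LSTM post walks through the cell gate by gate. As a hedged companion sketch using the standard LSTM gate equations (not code from the post, and with invented shapes and weight names), one time step in numpy looks like:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    # One LSTM time step over the standard gate equations.
    # W maps the concatenated [h_prev, x] to the four gate pre-activations.
    z = np.concatenate([h_prev, x]) @ W + b
    H = h_prev.shape[0]
    f = sigmoid(z[0:H])           # forget gate: what to drop from the cell
    i = sigmoid(z[H:2*H])         # input gate: what to write to the cell
    g = np.tanh(z[2*H:3*H])       # candidate values to write
    o = sigmoid(z[3*H:4*H])       # output gate: what to expose as h
    c = f * c_prev + i * g        # new cell state
    h = o * np.tanh(c)            # new hidden state
    return h, c

# Usage with assumed sizes: hidden size 4, input size 3.
H, D = 4, 3
W = np.random.randn(H + D, 4 * H) * 0.1
b = np.zeros(4 * H)
h, c = lstm_step(np.random.randn(D), np.zeros(H), np.zeros(H), W, b)
```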

Understanding Convolutional Neural Networks for NLP – WildML

When we hear about Convolutional Neural Networks (CNNs), we typically think of Computer Vision. CNNs were responsible for major breakthroughs in Image Classification and are the core of most Computer Vision systems today, from Facebook’s automated photo tagging to self-driving cars. More recently we’ve also started to apply CNNs to problems in Natural Language Processing and gotten some interesting results. In this post I’ll try to summarize what CNNs are, and how they’re used in NLP. The intuitions behind CNNs are somewhat easier to understand for the Computer Vision use case, so I’ll start there, and then slowly move towards NLP.
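
As a hedged sketch of the core idea the post builds on (not the author's code): for NLP, a filter typically spans the full width of the sentence's embedding matrix and slides over a few words at a time, and the resulting feature map is max-pooled. All sizes below are invented:

```python
import numpy as np

# Assumed toy sizes: a 7-word sentence, 5-dimensional embeddings.
sentence = np.random.randn(7, 5)   # one row per word
filt = np.random.randn(3, 5)       # filter spanning 3 words x full embedding

# Slide the filter over the sentence, one word at a time.
# Each position yields one number; together they form a feature map.
feature_map = np.array([
    np.sum(sentence[i:i + 3] * filt)
    for i in range(sentence.shape[0] - 3 + 1)
])

# 1-max pooling, as commonly used in CNNs for text classification:
feature = feature_map.max()
```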

Image Kernels explained visually

By Victor Powell. An image kernel is a small matrix used to apply effects like the ones you might find in Photoshop or GIMP, such as blurring, sharpening, outlining or embossing. They're also used in machine learning for 'feature extraction', a technique for determining the most important portions of an image.
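
To make that concrete, here is a hedged numpy sketch of the operation the page demonstrates interactively, applying a classic 3x3 sharpen kernel to a grayscale image; the image here is random stand-in data:

```python
import numpy as np

# A classic 3x3 sharpen kernel, like the ones on the page.
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]])

def convolve(image, kernel):
    # For each pixel, take the weighted sum of its 3x3 neighborhood.
    # Edges are skipped for simplicity, so the output is slightly smaller.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)   # stand-in for a grayscale image
sharpened = convolve(image, kernel)
```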