
Home - colah's blog

http://colah.github.io/

Related: Neural Networks, Machine Learning, Data science blogs, Data Science, Computer Science

Visualising Activation Functions in Neural Networks - dashee87.github.io In neural networks, activation functions determine the output of a node from a given set of inputs, where non-linear activation functions allow the network to replicate complex non-linear behaviours. As most neural networks are optimised using some form of gradient descent, activation functions need to be differentiable (or at least almost everywhere differentiable; see ReLU). Furthermore, poorly chosen activation functions can produce issues around vanishing and exploding gradients.
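As a concrete illustration of that point (a minimal NumPy sketch, not code from the linked post), here are three common activation functions together with the derivatives that gradient descent relies on:

import numpy as np

def sigmoid(x):
    # Smooth and differentiable everywhere, but saturates for large |x| (vanishing gradients).
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def tanh_grad(x):
    return 1.0 - np.tanh(x) ** 2

def relu(x):
    # Not differentiable at 0, but a subgradient (0 or 1) works fine in practice.
    return np.maximum(0.0, x)

def relu_grad(x):
    return (x > 0).astype(x.dtype)

x = np.linspace(-5, 5, 11)
print(sigmoid(x), np.tanh(x), relu(x))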

Recurrent Neural Networks Tutorial, Part 1 – Introduction to RNNs Recurrent Neural Networks (RNNs) are popular models that have shown great promise in many NLP tasks. But despite their recent popularity I’ve only found a limited number of resources that thoroughly explain how RNNs work, and how to implement them. That’s what this tutorial is about.
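For orientation before reading the tutorial (a minimal NumPy sketch with invented dimensions, not the tutorial's own code), one step of a vanilla RNN simply mixes the previous hidden state with the current input and applies a non-linearity:

import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 8, 16
W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden -> hidden (the recurrence)
b_h = np.zeros(hidden_dim)

def rnn_step(x_t, h_prev):
    # h_t = tanh(W_xh x_t + W_hh h_{t-1} + b)
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(hidden_dim)
for x_t in rng.normal(size=(5, input_dim)):  # a toy sequence of 5 inputs
    h = rnn_step(x_t, h)
print(h.shape)  # (16,)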

Intro to pandas data structures A while back I claimed I was going to write a couple of posts on translating pandas to SQL. I never followed up. However, the other week a couple of coworkers expressed their interest in learning a bit more about it - this seemed like a good reason to revisit the topic. What follows is a fairly thorough introduction to the library. I chose to break it into three parts as I felt it was too long and daunting as one. Part 1: Intro to pandas data structures, covers the basics of the library's two main data structures - Series and DataFrames. Part 2: Working with DataFrames, dives a bit deeper into the functionality of DataFrames.
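For a quick taste of those two structures before the full post (a small sketch; the data here is made up), a Series is a one-dimensional labelled array and a DataFrame is a table of Series sharing an index:

import pandas as pd

# A Series: values plus an index of labels.
s = pd.Series([7.0, 3.5, 1.2], index=["mon", "tue", "wed"], name="rainfall_mm")

# A DataFrame: columns behave like dict keys, rows share one index.
df = pd.DataFrame({
    "city": ["Oslo", "Lima", "Pune"],
    "population_m": [0.7, 10.7, 3.1],
})

print(s["tue"])                 # label-based access: 3.5
print(df[df.population_m > 1])  # boolean filtering, SQL WHERE-style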

Petri Nets World: Online Services for the International Petri Nets Community Recent news in the Petri Nets World: January 30, 2018: Model checking contest 2018 - news; January 29, 2018: Implementing causal analysis capability (job advert); January 24, 2018: ACSD 2018 - DEADLINE EXTENDED ONCE MORE - February 15, 2018; January 19, 2018: ACSD 2018 - EXTENDED DEADLINE - February 1, 2018.

Shriram Krishnamurthi Though my head is often in security, networking, formal methods, and HCI, my heart is in programming languages. Over the years I have contributed to several innovative and useful software systems: JavaScript and Web tools, Flowlog and related tools, Racket (formerly DrScheme), WeScheme, Margrave, Flapjax, FrTime, Continue, FASTLINK, (Per)Mission, and more. For some of what I've been doing lately, please see my research group's blog. Recently, I have decided to devote a substantial portion of my time and energy to the hardest problem I've worked on: computer science education. It's the hardest because it requires substantial work on both technical and human-factors fronts; the audience is often unsophisticated and vulnerable; and if you screw up, you can do real damage not only to individuals but also to the field and society. I recently wrote up a manifesto for my new direction [the same text is on both Facebook and Google+].

Implementing a Neural Network from Scratch in Python – An Introduction Get the code: To follow along, all the code is also available as an iPython notebook on Github. In this post we will implement a simple 3-layer neural network from scratch. We won’t derive all the math that’s required, but I will try to give an intuitive explanation of what we are doing. I will also point to resources for you to read up on the details.

A Beginner's Guide to Recurrent Networks and LSTMs The purpose of this post is to give students of neural networks an intuition about the functioning of recurrent networks and the purpose and structure of a prominent variation, LSTMs. Recurrent nets are a type of artificial neural network designed to recognize patterns in sequences of data, such as text, genomes, handwriting, the spoken word, or numerical time series data emanating from sensors, stock markets and government agencies.
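As a rough sketch of what the 3-layer network in the "Implementing a Neural Network from Scratch" excerpt involves (NumPy, illustrative layer sizes, not the post's actual code), the forward pass is two affine transforms with a non-linearity in between and a softmax output:

import numpy as np

rng = np.random.default_rng(1)
n_input, n_hidden, n_output = 2, 3, 2   # toy sizes for a 2-class problem

W1 = rng.normal(scale=0.1, size=(n_input, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_output))
b2 = np.zeros(n_output)

def forward(X):
    # Hidden layer: affine transform followed by tanh.
    a1 = np.tanh(X @ W1 + b1)
    # Output layer: affine transform followed by softmax to get class probabilities.
    z2 = a1 @ W2 + b2
    exp = np.exp(z2 - z2.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

X = rng.normal(size=(4, n_input))   # 4 example points
print(forward(X))                   # each row sums to 1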

DatasFrame This is part one in a multi-part series on writing idiomatic pandas code. This post is available as a Jupyter notebook. There are many great resources for learning pandas. For beginners, I typically recommend Greg Reda's 3-part introduction, especially if you're familiar with SQL.

Data Mining - Entropy (Information Gain) [Gerardnico] Entropy is an "information" measure of an ensemble of events: H(E) = -(p1 log2 p1 + p2 log2 p2 + ...), where p1 and p2 are the probabilities of events 1 and 2, H stands for entropy, and E for ensemble; see also Mathematics - Logarithm Function (log).
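To make the entropy formula concrete (a small sketch; the coin-flip probabilities are invented), entropy in bits can be computed directly from a list of event probabilities:

import math

def entropy(probs):
    # H(E) = -sum_i p_i * log2(p_i); terms with p_i == 0 contribute nothing.
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit: a fair coin is maximally uncertain
print(entropy([0.9, 0.1]))   # ~0.469 bits: a biased coin carries less information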

AI memories — expert systems (December 3, 2015) This is part of a four-post series spanning two blogs. As I mentioned in my quick AI history overview, I was pretty involved with AI vendors in the 1980s.
