
Machine Learning


How to get into the top 15 of a Kaggle competition using Python

Kaggle competitions are a fantastic way to learn data science and build your portfolio.

I personally used Kaggle to learn many data science concepts. I started out with Kaggle a few months after learning programming, and later won several competitions. Doing well in a Kaggle competition requires more than just knowing machine learning algorithms. It requires the right mindset, the willingness to learn, and a lot of data exploration. Many of these aspects aren't typically emphasized in tutorials on getting started with Kaggle, though. At the end, we'll generate a submission file using the techniques in this post.
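As a rough idea of what generating a submission looks like, here is a minimal sketch (not the post's actual code) of writing a Kaggle submission file with pandas; the file names and the "id"/"target" column names are placeholders, since every competition specifies its own submission format.

```python
# Minimal sketch of producing a Kaggle submission file with pandas.
# File paths and column names ("id", "target") are placeholders; check
# the competition's data page for the real submission format.
import pandas as pd

test = pd.read_csv("test.csv")            # hypothetical test-set path
predictions = [0] * len(test)             # stand-in for real model output

submission = pd.DataFrame({
    "id": test["id"],                     # placeholder id column name
    "target": predictions,                # placeholder target column name
})
submission.to_csv("submission.csv", index=False)
```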

Where this submission would rank as of this writing.

The Expedia Kaggle competition

The Expedia competition challenges you to predict which hotel a user will book, based on attributes of the search the user is conducting on Expedia.
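Since doing well starts with data exploration, a first look at the Expedia data might resemble the sketch below; the file path is a placeholder and the "hotel_cluster" target column is an assumption taken from the competition's data description.

```python
# A quick, hypothetical exploration of the Expedia training data: how are
# bookings spread across hotel clusters? The "hotel_cluster" column name is
# assumed from the competition's data description; the path is a placeholder.
import pandas as pd

train = pd.read_csv("train.csv")

# Share of bookings falling into each of the most common hotel clusters.
print(train["hotel_cluster"].value_counts(normalize=True).head(10))

# How many distinct clusters are there to predict?
print(train["hotel_cluster"].nunique())
```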

Titanic: Machine Learning from Disaster

If you're new to data science and machine learning, or looking for a simple intro to the Kaggle competitions platform, this is the best place to start. Continue reading below the competition description to discover a number of tutorials, benchmark models, and more.

Competition Description

The sinking of the RMS Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1,502 out of 2,224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships. One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. In this challenge, we ask you to complete the analysis of what sorts of people were likely to survive.
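A first pass at that analysis could be as simple as the sketch below: a logistic regression on a few columns of the public Titanic train.csv (Survived, Pclass, Sex, Age, Fare). This is an illustrative baseline under those assumptions, not the competition's official starter code.

```python
# Minimal sketch of a first Titanic model: logistic regression on a few
# columns of the public train.csv. Illustrative only, not a starter kernel.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

train = pd.read_csv("train.csv")

features = train[["Pclass", "Sex", "Age", "Fare"]].copy()
features["Sex"] = (features["Sex"] == "female").astype(int)  # encode sex as 0/1
features["Age"] = features["Age"].fillna(features["Age"].median())
features["Fare"] = features["Fare"].fillna(features["Fare"].median())

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, features, train["Survived"], cv=5)
print("Cross-validated accuracy:", scores.mean())
```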

Code Sharing With Kaggle Kernels

You can write, run, and view best-practice code and visualizations of the Titanic dataset on Kaggle Kernels.
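The kind of small visualization a Kernel might contain is sketched below: survival rate broken down by passenger class and sex, assuming the standard Titanic train.csv columns (Survived, Pclass, Sex).

```python
# Example of a small Kernel-style visualization: Titanic survival rate by
# passenger class and sex. Assumes the standard train.csv column names.
import pandas as pd
import matplotlib.pyplot as plt

train = pd.read_csv("train.csv")

survival = train.groupby(["Pclass", "Sex"])["Survived"].mean().unstack()
survival.plot(kind="bar")
plt.ylabel("Survival rate")
plt.title("Titanic survival rate by class and sex")
plt.tight_layout()
plt.savefig("survival_by_class_and_sex.png")
```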

MonkeyLearn integration with Scrapinghub!

Crawling the web for huge amounts of data is a hard task. You have to deal with a wide range of problems such as extracting specific content from the sites you're crawling, retrieving new links to follow, storing the data, avoiding getting blocked, and more. Making sense of all the retrieved data is also hard. Let's say you are scraping product reviews about the Samsung Galaxy S7 and iPhone 6s from different retailers. Are these reviews positive or negative?

What do they praise about these smartphones? For the first task, there are great tools like Scrapy, the open source framework for web scraping and crawling. For the second task, there are tools like MonkeyLearn, a platform that helps you easily perform text analysis using machine learning.

Introducing MonkeyLearn Addon for Scrapinghub

We are very excited to announce MonkeyLearn integration for Scrapy Cloud.

Addon Walkthrough

You can access the MonkeyLearn addon through your dashboard within Scrapinghub. And you're all done!
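MonkeyLearn exposes this kind of text classification as a hosted API; as a rough stand-in for the idea, the sketch below trains a tiny bag-of-words sentiment classifier with scikit-learn. The reviews and labels are made up for illustration, and this is a generic substitute, not MonkeyLearn's own API.

```python
# Rough stand-in for the kind of sentiment classification MonkeyLearn
# offers as a service: a tiny bag-of-words classifier in scikit-learn.
# The reviews and labels below are made-up illustrations.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = [
    "The battery life on this phone is fantastic",
    "Camera quality is amazing, best phone I have owned",
    "The screen cracked after a week, very disappointed",
    "Terrible battery, dies before lunch",
]
labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(reviews, labels)

# Classify a newly scraped review.
print(model.predict(["The screen is gorgeous but the battery is awful"]))
```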

harthur/brain

HTML5, NodeJS and Neural Networks: The tech behind MySam, an open source Siri

Recently I published the very first version of MySam, an open "intelligent" assistant for the web similar to Siri or Google Now.

Unlike those, however, you can teach Sam yourself; it works in many modern browsers and is extensible with plugins written in HTML and JavaScript. Here is a video that shows what Sam can do. It is a fun project that combines many of the open source projects I've recently been working on or interested in. In this short post I'd like to show how they all came together.

The Brain

The natural language understanding and learning process is probably the most interesting part. The NodeJS server runs natural-brain, which combines node-natural, a natural language library, with BrainJS, a neural network library for JavaScript. Compared to the language classification, the tagging mechanism that extracts parts of a sentence is currently pretty primitive.

The API can also be used in a cURL request.

GitHub - erelsgl/limdu: Machine-learning for Node.js

Machine Learning 10-701/15-781: Lectures