Webscraping with Python


Mechanize – Writing Bots in Python Made Simple by Guy Rutenberg. I've been using Python to write various bots and crawlers for a long time.

mechanize – Writing Bots in Python Made Simple by Guy Rutenberg

A few days ago I needed to write a simple bot to remove some 400+ spam pages in Sikumuna, so I took an old script of mine (from 2006) and modified it. The script used ClientForm, a Python module that lets you easily parse and fill HTML forms from Python. I quickly found that ClientForm is now deprecated in favor of mechanize. At first I was somewhat put back by the change, as ClientForm was pretty easy to use and mechanize's documentation could use some improvement.

However, I quickly changed my mind about mechanize. For my own future reference, and as another code example for mechanize's sparse documentation, I'm giving below the gist of the simple bot I wrote. This isn't a complete code example, as the rest of the code is just mundane, but you can clearly see how simple it is to use mechanize.
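The post's own code isn't included in this excerpt. A minimal sketch of that kind of bot, using real mechanize calls but with a made-up URL, form name, and field names, might look like this:

    import mechanize

    br = mechanize.Browser()
    br.set_handle_robots(False)   # don't let robots.txt block the bot

    # Log in (the URL, form name and field names below are placeholders)
    br.open("http://example.com/login")
    br.select_form(name="login")
    br["username"] = "admin"
    br["password"] = "secret"
    br.submit()

    # Open the page that deletes one spam entry and submit its confirmation form
    br.open("http://example.com/delete?page=spam-page-1")
    br.select_form(nr=0)          # select the first form on the page
    response = br.submit()
    print(response.geturl())

Wrapping the last block in a loop over a list of page names would handle all 400+ pages.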

Mechanize — Documentation. Full API documentation is in the docstrings and the documentation of urllib2.

mechanize — Documentation

The documentation in these web pages is in need of reorganisation at the moment, after the merge of ClientCookie and ClientForm into mechanize. Tests and examples: the front page has some introductory examples, and the examples directory in the source packages contains a couple of silly, but working, scripts that demonstrate basic use of the module. See also the forms examples (these use the forms API independently of mechanize.Browser).

Tests: to run the tests, run python test.py. Some tests try to fetch URLs from the internet; include them with python test.py discover --tag internet. The urllib2 interface: mechanize exports the complete interface of urllib2:

    import mechanize
    response = mechanize.urlopen("http://example.com/")
    response.read()

Compatibility: these notes explain the relationship between mechanize, ClientCookie, ClientForm, cookielib and urllib2, and which to use when, along with the API differences between mechanize and urllib2 (John J. Lee). mechanize. Stateful programmatic web browsing in Python, after Andy Lester's Perl module WWW::Mechanize.
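Because mechanize mirrors the urllib2 interface, the familiar Request pattern carries over as well. A small sketch, with a placeholder URL and User-Agent string:

    import mechanize

    # Build a request the same way you would with urllib2
    request = mechanize.Request("http://example.com/")
    request.add_header("User-Agent", "my-scraper/0.1")   # placeholder value

    response = mechanize.urlopen(request)
    print(response.geturl())
    print(response.read()[:200])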

mechanize

The examples below are written for a website that does not exist (example.com), so cannot be run. There are also some working examples that you can run.

    import re
    import mechanize

    br = mechanize.Browser()
    br.open("http://www.example.com/")
    # follow second link with element text matching regular expression
    response1 = br.follow_link(text_regex=r"cheese\s*shop", nr=1)
    assert br.viewing_html()
    print br.title()
    print response1.geturl()
    print response1.info()  # headers
    print response1.read()  # body

    br.select_form(name="order")
    # Browser passes through unknown attributes (including methods)
    # to the selected HTMLForm.
    br["cheeses"] = ["mozzarella", "caerphilly"]  # (the method here is __setitem__)
    # Submit current form.
    response2 = br.submit()
    # print currently selected form (don't call .submit() on this, use br.submit())
    print br.form

You may control the browser's policy by using the methods of mechanize.Browser's base class, mechanize.UserAgent.
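The Browser object also exposes the current page's links and forms. A short follow-on sketch against the same imaginary site (br is the Browser from the block above; links(), follow_link(), back() and forms() are real mechanize methods):

    # Iterate over links on the current page, follow each, then come back
    for link in br.links(url_regex=r"example\.com"):
        print link.url, link.text
        br.follow_link(link)   # fetches the linked page
        br.back()              # return to the previous page

    # Inspect every form on the current page
    for form in br.forms():
        print form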

Julian_Todd / Python mechanize cheat sheet. Scrape.py. Scrape.py is a Python module for scraping content from webpages.

scrape.py

Using it, you can easily fetch pages, follow links, and submit forms. Cookies, redirections, and SSL are handled automatically. (For SSL, you either need a version of Python with the socket.ssl function, or the curl command-line utility.) scrape.py does not parse the page into a complete parse tree, so it can handle pages with sloppy syntax. You are free to locate content in the page according to nearby text, tags, or even comments. You can download the module or read the documentation page. Here's a quick walkthrough. Fetching a page: to fetch a page, you call the go(url) method on a Session object.

    >>> from scrape import *
    >>> s.go('...')    # s is a Session object; the URL is omitted in this excerpt
    <Region 0:25751>

The result is a Region object spanning the entire retrieved document (all 25751 bytes).

After any successful fetch, the session's doc attribute also contains the document.

    >>> d = s.doc
    >>> print d.content[:70]

Extracting content: a Region object can be sliced, just like a string. Setting up Aptana Studio 3 for Django. I am describing how to set up Aptana Studio 3 with Django.

Setting up Aptana Studio 3 for Django

I had to decide among a multitude of different development environments (IDEs, Integrated Development Environments). My choice fell on Aptana Studio 3, as it integrates quite nicely with Python and Django and offers syntax highlighting for JavaScript, HTML and CSS (all that besides being open source). Please note that I am using Windows 7 and will describe the installation process on this system. Install Python & Django: first of all we need to install Python on the system. Django does not currently work with Python 3.x, so we have to install a 2.x version; anything from 2.4 on is supported. To install Django you need to download the latest version from the Django Project website (at the time of writing the latest official version is 1.3.1). Install Aptana Studio 3: Aptana Studio 3 already includes PyDev for programming in Python. After installing Aptana Studio 3, the Python path needs to be set.
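A quick way to confirm that Python and Django are both installed correctly is to ask Django for its version from the interpreter; assuming the 1.3.1 release mentioned above, it should report something like this:

    # Run inside the Python 2.x interpreter after installing Django
    import django
    print(django.get_version())   # expected output: 1.3.1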

Python recipe: grab page, scrape table, download file . palewire. Here's a change of pace.

python recipe: grab page, scrape table, download file . palewire

Our first few lessons focused on how you can use Python to goof with a bunch of local files. This time we're going to try something different: using Python to go online and screw around with the Web. Whenever I caucus with aspiring NICARians and other data-hungry reporters, it's not long before the topic of web scraping comes up. While automated text processing and database management may sound well and good, there's something sexy about pulling down a fatty government database that catches people's imagination and inspires them to take on the challenge of learning a new programming language. Or at least entertain the idea until they run into a roadblock. A number of fellow travelers do a noble job instructing people on the basics during NICAR's annual seminars.

But before we get going, let me just say that I'm going to assume you read the first couple of recipes and won't be working too hard to explain the stuff covered there; a sketch of the grab-page, scrape-table, download-file pattern the recipe walks through is given below. Python 2.7 Pt 1 [Getting started] Python 2.7 Pt 2 [Tuple/List] Python 2.7 Pt 3 [Dictionary / String Manip] Python 2.7 Pt 4 [Conditional Expressions] Python 2.7 Pt 5 [Looping] BeginnersGuide/NonProgrammers. An open source web scraping framework for Python.
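The recipe's numbered steps and code aren't reproduced in the excerpt above, but the pattern it walks through (grab a page, scrape a table, download a file it links to) can be sketched roughly like this. The URL, table id, and link handling are placeholders, not the recipe's actual sources:

    import csv
    import urllib2
    from bs4 import BeautifulSoup

    # 1. Grab the page (placeholder URL)
    html = urllib2.urlopen("http://example.com/reports.html").read()

    # 2. Scrape the table: collect each row as a list of cell strings
    soup = BeautifulSoup(html, "html.parser")
    table = soup.find("table", {"id": "data"})   # hypothetical table id
    rows = []
    for tr in table.find_all("tr"):
        rows.append([cell.get_text(strip=True) for cell in tr.find_all(["th", "td"])])

    with open("table.csv", "wb") as f:           # Python 2: open csv files in binary mode
        csv.writer(f).writerows(rows)

    # 3. Download a file linked from the table (assumes an absolute href)
    link = table.find("a")
    if link is not None:
        with open("downloaded_file", "wb") as f:
            f.write(urllib2.urlopen(link["href"]).read())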

Tutorial — Scrapy 0.15.1 documentation. In this tutorial, we’ll assume that Scrapy is already installed on your system.

Tutorial — Scrapy 0.15.1 documentation

If that's not the case, see the Installation guide. We are going to use the Open Directory Project (dmoz) as our example domain to scrape. This tutorial will walk you through these tasks: creating a new Scrapy project, defining the Items you will extract, writing a spider to crawl a site and extract Items, and writing an Item Pipeline to store the extracted Items. Scrapy is written in Python. Creating a project: before you start scraping, you will have to set up a new Scrapy project. Run

    scrapy startproject tutorial

This will create a tutorial directory with the following contents:

    tutorial/
        scrapy.cfg
        tutorial/
            __init__.py
            items.py
            pipelines.py
            settings.py
            spiders/
                __init__.py
                ...

Defining our Item: Items are containers that will be loaded with the scraped data; they work like simple Python dicts but provide additional protection against populating undeclared fields, to prevent typos.
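The tutorial's actual item and spider definitions aren't included in this excerpt. In the Scrapy 0.15 era an Item was declared with Field objects and a basic spider subclassed BaseSpider; a rough sketch of both, modelled from memory on the tutorial's dmoz example (so treat the fields, URL and XPath expressions as illustrative), looks like this:

    # items.py -- container for the scraped data
    from scrapy.item import Item, Field

    class DmozItem(Item):
        title = Field()
        link = Field()
        desc = Field()

    # spiders/dmoz_spider.py -- minimal 0.15-era spider
    from scrapy.spider import BaseSpider
    from scrapy.selector import HtmlXPathSelector

    class DmozSpider(BaseSpider):
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = ["http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"]

        def parse(self, response):
            hxs = HtmlXPathSelector(response)
            for site in hxs.select("//ul/li"):
                item = DmozItem()
                item["title"] = site.select("a/text()").extract()
                item["link"] = site.select("a/@href").extract()
                item["desc"] = site.select("text()").extract()
                yield item

Running scrapy crawl dmoz from the project directory would then execute the spider.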

The tutorial continues with writing our first spider and crawling the site. Beautiful Soup Documentation — Beautiful Soup v4.0.0 documentation. Beautiful Soup is a Python library for pulling data out of HTML and XML files.

Beautiful Soup Documentation — Beautiful Soup v4.0.0 documentation

It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree. It commonly saves programmers hours or days of work. These instructions illustrate all major features of Beautiful Soup 4, with examples. I show you what the library is good for, how it works, how to use it, how to make it do what you want, and what to do when it violates your expectations. The examples in this documentation should work the same way in Python 2.7 and Python 3.2. You might be looking for the documentation for Beautiful Soup 3. This documentation has been translated into other languages by Beautiful Soup users: there is a Chinese version, a Japanese version (external link), and a Korean translation. Here's an HTML document I'll be using as an example throughout this document. Here are some simple ways to navigate that data structure. One common task is extracting all the URLs found within a page's <a> tags. Setting up Python in Windows 7. An all-wise journalist once told me that "everything is easier in Linux," and after working with it for a few years I'd have to agree — especially when it comes to software setup for data journalism.
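The example document and navigation snippets themselves aren't reproduced above. A small sketch using the real bs4 API, with an HTML snippet standing in for the "three sisters" document the docs actually use:

    from bs4 import BeautifulSoup

    html_doc = """
    <html><head><title>The Dormouse's story</title></head>
    <body>
    <p class="title"><b>The Dormouse's story</b></p>
    <p class="story">Once upon a time there were three little sisters:
    <a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
    <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
    <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>.
    </p>
    </body></html>
    """

    soup = BeautifulSoup(html_doc, "html.parser")

    # Simple ways to navigate the parse tree
    print(soup.title)          # <title>The Dormouse's story</title>
    print(soup.title.string)   # The Dormouse's story
    print(soup.p["class"])     # ['title']

    # One common task: extract all the URLs found within the page's <a> tags
    for link in soup.find_all("a"):
        print(link.get("href"))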

But … Many newsroom types spend the day in Windows without the option of Ubuntu or another Linux OS. I've been planning some training around Python soon, so I compiled this quick setup guide as a reference. I hope you find it helpful. Set up Python on Windows 7: note that Python currently exists in two versions, the older 2.x series and the newer 3.x series (for a discussion of the differences, see this). To make Python available from the command line, right-click Computer and select Properties, in the dialog box select Advanced System Settings, in the next dialog select Environment Variables, and in the User Variables section edit the PATH statement to include the Python install directory. That will let you load the Python interpreter; press Control-Z plus Return to exit the interpreter and get back to a C: prompt. The guide then covers setting up useful Python packages. Scraping CDC flu data with Python.

Getting my flu shot this week reminded me about weekly surveillance data the Centers for Disease Control and Prevention provides on flu prevalence across the nation. I’d been planning to do some Python training for my team at work, so it seemed like a natural to write a quick Python scraper that grabs the main table on the site and turns it into a delimited text file. So I did, and I’m sharing.

You can grab the code for the CDC-flu-scraper on GitHub. The code uses the mechanize and BeautifulSoup modules for web browsing and HTML parsing, respectively. Much of what I demonstrate here I started learning via Ben Welsh's fine tutorial on web scraping. We're still early in flu season, but if you watch this data each week you'll see the activity pick up quickly. Update 10/22/2011: Ben Welsh has contributed some improvements to this scraper, adding JSON output and turning it into a function.
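The scraper itself lives in the GitHub repo and isn't reproduced here, but the mechanize-plus-BeautifulSoup pattern described above (fetch the page, find the main table, write its rows to a delimited file) can be sketched roughly as follows; the URL and table selection are placeholders rather than the repo's actual code:

    import csv
    import mechanize
    from bs4 import BeautifulSoup

    def scrape_table(url, outfile="flu_data.csv"):
        # Fetch the page with a mechanize browser
        br = mechanize.Browser()
        br.set_handle_robots(False)
        html = br.open(url).read()

        # Find the first table and collect its rows as lists of cell text
        soup = BeautifulSoup(html, "html.parser")
        table = soup.find("table")            # placeholder: the real scraper targets a specific table
        rows = []
        for tr in table.find_all("tr"):
            rows.append([cell.get_text(strip=True) for cell in tr.find_all(["th", "td"])])

        # Write the rows out as a delimited text file
        with open(outfile, "wb") as f:        # Python 2: open csv files in binary mode
            csv.writer(f).writerows(rows)
        return rows

    scrape_table("http://example.com/flu-report.html")   # placeholder URL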