
Python & semantic


Semantic Python Scripting.

Python Text Processing with NLTK 2.0 Cookbook (Amazon US, UK) is a cookbook for Python's Natural Language Toolkit (NLTK).

Review for Python Text Processing with NLTK 2.0 Cookbook (Packt, 2010)

I’d suggest that this book is seen as a companion to O’Reilly’s Natural Language Processing with Python (available for free at nltk.org). The older O’Reilly book gives a lot of explanation of how to use NLTK’s components; Packt’s new book shows you lots of little recipes which build into larger projects, giving you a great hands-on toolkit. Overall the book is easy to read, has a huge set of sample recipes and feels very useful. I’ll be referring to it for our upcoming @socialties mobile app. You’ll need to download NLTK; you can also refer to some sample articles at Packt’s site and get Chapter 3 as a free PDF (see below). Here are my thoughts on the book.

Chapter 1: Tokenizing Text and WordNet Basics. If you haven’t tried tokenising text before, you may not realise how complicated it can be (expressing even basic rules for English is jolly hard!).

CubicWeb Semantic Web Framework.
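That point about tokenisation being hard is easy to demonstrate without NLTK at all. A minimal sketch (the sentence is invented for the demo) of why simple rules break:

```python
import re

text = "Good muffins cost $3.88 in New York. Please don't shout!"

# Naive whitespace splitting leaves punctuation stuck to the words:
naive = text.split()
# 'York.' comes out as one token, full stop included.

# A simple "word characters or single punctuation marks" regex fixes
# that, but now shreds contractions and currency amounts instead:
tokens = re.findall(r"\w+|[^\w\s]", text)
# "don't" becomes 'don', "'", 't' and "$3.88" becomes '$', '3', '.', '88'.
```

Handling both cases at once is exactly the kind of rule-juggling the chapter's tokenizers take care of for you.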

Create an rdflib Graph

You might parse some files into a new graph (see Introduction to parsing RDF into rdflib graphs) or open an on-disk rdflib store.

from rdflib.graph import Graph
g = Graph()
g.parse("...")

LiveJournal produces FOAF data for their users, but they seem to use foaf:member_name for a person’s full name.

Getting started with rdflib — rdflib v3.0.0 documentation

For this demo, I made foaf:name act as a synonym for foaf:member_name (a poor man’s one-way owl:equivalentProperty):

from rdflib.namespace import Namespace
FOAF = Namespace("http://xmlns.com/foaf/0.1/")
[g.add((s, FOAF['name'], n)) for s, _, n in g.triples((None, FOAF['member_name'], None))]

Run a Query

The rdflib package concentrates on providing the core RDF types and interfaces for working with RDF. In order to perform SPARQL queries, you need to install the companion rdfextras package, which includes a SPARQL plugin implementation. In order to use the SPARQL plugin in your code, the plugin must first be registered.

Continuing the example... The results are tuples of values in the same order as your SELECT arguments.

Namespaces.

This tutorial is for programmers used to building software on top of non-Semantic-Web data sources: using screen-scraping techniques, or using APIs that return XML, JSON, CSV etc.

Getting data from the Semantic Web

Getting data from Semantic Web sources is typically done in one of two ways: either directly getting data in an RDF serialization over HTTP, or by using a SPARQL endpoint. In this tutorial, we shall get some data from DBpedia, the Semantic Web version of Wikipedia.

Getting RDF data directly

Some websites produce RDF data that is available in one of the many RDF serializations.
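The second route mentioned above, a SPARQL endpoint, boils down to an HTTP GET with the query in the URL. This sketch only builds the request URL against DBpedia's public endpoint; actually fetching it needs network access, and the dbo:abstract property is assumed from DBpedia's ontology:

```python
from urllib.parse import urlencode

ENDPOINT = "http://dbpedia.org/sparql"  # DBpedia's public SPARQL endpoint

query = """
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Python_(programming_language)>
      <http://dbpedia.org/ontology/abstract> ?abstract .
}
"""

# Ask for SPARQL Query Results JSON; urllib.request.urlopen(url)
# would return a JSON document you can feed to json.loads().
url = ENDPOINT + "?" + urlencode({
    "query": query,
    "format": "application/sparql-results+json",
})
```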

The two most common RDF serializations that you need to worry about at the moment are RDF/XML and RDFa. When you first see RDF/XML, you may find it especially hard to understand compared to 'normal' XML: often it is machine-produced and contains some unfamiliar constructs. Fortunately, there are a variety of tools you can use to get at RDF data.

sudo easy_install -U "rdflib>=3.0.0"

from rdflib import Graph, URIRef
g = Graph()
g.parse("...")

Mnot/sparta - GitHub.

Testing — NetworkX v1.4 documentation.

Overview — NetworkX v1.4 documentation.

I like coding in Python, but not only for the aesthetics or the power of the language: I appreciate the ecosystem that lets me turn an idea into something concrete in a few commands.

A Python project: from idea to publication, in python

I will take a concrete example with today's idea, which was to build a triple store on top of redis_graph, following a tweet by Régis Gaidot. In the end I did not use redis_graph, because its set-based implementation was limiting and I preferred to use hashes, but that is not the point of this post. I will try to describe, command by command, what I did this afternoon.

Initialisation

We start by installing redis; for that, Homebrew on a Mac is a real blessing once you have known the power of apt-get...

$ brew install redis

Then you have to guess a bit how to launch the server (OK, or read the docs): there are some redis-* commands in my path, let's try redis-server. Bingo!

$ pip freeze > requirements.txt

Development

Publication

And voilà!