RDFa Primer We begin the introduction to RDFa by using a subset of all its possibilities, called RDFa Lite 1.1 [rdfa-lite]. The goal in defining that subset was to identify a set of features that can handle most simple-to-moderate structured data markup tasks without burdening authors with additional complexity. Many Web authors will not need more than this minimal subset. 2.1.1 The First Steps: Adding Machine-Readable Hints to Web Pages Consider Alice, a blogger who publishes a mix of professional and personal articles at her blog. We will construct markup examples to illustrate how Alice can use RDFa. A more complete markup of these examples is available on a dedicated page.
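A minimal sketch of the kind of RDFa Lite markup Alice might start with, using only the `vocab`, `typeof`, and `property` attributes (the vocabulary choice and all names and addresses here are illustrative, not taken from Alice's actual pages):

```html
<p vocab="http://schema.org/" typeof="Person">
  My name is <span property="name">Alice</span>, and you can reach me at
  <a property="email" href="mailto:alice@example.com">alice@example.com</a>.
</p>
```

The page renders exactly as it would without the extra attributes; a machine reading it can additionally extract that the page describes a Person with a given name and email address.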
Watson Semantic Web Search This is the Watson Web interface for searching ontologies and semantic documents using keywords. This interface is subject to frequent evolution and improvement. If you want to share your opinion, suggest improvements, or comment on the results, don't hesitate to contact us. At the moment, you can enter a set of keywords (e.g. "cat dog old_lady") and obtain a list of URIs of semantic documents in which the keywords appear as identifiers or in literals of classes, properties, and individuals.
Grassroots Programs - Annual2009 In addition to programs sponsored by divisions, round tables and committees, this year's Annual Conference features 10 "Grassroots Programs" selected by a jury of library school students and practitioners from 118 proposals. These Grassroots Programs are part of ALA President Jim Rettig's presidential initiative to increase opportunities for members to participate in, contribute to, and benefit from their association. The purpose of this initiative has been to broaden opportunities for ALA members to present programs at the Annual Conference and to compress the planning schedule to accommodate programs on very current issues. View the list of Grassroots Programs (goes to the ALA website)
RDFa, Drupal and a Practical Semantic Web In the march toward creating the semantic web, web content management systems such as Drupal and many proprietary vendors struggle with the goal of emitting structured information that other sites and tools can usefully consume. There is a balance to be struck between human and machine utility, not to mention simplicity of instrumentation. With RDFa (see the W3C proposal), software and web developers have the specification they need to structure data so that it carries meaning for both machines and humans, all in a single file. And from what we've seen recently, the Drupal community is making the best of it. Introducing RDFa RDFa is a set of XHTML attributes meant in particular to augment visual data with machine-readable hints.
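As a rough sketch of what those XHTML attributes look like in the RDFa 1.0 style of that era, a page element can name the resource it describes with `about` and attach properties with `property` and `rel` (the URIs and values below are hypothetical, not from any real Drupal site):

```html
<div xmlns:dc="http://purl.org/dc/elements/1.1/"
     about="http://example.org/node/42">
  <h2 property="dc:title">A Practical Semantic Web</h2>
  by <a rel="dc:creator" href="http://example.org/user/alice">alice</a>
</div>
```

Human visitors see an ordinary heading and byline; a machine extracts the title and creator of the resource identified by the `about` URI.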
The State of Linked Data in 2010 In May last year we wrote about the state of Linked Data, an official W3C project that aims to connect separate data sets on the Web. Linked Data is a subset of the wider Semantic Web movement, in which data on the Web is encoded with meaning using technologies such as RDF and OWL. The ultimate vision is that the Web will become much more structured, which opens up many possibilities for "smarter" Web applications. At this stage last year, we noted that Linked Data was ramping up fast, evidenced by the increasing number of data sets on the Web as of March 2009.
SweoIG/TaskForces/CommunityProjects/LinkingOpenData - ESW Wiki News 2014-12-03: The 8th edition of the Linked Data on the Web workshop will take place at WWW2015 in Florence, Italy. The paper submission deadline for the workshop is 15 March, 2015. 2014-09-10: An updated version of the LOD Cloud diagram has been published. The new version contains 570 linked datasets which are connected by 2909 linksets. Using Dublin Core - The Elements NOTE: This text was last revised in 2005. As of 2011, a completely revised User Guide is being developed at the wiki page. DCMI's Glossary and FAQ are also under revision. This section lists each element by its full name and label. For each element there are guidelines to assist in creating metadata content, whether it is done "from scratch" or by converting an existing record in another format.
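To give a sense of how such elements are used in practice, here is a sketch of Dublin Core metadata embedded in an HTML page using the conventional `DC.` meta-tag naming (the document values are made up for illustration):

```html
<head>
  <link rel="schema.DC" href="http://purl.org/dc/elements/1.1/" />
  <meta name="DC.title"   content="Using Dublin Core - The Elements" />
  <meta name="DC.creator" content="Example Author" />
  <meta name="DC.date"    content="2005-11-07" />
</head>
```

Each `meta` element pairs one Dublin Core element (Title, Creator, Date) with a literal value, which is exactly the kind of content the per-element guidelines help authors write.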
DBpedia DBpedia (from "DB" for "database") is a project aiming to extract structured content from the information created as part of the Wikipedia project. This structured information is then made available on the World Wide Web. DBpedia allows users to query relationships and properties associated with Wikipedia resources, including links to other related datasets. DBpedia has been described by Tim Berners-Lee as one of the more famous parts of the decentralized Linked Data effort. Background The project was started by people at the Free University of Berlin and the University of Leipzig, in collaboration with OpenLink Software, and the first publicly available dataset was published in 2007. It is made available under free licences, allowing others to reuse the dataset. Dataset
Primer - Getting into the semantic web and RDF using N3 [translations into other languages] The world of the semantic web, as based on RDF, is really simple at the base. This article shows you how to get started. It uses a simplified teaching language -- Notation 3 or N3 -- which is basically equivalent to RDF in its XML syntax, but easier to scribble when getting started. Subject, verb and object
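The subject-verb-object idea can be sketched directly in N3; every statement is a triple ending in a period (the names below are illustrative, defined only in the example's own namespace):

```n3
@prefix : <http://example.org/terms#> .

:pat :knows :jo .    # subject, verb (predicate), object
:pat :age 24 .       # the object can also be a literal value
```

Reading each line aloud gives a plain sentence: "pat knows jo", "pat's age is 24." That is the entire base of the model; everything else builds on triples like these.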
How DBpedia Treats Wikipedia as a Database - ReadWriteCloud DBpedia is a community-driven effort that treats Wikipedia like a database, enabling people to run more sophisticated queries, distribute the open encyclopedia's data to the Web, and add back to Wikipedia for the purposes of enriching it. In a blog post this week, the community showed again what makes the service a unique effort with the launch of the latest version of the technology. You can get into the weeds pretty quickly with DBpedia when seeking to better understand how it functions. We find it useful to think of it in terms of context. On Wikipedia, you can do keyword searches for the Rhine River in Germany. But you can't ask it questions that have more context, such as which rivers that flow into the Rhine are longer than 50 kilometers.
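On DBpedia, a question like that becomes a SPARQL query. The sketch below assumes DBpedia's ontology models "flows into" via `dbo:riverMouth` and stores `dbo:length` in metres; the exact properties vary across DBpedia versions, so treat this as illustrative rather than a query guaranteed to run against any given endpoint:

```sparql
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>

SELECT ?river ?length WHERE {
  ?river dbo:riverMouth dbr:Rhine ;   # rivers that flow into the Rhine
         dbo:length ?length .
  FILTER (?length > 50000)            # length assumed to be in metres
}
```

Keyword search cannot express the join and the numeric filter; structured data makes both a one-line pattern.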
Open Archives Initiative - Protocol for Metadata Harvesting - v.2.0 Editors From the OAI Executive: Carl Lagoze <firstname.lastname@example.org> -- Cornell University - Computer Science; Herbert Van de Sompel <email@example.com> -- Los Alamos National Laboratory - Research Library. From the OAI Technical Committee: Michael Nelson <firstname.lastname@example.org> -- NASA - Langley Research Center; Simeon Warner <email@example.com> -- Cornell University - Computer Science. Table of Contents 1. Introduction 2.
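For orientation before the formal sections: OAI-PMH v2.0 requests are ordinary HTTP GETs in which a `verb` parameter selects one of the protocol's six operations (Identify, ListMetadataFormats, ListSets, ListIdentifiers, ListRecords, GetRecord), and responses are XML. A sketch against a hypothetical repository base URL:

```
GET http://example.org/oai?verb=Identify
GET http://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc&from=2002-06-01
```

The `metadataPrefix` argument (here `oai_dc`, the required Dublin Core format) tells the repository which metadata format to return; `from` restricts the harvest to records changed since that date.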
First Drafts of Three Audio API Specifications Published 15 December 2011 The Audio Working Group has published three First Public Working Drafts to provide an advanced audio API for the Web: the Web Audio API and MediaStream Processing API specifications each define a different approach to processing and synthesizing audio streams directly in script.