Finding Europeana audio with SPARQL. When I first heard about the SPARQL endpoint for the Europeana aggregation of data about European cultural artifacts, the first example I heard about was an MP3 audio file of a Slovenian version of "O sole mio." I happened to be in the middle of packing for a family visit over Christmas and immediately tweeted "Lots of holiday stuff to do, but the new Ontotext Europeana SPARQL endpoint points to MP3s! So tempting..." This past Sunday morning I finally made some time to explore it further, and I found 6,219 audio files. (As a SPARQL geek's alternative to YouTube, the 166,872 video resources with an edm:type value of "VIDEO" also look like a tempting way to kill some time.) The following query pulls down data about 100 of the audio files (which 100 you pull depends on the OFFSET value), and this XSLT stylesheet converts a SPARQL XML query result version of the results into a simple HTML file that shows the title, creator, and source of each one, with the title being a hypertext link to the audio file itself.
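A query along these lines might look like the following sketch. It assumes the Europeana Data Model, where audio resources carry an edm:type value of "SOUND"; the exact property paths (whether titles hang off the item itself or an associated proxy) vary by endpoint, so treat this as illustrative rather than the post's actual query.

```sparql
PREFIX edm: <http://www.europeana.eu/schemas/edm/>
PREFIX dc:  <http://purl.org/dc/elements/1.1/>

# Sketch: pull title, creator, and source for 100 audio resources.
# Property attachment points are assumptions, not the original query.
SELECT ?item ?title ?creator ?source
WHERE {
  ?item edm:type "SOUND" ;
        dc:title ?title .
  OPTIONAL { ?item dc:creator ?creator }
  OPTIONAL { ?item dc:source  ?source }
}
LIMIT 100
OFFSET 0
```

Paging through the full 6,219 results is then just a matter of re-running the query with OFFSET 100, OFFSET 200, and so on.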
ConverterToRdf - W3C Wiki. A Converter to RDF is a tool which converts application data from an application-specific format into RDF for use with RDF tools and integration with other data. Converters may be part of a one-time migration effort, or part of a running system which provides a semantic web view of a given application.
See also: RDFImportersAndAdapters. Please add converters as you make them or hear of them. Formats, in alphabetical order:

BibTex: BibTex is the format for bibliographic references in TeX. BibBase transforms BibTeX files (given in a URL) into Linked Data with RDF/XML output support.
Bittorrent: the converter link is alas now 404 (as of 2007).
CSV (Comma-Separated Values): see also Flat Files and TSV. An RDF Extension is available for Google Refine.
Debian: the package information in Debian and similar systems (Ubuntu, Fink, etc.), with its general usefulness and its graph-like nature, is a clear candidate for conversion to RDF. See the VitaVoni blog about this.
Excel: see JPEG.
File Systems.

MARC Code List for Relators (Network Development and MARC Standards Office, Library of Congress). NOTE: The MARC Code Lists for Relators, Sources, and Description Conventions have been reorganized.
Relator terms and codes can be accessed on this page (below). The source code list parts can now be accessed at Source Codes for Vocabularies, Rules, and Schemes. List identifier: marcrelator. The purpose of this list of relator terms and associated codes is to allow the relationship between an agent and a resource to be designated in bibliographic records. The relator code list is in two sections: the Term Sequence, a list of standard relator terms with codes for the terms, references from unused terms, and descriptions/definitions of term concepts; and the Code Sequence, a list of valid and obsolete relator codes with their associated terms. Alternative access: the terms and codes on this list may also be accessed via the Library of Congress Linked Data Service (LDS) at id.loc.gov/vocabulary/relators.
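Because the relator codes resolve to IRIs at id.loc.gov, they can be used directly as RDF properties linking a resource to an agent. The record and agent IRIs below are made up for illustration; only the relator IRI pattern comes from the LOC service.

```turtle
@prefix rel: <http://id.loc.gov/vocabulary/relators/> .

# Hypothetical bibliographic record: "aut" and "ill" are the MARC
# relator codes for Author and Illustrator respectively.
<http://example.org/book/1>
    rel:aut <http://example.org/person/carroll> ;
    rel:ill <http://example.org/person/tenniel> .
```

Dereferencing a relator IRI such as rel:ill against the Linked Data Service returns the term's label and definition in RDF.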
The distiller can also run in XHTML+RDFa 1.0 mode (if the incoming XHTML content uses the RDFa 1.0 DTD and/or sets the version attribute). The package is available for download, although it may be slightly out of sync with the code running this service. If you intend to use this service regularly and at a large scale, consider downloading the package and running it locally. Storing a (conceptually) "cached" version of the generated RDF, instead of referring to the live service, is also worth considering as a way to avoid overloading this server. What is it?
RDFa 1.1 is a specification for attributes to be used with XML languages or with HTML5 to express structured data. As installed, this service is a server-side implementation of an RDFa 1.1 distiller. Distiller options: Output format (option: format; values: turtle, xml, json, nt; default: turtle). The default output format is Turtle.
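As a small illustration of what the distiller consumes, here is a minimal HTML5+RDFa fragment; the vocabulary and values are made up for the example.

```html
<!-- Minimal HTML5+RDFa sketch; schema.org vocabulary and the name
     value are illustrative, not part of the distiller's docs. -->
<div vocab="http://schema.org/" typeof="Person">
  <span property="name">Alice Example</span>
</div>
```

Run through the distiller with format=turtle, a fragment like this yields a blank node typed as schema:Person with a schema:name literal of "Alice Example".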
The distiller accepts text/html input as HTML5+RDFa. SPARQL 1.1 Federated Query. Abstract: RDF is a directed, labeled graph data format for representing information in the Web. SPARQL can be used to express queries across diverse data sources, whether the data is stored natively as RDF or viewed as RDF via middleware. This specification defines the syntax and semantics of the SPARQL 1.1 Federated Query extension for executing queries distributed over different SPARQL endpoints. The SERVICE keyword extends SPARQL 1.1 to support queries that merge data distributed across the Web. 1 Introduction: The growing number of SPARQL query services offers data consumers an opportunity to merge data distributed across the Web. 1.1 Document Conventions. 1.1.2 Result Descriptions: Result sets are illustrated in tabular form, as in the SPARQL 1.1 Query document.
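The shape of a SERVICE query can be sketched as follows; the endpoint URL is a placeholder, not one from the specification.

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Sketch of a federated query: the pattern inside SERVICE is sent
# to the named remote endpoint, and its bindings are merged with
# any local patterns. The endpoint URL here is illustrative.
SELECT ?name
WHERE {
  SERVICE <http://example.org/sparql> {
    ?person foaf:name ?name .
  }
}
```

Each solution returned by the remote endpoint contributes its bindings (here, for ?name) to the overall result set.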
A 'binding' is a pair (variable, RDF term). 2 SPARQL 1.1 Federated Query Extension: The SERVICE keyword instructs a federated query processor to invoke a portion of a SPARQL query against a remote SPARQL endpoint. VIVO Data - what and from where - VIVO. Introduction: You've looked at VIVO, you've seen VIVO in action at other universities or organizations, and you've downloaded and installed the code. What next? How do you get information about your institution into your VIVO? The answer may be different everywhere; it depends on a number of factors. How big is your organization? Some smaller ones have implemented VIVO through interactive editing alone: they enter every person, publication, organizational unit, grant, and event they wish to show up, and they keep up with changes "manually" as well.
This approach works well for organizations of under 100 people or so, especially if you have staff or student employees who are good at data entry and enjoy learning more about the people and the research. Next, what is different about data in VIVO? As we've described, it's well worth learning the VIVO editing environment and creating sample data even if you know you will require an automated approach to data ingest and update. Further topics.
Extending Google Refine for VIVO - VIVO. Dr. Curtis L. Cole, Dan Dickinson, Kenneth Lee, Eliza Chan (Weill Cornell Medical College). Google Refine (previously Freebase Gridworks) is a freely available open-source software package for manipulating datasets. One of Google Refine's unique features is its tight integration with the Freebase database, which allows users to "reconcile" data in a grid format against entities found within the Freebase graph. Google Refine also allows a dataset to be aligned to the Freebase schema, converting grid data into graph data that can then be exported in a triple format importable by Freebase. Weill Cornell Medical College proposes to enhance both Google Refine and VIVO to allow integration between the two systems, similar to the existing Freebase integration. Section I: VIVO servlet - Reconciliation service. The VIVO reconciliation service is a Java HttpServlet that parses requests from Google Refine and returns query results back to Google Refine.
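Internally, such a servlet would need to match each incoming name string against VIVO's triple store. The query below is a hypothetical sketch of that lookup; the choice of foaf:Person, rdfs:label, and the substring filter are assumptions about the implementation, not taken from the proposal.

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Hypothetical lookup a reconciliation servlet might issue for the
# incoming query string "smith"; matching strategy is a sketch.
SELECT ?person ?label
WHERE {
  ?person a foaf:Person ;
          rdfs:label ?label .
  FILTER ( CONTAINS(LCASE(STR(?label)), "smith") )
}
LIMIT 10
```

The servlet would then rank the candidate matches and serialize them into the JSON response format that Google Refine's reconciliation dialog expects.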
A Direct Mapping of Relational Data to RDF. 1 Introduction: Relational databases proliferate both because of their efficiency and their precise definitions, allowing tools like SQL [SQLFN] to manipulate and examine their contents predictably and efficiently. The Resource Description Framework (RDF) [RDF-concepts] is a data format based on a web-scalable architecture for the identification and interpretation of terms. This document defines a mapping from a relational representation to an RDF representation. Strategies for mapping relational data to RDF abound.
The direct mapping defines a simple transformation, providing a basis for defining and comparing more intricate transformations. This document includes both an informal and a formal description of the transformation. The Direct Mapping is intended to provide a default behavior for R2RML: RDB to RDF Mapping Language [R2RML]. 2 Direct Mapping Description (Informative): The direct mapping defines an RDF Graph [RDF-concepts] representation of the data in a relational database.
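To make the idea concrete, here is a sketch of how a single row might come out under the direct mapping's conventions (row IRIs built from table name and primary key, column IRIs from table and column names). The base IRI, table, and values are invented for the example; consult the specification for the exact IRI-generation rules.

```turtle
@base <http://example.org/DB/> .

# Sketch: a table "People" with primary key ID, holding one row
# (ID=7, fname="Bob"), maps roughly to the triples below.
# Base IRI and data are illustrative assumptions.
<People/ID=7> a <People> ;
    <People#ID> 7 ;
    <People#fname> "Bob" .
```

Each row becomes a subject typed by its table, and each non-null cell becomes one triple whose predicate encodes the table and column names.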
Reconciling DERI researchers using Sindice | GRefine RDF Extension. In this example, I will reconcile a list of people working at the Digital Enterprise Research Institute (DERI) with the help of Sindice. The CSV file can be downloaded. Create a Google Refine project from the CSV file. From the "name" column's drop-down menu, select "Discover related RDF datasets." This will query Sindice for datasets containing RDF data about the first ten rows. Querying Sindice might take some time (up to 5 minutes). The result is a list of domains containing related data. Alternatively, Sindice domain-specific services can be added directly through the RDF menu. For more details and technical documentation, see Reconciliation using Sindice.