VOYAGES OF THE SEMANTIC ENTERPRISE
SPARQL Web Pages
I've written here before about how SPARQL Web Pages (SWP) let you convert your RDF to HTML or XML by embedding SPARQL queries into the appropriate markup. In that very simple example, I showed how to create a web page for an address book entry and then display it both in TopBraid Composer and in a regular web browser. Today I'm going to show how I did something similar to display a single Person instance from the Kennedys sample data included with TopBraid Composer, and then defined a page that showed all the people in that data model. You can download and try the project here.
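SWP itself uses a declarative vocabulary inside TopBraid, but the underlying idea — query the RDF, then splice the results into markup — can be sketched in plain Python. Everything below (the triples, the helper names) is invented for illustration and is not SWP's actual syntax:

```python
# A toy in-memory "graph": subject, predicate, object tuples.
triples = [
    ("ab:craig", "ab:firstName", "Craig"),
    ("ab:craig", "ab:lastName", "Ellis"),
    ("ab:craig", "ab:email", "craig@example.com"),
]

def values_for(subject, predicate):
    """Return all objects for a given subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

def person_page(subject):
    """Render one address-book entry as HTML, in the spirit of SWP."""
    first = values_for(subject, "ab:firstName")[0]
    last = values_for(subject, "ab:lastName")[0]
    email = values_for(subject, "ab:email")[0]
    return f"<h1>{first} {last}</h1><p>Email: {email}</p>"

print(person_page("ab:craig"))
```

In SWP the query and the template live together in the markup; here they are separated into `values_for` and `person_page` only to keep the sketch readable.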
Running TopBraid Live in the Amazon EC2 Cloud
A recent Dilbert strip inspired me to go through Dave Winer's EC2 for Poets tutorial as a geeky weekend project. It was surprisingly easy and inexpensive to get a computer image running in Amazon's "Elastic Compute Cloud" (EC2) and then to get a copy of TopQuadrant's TopBraid Live running in that image. These images are cheap to run, as you can see on their price list. Note that the cost per hour of running a default Linux image on servers in northern Virginia is not eighty-five cents an hour, but eight and a half cents. (It's an additional penny an hour when using their servers in California, Ireland, or Singapore.)
Living in the XML and OWL World - Comprehensive Transformations of XML Schemas and XML data to RDF/OWL
Many enterprise information models are expressed using XML Schemas, and data is commonly exchanged between applications as XML that complies with those schemas. Connecting XML data from different systems in a coherent, aggregated way is a challenge that confronts many organizations. The capabilities of RDF/OWL for describing the semantics of different data models and for aggregating disparate data are a natural fit for addressing these challenges. For a number of years now, TopBraid Composer has included the ability to convert XSDs and associated XML files to RDF/OWL.
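TopBraid's importer is far more comprehensive, but the basic shape of an XML-to-RDF transformation — elements become resources, child elements become predicate/object pairs — can be sketched with the standard library. The element names and base URI below are invented for this sketch:

```python
import xml.etree.ElementTree as ET

xml_data = """
<person id="p1">
  <firstName>John</firstName>
  <lastName>Kennedy</lastName>
</person>
"""

def xml_to_triples(xml_text, base="http://example.org/"):
    """Map an XML element to an RDF subject and its children to triples."""
    root = ET.fromstring(xml_text)
    subject = base + root.attrib["id"]
    triples = [(subject, "rdf:type", base + root.tag)]
    for child in root:
        triples.append((subject, base + child.tag, child.text))
    return triples

for t in xml_to_triples(xml_data):
    print(t)
```

A real conversion also has to honor the schema (types, cardinalities, namespaces), which is exactly what a schema-driven importer adds over this naive element walk.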
SPARQL Endpoints
SPARQL endpoints are an increasingly popular way to expose linked data. Invoking SPARQL endpoints from TopBraid Composer's SPARQL view was the subject of a previous TQ blog entry on SPARQL endpoints. In this entry we will discuss how TopBraid Live can be used to implement a SPARQL endpoint. SPARQL endpoints are Web services that conform to the SPARQL protocol.
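Because the SPARQL protocol is plain HTTP, any client can talk to such an endpoint: the query goes in the `query` parameter and the results come back in a standard format such as SPARQL JSON. A minimal client sketch, with a placeholder endpoint URL:

```python
import json
import urllib.parse
import urllib.request

def build_request(endpoint, query):
    """Construct a SPARQL-protocol GET request asking for JSON results."""
    url = endpoint + "?" + urllib.parse.urlencode({"query": query})
    return urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"}
    )

def bindings(result_json):
    """Flatten the SPARQL JSON results format into simple dicts."""
    rows = json.loads(result_json)["results"]["bindings"]
    return [{var: cell["value"] for var, cell in row.items()} for row in rows]

req = build_request("http://example.org/sparql",
                    "SELECT ?s WHERE { ?s ?p ?o } LIMIT 10")
print(req.full_url)
# urllib.request.urlopen(req) would send the query; feed the response
# body to bindings() to get the result rows.
```

An endpoint implemented by TopBraid Live answers exactly this kind of request; the sketch above is just the client side of the protocol.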
How to convert a spreadsheet to SKOS
In an earlier entry, we learned how SPARQL Rules can increase the quality of taxonomies and other controlled vocabularies stored using the W3C SKOS ontology. (As I wrote there, the Simple Knowledge Organization System vocabulary management specification is gaining popularity because, as a standard, it makes it easier to share taxonomies and thesauri between different systems. It also guards investments in vocabulary development against the potential problems of dependence on a proprietary vendor format.) TopQuadrant's Enterprise Vocabulary Net (EVN) vocabulary manager uses SKOS as its default format for storing data. Whether you use EVN or not, a first step in the systematic management of vocabularies is often the conversion of vocabularies stored in ad hoc spreadsheets—an unfortunately very popular way to store them—to SKOS, so today we'll look at how TopBraid makes this conversion easy.
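The essence of such a conversion is small: each spreadsheet row becomes a skos:Concept with a preferred label, and a "broader term" column becomes skos:broader links. A stdlib sketch over an invented two-column spreadsheet (TopBraid does this with a configurable importer rather than code like this):

```python
import csv
import io

# A hypothetical vocabulary spreadsheet: term, broader term.
spreadsheet = """term,broader
Mammals,Animals
Dogs,Mammals
"""

def rows_to_skos(csv_text, base="http://example.org/vocab#"):
    """Emit one skos:Concept per row, with prefLabel and broader links, as Turtle."""
    lines = ["@prefix skos: <http://www.w3.org/2004/02/skos/core#> ."]
    for row in csv.DictReader(io.StringIO(csv_text)):
        uri = "<" + base + row["term"].replace(" ", "_") + ">"
        lines.append(f'{uri} a skos:Concept ; skos:prefLabel "{row["term"]}" ;')
        lines.append(f'  skos:broader <{base}{row["broader"].replace(" ", "_")}> .')
    return "\n".join(lines)

print(rows_to_skos(spreadsheet))
```

Real spreadsheets are messier — merged cells, indentation-encoded hierarchies, synonym columns — which is why a dedicated importer beats a ten-line script in practice.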
How to use the SPARQLMotion Debugger
Since release 3.3, TopBraid Composer has included an interactive debugger for SPARQLMotion scripts that can make your development go much faster. TopQuadrant VP of Product Development Holger Knublauch wrote a nice overview of the debugger's features in his blog; below is a short hands-on tutorial on using the debugger. We're going to put together a short SPARQLMotion script with a problem that prevents it from running properly. Experienced SPARQLMotion developers may notice the problem when we add it, but leave it in there—we'll see how the SPARQLMotion debugger helps us locate it.
Creating our script
Ontologies and Data Models – are they the same?
Yesterday a newcomer on the TopBraid Users Forum asked how ontologies may be different from logical data models. As is to be expected on the TopBraid Forum, by ontologies he meant specifically ontology models expressed in RDFS/OWL. Because we frequently hear this or similar questions in our training sessions, workshops, and conversations with customers, I decided to respond in a blog post instead of writing an e-mail. Data modeling was invented more than thirty years ago to help with the design of databases, specifically relational databases. As quoted below, the ANSI definition from 1975 differentiated among three data models – conceptual, logical and physical.
Converting UML Models to OWL - Part I
Convert UML to OWL - why would you ever want to do this? One reason suffices: many enterprise models that serve as either standards or enterprise schemas are specified in UML. Increasingly, there is interest in re-purposing the content of UML models in RDF/OWL, and a need for RDF/OWL to interoperate with systems built from UML models. UML models are notoriously hard to exchange between UML tools, let alone transform into OWL. The exchange format XMI is not only difficult to understand but also has vendor-specific extensions.
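The full conversion covered in this series is richer, but the core correspondences are easy to state: a UML class maps to an owl:Class, an attribute to an owl:DatatypeProperty, and an association to an owl:ObjectProperty with a domain and range. A toy sketch of just that mapping (the model and all names are invented):

```python
# Toy UML model: class name -> (attributes, associations to other classes).
uml_model = {
    "Person": (["name"], [("worksFor", "Organization")]),
    "Organization": (["legalName"], []),
}

def uml_to_owl(model):
    """Map UML classes/attributes/associations to OWL axioms as triples."""
    triples = []
    for cls, (attrs, assocs) in model.items():
        triples.append((cls, "rdf:type", "owl:Class"))
        for attr in attrs:
            triples.append((attr, "rdf:type", "owl:DatatypeProperty"))
            triples.append((attr, "rdfs:domain", cls))
        for name, target in assocs:
            triples.append((name, "rdf:type", "owl:ObjectProperty"))
            triples.append((name, "rdfs:domain", cls))
            triples.append((name, "rdfs:range", target))
    return triples

for t in uml_to_owl(uml_model):
    print(t)
```

The hard part in practice is not this mapping but reliably reading the UML model out of XMI in the first place, for the reasons given above.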
S is for Semantics: Stop Press: Microdata in TopBraid
Back during the RSS wars, there was heated disagreement about the format for RSS - among the disagreements was whether RSS should be RDF-based or not. The history is sordid, but from the point of view of someone building a linked data hub like the TopBraid Suite, we could be agnostic about this. Sure, it was great that RSS 1.0 was formatted as good RDF, so that any RDF reader could read it. But the formats aren't that different - at the time, I found a Rosetta stone (Yahoo! was publishing identical feeds in four formats: RSS 0.9x, 1.0, 2.0, and Atom) and gave it to one of my programmers, asking him to create an RSS importer for the TopBraid Suite that would turn all of them into the same triples.
S is for Semantics
Composing the Semantic Web
Validating schema.org Microdata with SPIN
The new 3.5.1 version of TopBraid Composer introduces some initial features to import, browse, edit, and analyze Microdata. I wrote about this in a previous blog entry - if you want to try those features, just download TBC's evaluation version, keeping in mind that Microdata support is still at an early stage and that, for example, the parser isn't fast yet. Today I will focus on a SPARQL-based approach for validating schema.org Microdata using SPIN inside of TopBraid. I have published a library of SPIN constraints at http://topbraid.org/spin/schemaspin.
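SPIN attaches SPARQL queries to classes as constraints, and any instance that matches the query is reported as a violation. The spirit of such a check can be sketched in plain Python; the data and the constraint below are invented examples, not taken from the schemaspin library:

```python
# Toy Microdata-style instances: each dict is one item with its type.
items = [
    {"type": "schema:Person", "schema:name": "Alice"},
    {"type": "schema:Person"},  # missing a name -> violation
]

# A SPIN-style constraint: instances of schema:Person must have schema:name.
constraints = {"schema:Person": ["schema:name"]}

def violations(items, constraints):
    """Return (item, missing-property) pairs, like SPIN constraint violations."""
    found = []
    for item in items:
        for prop in constraints.get(item["type"], []):
            if prop not in item:
                found.append((item, prop))
    return found

for item, prop in violations(items, constraints):
    print(f"Constraint violation: {item['type']} missing {prop}")
```

In SPIN the constraint would be a SPARQL query attached to the schema:Person class, so the same check runs directly over the imported Microdata triples rather than over Python dicts.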