
Semantic Web


Semantic Web. Defining an ontology with OWL. Many computer languages have appeared for building and manipulating ontologies. In order to produce a standardized language, the W3C created a working group, WebOnt, in November 2001, bringing together the actors in the field, including DARPA (Defense Advanced Research Projects Agency), which had developed the DAML+OIL language based on XML and RDF. The group's work led to the OWL recommendation in February 2004. OWL thus defines an RDF syntax for describing and building the vocabularies used to create ontologies; it can be compared to XML Schema as a way of defining XML grammars. There are therefore two RDF-based languages for defining vocabularies: OWL and RDF Schema. OWL comes in three variants: OWL Lite, OWL DL and OWL Full. A class is a group of individuals sharing the same characteristics. Some resources on OWL (a tutorial; Tom Gruber's article "What Is an Ontology?").
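As a hedged illustration of that idea (the zoo namespace and the Animal and Bird classes below are invented, and rdflib is just one convenient library, not the only option), a tiny OWL vocabulary declaring a class and a subclass could be sketched in Python like this:

```python
# A minimal sketch of an OWL vocabulary built with rdflib (pip install rdflib).
# The http://example.org/zoo# namespace and the Animal/Bird classes are invented for illustration.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL, RDF, RDFS

ZOO = Namespace("http://example.org/zoo#")

g = Graph()
g.bind("zoo", ZOO)
g.bind("owl", OWL)

# An OWL class: a group of individuals sharing the same characteristics.
g.add((ZOO.Animal, RDF.type, OWL.Class))
g.add((ZOO.Animal, RDFS.label, Literal("Animal", lang="en")))

# A subclass, narrowing that group of individuals.
g.add((ZOO.Bird, RDF.type, OWL.Class))
g.add((ZOO.Bird, RDFS.subClassOf, ZOO.Animal))

print(g.serialize(format="turtle"))
```

Serializing the graph as Turtle shows the owl:Class and rdfs:subClassOf statements that make up the vocabulary.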

SKOS Simple Knowledge Organization System - home page. SKOS is an area of work developing specifications and standards to support the use of knowledge organization systems (KOS) such as thesauri, classification schemes, subject heading lists and taxonomies within the framework of the Semantic Web. Alignment between SKOS and new ISO 25964 thesaurus standard (2012-12-13): ISO 25964-1, published in 2011, replaced the previous thesaurus standards ISO 2788 and ISO 5964 (both now withdrawn). Members of the Working Group responsible for ISO 25964 have gone on to consider the implications for SKOS users. They have developed a set of linkages between the elements of the ISO 25964 data model and those of SKOS, SKOS-XL, and MADS/RDF. From Chaos, Order: SKOS Recommendation Helps Organize Knowledge (2009-08-18). Call for Review: SKOS Reference Proposed Recommendation (2009-06-15). The Semantic Web Deployment Working Group has published the Proposed Recommendation of the SKOS Simple Knowledge Organization System Reference.
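To make the notion of a knowledge organization system concrete, here is a minimal, hedged sketch (the concept scheme and concepts are invented for the example) of a two-term thesaurus fragment expressed in SKOS with rdflib:

```python
# A minimal SKOS sketch with rdflib; the scheme URI and the concepts are invented for illustration.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/thesaurus/")

g = Graph()
g.bind("skos", SKOS)

# A concept scheme grouping the concepts of our toy thesaurus.
g.add((EX.scheme, RDF.type, SKOS.ConceptScheme))
g.add((EX.scheme, SKOS.prefLabel, Literal("Toy thesaurus", lang="en")))

# Two concepts linked by a broader/narrower (hierarchical) relation.
g.add((EX.mammals, RDF.type, SKOS.Concept))
g.add((EX.mammals, SKOS.prefLabel, Literal("Mammals", lang="en")))
g.add((EX.mammals, SKOS.inScheme, EX.scheme))

g.add((EX.cats, RDF.type, SKOS.Concept))
g.add((EX.cats, SKOS.prefLabel, Literal("Cats", lang="en")))
g.add((EX.cats, SKOS.broader, EX.mammals))
g.add((EX.cats, SKOS.inScheme, EX.scheme))

print(g.serialize(format="turtle"))
```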

Linked Geospatial Data 2014 Workshop, Part 1: Web Services or SPARQL Modeling? | GeoKnow Blog. The W3C (World Wide Web Consortium) and OGC (Open Geospatial Consortium) organized the Linked Geospatial Data 2014 workshop in London this week. The GeoKnow project was represented by Claus Stadler of Universität Leipzig, and by Hugh Williams and myself (Orri Erling) from OpenLink Software. The Open Knowledge Foundation (OKFN) also held an Open Data Meetup on the evening of the first day of the workshop. Reporting on each talk and the many highly diverse topics addressed is beyond the scope of this article; for that you can go to the program and the slides, which will be online. Instead, I will talk about questions that seemed to me to be in the air, and about some conversations I had with the relevant people. The trend at events like this is towards shorter and shorter talks and more and more interaction. In this workshop, talks were given in series of three, with all questions taken at the end and all the presenters on stage.

Web services or SPARQL? How should geometries be modeled? S is for Semantics. CoursWebSem. Linked Data. www.w3.org/2010/Talks/0322-egov-sandro/talk.pdf. Pub/DataTuesdayJan2012.pdf. IT (513) Home | OpenCalais. Big Data: buzzword or revolution? | Galigeo Blog. What? You’re in IT and don’t know about Big Data?! Unless you have lived in a box for the last two years, if you have even the slightest interest in IT (and not just business intelligence), you have heard about “Big Data” and most likely about “Hadoop” and “NoSQL”.

Right? When new terms last longer than a year, we can assume that they are more than sales and marketing spiel, that they carry some real substance. What is it again? Big Data: in a nutshell, it's 3V + 2V! A third component then appeared, HBase, a column-oriented NoSQL database that naturally runs on HDFS. Around Hadoop lives an entire ecosystem offering various interfaces: some hide complexity, like Hive, which accepts pseudo-SQL to query data; others connect to the (old) relational model, like the ETL tool Sqoop. NoSQL: otherwise known as "Not Only SQL", a term that refers to all databases not based on the (old) relational model of RDBMS.

It should be noted that this "new world" does not cut all bridges with the past. Publication of the 2012 presentation slides. Back from SemWeb Pro 2012. Introduction to the topic and the day by Nicolas Chauvat (Logilab) (Logilab's forge): a reminder about the Semantic Web: THE STRENGTH OF LINKS IN THE WEB, the URIs and DNs (global/local, decentralized) and the links that unify them... THE STRENGTH OF OPEN DATA (on the web, in non-proprietary formats). RDFa (RDF in HTML attributes/tags); RDB-to-RDF, for publishing RDF data from relational databases, either through a direct mapping plus post-processing, or by using the W3C R2RML vocabulary (the two steps in one: RDB to RDF Mapping Language), a Candidate Recommendation since February 2012. SPARQL 1.1 (the Wikipedia article is only a stub and really needs reworking). Linked Enterprise Data, by Fabrice Lacroix of Antidot: a case study (Antidot's own information system) showing the value of these technologies for corporate information systems and intranets.
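As an illustrative sketch of the R2RML approach mentioned above (the EMPLOYEE table, its columns and the example.org vocabulary are invented, and the mapping is only parsed to check its syntax, not executed against a database), a minimal R2RML mapping could look like this:

```python
# A minimal R2RML mapping sketch, embedded as Turtle and parsed with rdflib to validate its syntax.
# The EMPLOYEE table, its columns and the example.org URIs are invented for illustration.
from rdflib import Graph

r2rml_mapping = """
@prefix rr: <http://www.w3.org/ns/r2rml#> .
@prefix ex: <http://example.org/ns#> .

<#EmployeeMap>
    rr:logicalTable [ rr:tableName "EMPLOYEE" ] ;
    rr:subjectMap [
        rr:template "http://example.org/employee/{EMPNO}" ;
        rr:class ex:Employee
    ] ;
    rr:predicateObjectMap [
        rr:predicate ex:name ;
        rr:objectMap [ rr:column "ENAME" ]
    ] .
"""

g = Graph()
g.parse(data=r2rml_mapping, format="turtle", publicID="http://example.org/mapping/")
print(f"Parsed {len(g)} mapping triples")
```

A direct mapping followed by post-processing can reach the same result; R2RML simply folds the two steps into one declarative mapping document, which is the point made in the talk above.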

Among the questions asked by the audience at SemWeb Pro: semantic web generalities. Ordnance Survey Datasets. Launching a new Linked Data service. We'll soon be launching the next iteration of our Linked Data Service. In preparation we have created a beta version, which has been designed for you to have a play around, test and review against your current applications. We launched Linked Data in April 2010 and have seen continued growth of its use in government and research.

This has allowed us to develop a deeper understanding of how Linked Data is used, which we have drawn on to build an improved service that is easier to use and access, adheres to new standards, and makes the data more open. In summary, the improvements we have made are: a data hub that provides access to all our Linked Data datasets, with integrated search to enable anyone to easily locate resources of interest; embedded OS OpenSpace maps to show the chosen geographic location; and separate datasets, which will allow you to narrow down your searches.

Web 3.0 - the evolution towards the semantic web: the intelligent internet. List of Thousands of Public Data Sources. A website called BigML (for Big Machine Learning) has compiled a great list of freely available public data sources. The article begins: "We love data, big and small, and we are always on the lookout for interesting datasets. Over the last two years, the BigML team has compiled a long list of sources of data that anyone can use. It's a great list for browsing, importing into our platform, creating new models and just exploring what can be done with different sets of data. In this post, we are sharing this list with you. Why? Well, searching for great datasets can be a time-consuming task." The introduction continues, "Some data sources are great for complementing your own data." Languages for communication and exchange of semantically enriched information between agents.

Languages for communication and exchange of semantically enriched information between agents. This document contains many links. Most links point directly to the document cited. However, for the articles and documentation detailed in the "short bibliography", the links are given in square brackets and point to the corresponding entry in the bibliography. That bibliography entry in turn gives access to the document itself.

In this overview, we seek to give a survey of agent communication languages. Agents are software components characterized by their autonomy, their adaptability and their cooperative nature. For some years now, there have been efforts to enable computer programs to exchange information and knowledge [IEEE99]. The following techniques had already been used, independently of the notion of "agent": RPC (Remote Procedure Call), which calls procedures remotely across a network. Introduction to: Triplestores. Triplestores are Database Management Systems (DBMS) for data modeled using RDF. Unlike Relational Database Management Systems (RDBMS), which store data in relations (or tables) and are queried using SQL, triplestores store RDF triples and are queried using SPARQL. A key feature of many triplestores is the ability to do inference. It is important to note that a DBMS typically offers the capacity to deal with concurrency, security, logging, recovery, and updates, in addition to loading and storing data.
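As a hedged sketch of that load, query and infer pattern: rdflib's in-memory Graph is not a full triplestore, and the owlrl package is only one way to get RDFS inference, but the shape of the workflow is the same. All data and example.org URIs below are invented.

```python
# Load a few RDF triples, query them with SPARQL, then materialize RDFS entailments.
# Assumes: pip install rdflib owlrl. Data and example.org URIs are invented for illustration.
from rdflib import Graph
from owlrl import DeductiveClosure, RDFS_Semantics

data = """
@prefix ex:   <http://example.org/ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Dog rdfs:subClassOf ex:Animal .
ex:rex a ex:Dog .
"""

g = Graph()
g.parse(data=data, format="turtle")   # the "load" step

query = """
PREFIX ex: <http://example.org/ns#>
SELECT ?animal WHERE { ?animal a ex:Animal . }
"""

print(list(g.query(query)))                  # empty: nothing is explicitly typed ex:Animal

DeductiveClosure(RDFS_Semantics).expand(g)   # materialize RDFS entailments in place

print(list(g.query(query)))                  # now includes ex:rex, inferred via rdfs:subClassOf
```

A production triplestore would do the same loading and querying over HTTP via the SPARQL protocol rather than in process, and would typically apply such inference rules natively.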

Not all triplestores offer all of these capabilities (yet). Triplestore Implementations. Triplestores can be broadly classified into three categories: native triplestores, RDBMS-backed triplestores and NoSQL triplestores. Native triplestores are implemented from scratch and exploit the RDF data model to store and access RDF data efficiently. RDBMS-backed triplestores are built by adding an RDF-specific layer to an existing RDBMS. Triplestores and Inferencing. Summary. Joinup. Blog: the thesaurus in the age of the semantic web. The ISO 25964 standard, devoted to thesaurus construction, is due to appear in June 2011. It will be followed in 2012 by a second part focused on questions of interoperability with other controlled vocabularies (thesauri, but also taxonomies, classification schemes, records-management reference schemes, ontologies, etc.). It replaces the Afnor and ISO standards published between 1985 and 1988, as well as the more recent British BSI 8723 and American ANSI/NISO Z39.19 standards (2004-2008).

What is a thesaurus? A thesaurus is a tool used to facilitate the indexing and retrieval of information, whatever the type of resource. Uses of a thesaurus. A thesaurus can notably be used for: Thesaurus and the semantic web. The new version of the standard is firmly part of the semantic era, with: The standard clearly positions the role of the thesaurus. A few illustrations. Hierarchical tree: An artist: Linked Data on the Web Workshop at WWW 2012. This year saw the fifth edition of the Linked Data on the Web Workshop, co-located with the World Wide Web Conference taking place in Lyon, France. At this workshop, seven issues caught my attention: 1) Media: Yunja Li presented on Synote: Weaving Media Fragments and Linked Data.

This is interesting for those who want to link not only to an entire video but to a part of a video at a specific time interval, and also to add metadata about that fragment. 2) NLP to Linked Data: how can we relate the results of different named-entity extraction tools to Linked Data? Giuseppe Rizzo introduced their project, NERD, which works in this area. 3) Provenance: data cannot live alone; it needs provenance. We need to know where data come from, what level of trust they carry, in what context, etc. 4) Federated Queries: a key topic in data integration is the ability to federate queries. 5) Schema Matching: another key topic in data integration is matching heterogeneous schemas.
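On the federated-query point, here is a hedged sketch using the SPARQLWrapper client library: the query asks one endpoint to fetch extra bindings from a second endpoint via the SPARQL 1.1 SERVICE keyword. DBpedia and Wikidata are used only as familiar public endpoints; whether a given public endpoint actually permits outgoing federation depends on its configuration.

```python
# Federated SPARQL sketch: the outer query is evaluated by one endpoint, which fetches
# bindings from a second endpoint through SERVICE. Assumes: pip install sparqlwrapper.
# The endpoints are examples only; federation support varies by server configuration.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label
WHERE {
  SERVICE <https://query.wikidata.org/sparql> {
    <http://www.wikidata.org/entity/Q90> rdfs:label ?label .
    FILTER (lang(?label) = "en")
  }
}
LIMIT 1
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["label"]["value"])
```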

Data management in the age of the Web. Semantic Models for CDISC. Kerstin Forsberg has written an article discussing a presentation she and Frederik Malfait gave on the use of semantic models for CDISC-based standards and metadata management, pointing to case studies at AstraZeneca and Roche. She writes, "In AstraZeneca we have a new program called Integrative Informatics (i2) establishing the components required to let a linked data cloud grow across R&D. A key component is the URI policy for how to make, for example, a Clinical Study linkable by giving it a URI, that is, a Uniform Resource Identifier, e.g. an identifier for the clinical study with the study code D5890C00003 that should be persistent and not dependent on any system. In the same way we will give guidance on how to use URIs to make other key entities such as Investigator and Lab linkable."
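To illustrate the kind of URI policy described above (the excerpt does not show the real AstraZeneca URIs, so the base namespace below is invented), minting a persistent, system-independent identifier from a study code could be sketched like this:

```python
# Sketch of minting a persistent, system-independent URI for a clinical study from its study code.
# The http://example.org/clinical-study/ base and the ex: vocabulary are invented; a real policy
# would fix its own base IRI and vocabulary.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

STUDY = Namespace("http://example.org/clinical-study/")
EX = Namespace("http://example.org/ns#")

def study_uri(study_code: str):
    """Mint the URI for a study purely from its code, never from a source system's internal ID."""
    return STUDY[study_code]

g = Graph()
study = study_uri("D5890C00003")
g.add((study, RDF.type, EX.ClinicalStudy))
g.add((study, RDFS.label, Literal("Clinical study D5890C00003")))

print(study)
print(g.serialize(format="turtle"))
```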

She continues, "Frederik described the schema, content and architecture of Roche Biomedical MDR." NoSQL. From Wikipedia, the free encyclopedia. In computing, NoSQL refers to a family of database management systems (DBMS) that depart from the classical paradigm of relational databases. The most popular expansion of the acronym is "Not only SQL", even though this interpretation can be debated[1]. The exact definition of the NoSQL family of DBMS remains a subject of debate. The term refers as much to technical characteristics as to a historical generation of DBMS that emerged in the late 2000s and early 2010s[2]. According to Pramod J. Sadalage and Martin Fowler, the main reason for the emergence and adoption of NoSQL DBMS is the development of server clusters and the need for a database paradigm suited to this model of hardware infrastructure[3].

Historical background. The historical dominance of relational DBMS. W3C News Archive: 2012. High Resolution Time and Navigation Timing are W3C Recommendations. 17 December 2012. The Web Performance Working Group has published two W3C Recommendations today. Navigation Timing: this specification defines an interface for web applications to access timing information related to navigation and elements. High Resolution Time.

Learn more about the Rich Web Client Activity. HTML5 Definition Complete, W3C Moves to Interoperability Testing and Performance. W3C today published the complete definition of the HTML5 and Canvas 2D specifications. To reduce browser fragmentation and extend implementations to the full range of tools that consume and produce HTML, W3C now embarks on the stage of W3C standardization devoted to interoperability and testing.

The HTML Working Group also published first drafts of HTML 5.1, HTML Canvas 2D Context, Level 2, and the main element, providing an early view of the next round of standardization. Guidance on Applying WCAG 2.0 to Non-Web ICT: Updated Draft Published. What is Linked Data? Level: introductory. In the early 1990s a new way of using the internet to link documents together began to emerge. It was called the World Wide Web. What the Web did that was fundamentally new was that it enabled people to publish documents on the internet and link them in such a way that you could navigate from one document to another. Part of Sir Tim Berners-Lee's original vision of the Web was that it should also be used to publish, share and link data.

This aspect of Sir Tim's original vision has gained a lot of momentum over the last few years and has seen the emergence of the Linked Data Web. The Linked Data Web is not just about connecting datasets, but about linking information at the level of a single statement or fact. RDF is a standard from the World Wide Web Consortium (W3C), and it provides a very simple way of encoding data that is based around making a series of statements about resources. Using URIs, the triple "John is based near Southampton" would look something like the example sketched below. Introduction to the Semantic Web. Semantic Web technologies. SemWeb. Semantic Web. RDF Data & Metadata.
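The excerpt above breaks off before showing the triple itself. As a hedged sketch rather than the original article's example, "John is based near Southampton" could be written as a single RDF triple using invented example.org URIs for John and Southampton and the FOAF based_near property:

```python
# A single RDF triple: subject, predicate, object, each identified by a URI.
# The URIs for John and Southampton are invented; foaf:based_near is a real FOAF property.
from rdflib import Graph, Namespace, URIRef

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

john = URIRef("http://example.org/people/john")
southampton = URIRef("http://example.org/places/southampton")

g = Graph()
g.bind("foaf", FOAF)
g.add((john, FOAF.based_near, southampton))

print(g.serialize(format="turtle"))
```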