Wiki.dbpedia.org : About

Linked Data - Connect Distributed Data across the Web | The Tabulator. Tim Berners-Lee coded up the original version at odd times in November and December 2005 (see "Links on the Semantic Web" from Dec 2005). Participants in the Undergraduate Research Opportunity Program (UROP) over June-August 2006 are listed below. Yushin "Joyce" Chen wrote the calendar views and incorporated the Simile timeline. Lydia Chilton is working on statistical analysis, charts, etc. Ruth Dhanaraj worked on the Tabulator in January 2006, adding the asynchronous fetching of documents during queries. Adam Lerer works on the back end: the query system and generic features around the query UI. Jim Hollenbach is responsible for the map view. David Sheets wrote the RDF parser and does much of the architecture and release engineering. Thanks also to Dan Connolly and Ralph Swick for co-supervising students and for ideas, support, testing, and encouragement. When you use these techniques on the server, the Tabulator works better, pulling in data from the web as you go.

Startup America: A Campaign To Celebrate, Inspire And Accelerate Entrepreneurship Note from the editor: This is a guest post from Aneesh Chopra, United States Chief Technology Officer. During last week’s State of the Union address, President Obama challenged the Nation to out-educate, out-innovate, and out-build our competition to win the future. A critical ingredient in this endeavor is the creative spirit of the American entrepreneur, which featured prominently in the President’s Strategy for American Innovation, a framework for long-term economic growth and sustainable job creation. Today, President Obama celebrated the launch of Startup America, a national public/private campaign to celebrate, inspire, and accelerate high-growth entrepreneurship across all corners of the country, and the formation of the Startup America Partnership to catalyze private support for entrepreneurial ecosystems. To kick-start the campaign, the Obama Administration announced 27 public and private commitments organized across five key goals. Stay tuned.

Text Analysis Conference (TAC) The Text Analysis Conference (TAC) is a series of evaluation workshops organized to encourage research in Natural Language Processing (NLP) and related applications by providing a large test collection, common evaluation procedures, and a forum for organizations to share their results. TAC comprises sets of tasks known as "tracks," each of which focuses on a particular subproblem of NLP. TAC tracks focus on end-user tasks, but also include component evaluations situated within the context of end-user tasks. TAC Workshop: November 17-18, 2014 (Gaithersburg, MD, USA). TAC currently hosts evaluations and workshops in two areas of research: Knowledge Base Population (KBP), whose goal is to promote research in automated systems that discover information about named entities in a large corpus and incorporate this information into a knowledge base; and Summarization, where the TAC Summarization track will focus on summarization of scientific literature.

Alex Faaborg - » Microformats - Part 0: Introduction Have you been overhearing people talk about microformats and thought to yourself, "what are those?" In this post I provide a quick introduction and discuss the various ways that microformats are changing the Web. What are microformats? Microformats can be explained in a number of ways, but the easiest way to explain them is to just show an example. Here is my contact information in HTML: <div class="vcard"> <span class="fn">Alex Faaborg</span> <div class="org">Mozilla</div> <div class="adr"> <div class="street-address">1981 Landings Drive, Building K</div> <span class="locality">Mountain View</span>, <span class="region">CA</span>, <span class="postal-code">94043</span> <span class="country-name">United States</span> </div> <div class="tel">617-899-5064</div> </div> But this isn’t just normal HTML, it is semantic HTML: the additional semantic information carried in the class attributes is an example of the hCard microformat. Why are microformats important? What sites are currently using microformats?
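Because the semantics live in ordinary HTML class attributes, an hCard like the one above can be pulled out of a page with nothing more than an HTML parser. A minimal sketch using Python's standard library (not a full microformats parser; the `HCardExtractor` class and the abbreviated sample markup are illustrative, not from the original post):

```python
from html.parser import HTMLParser

class HCardExtractor(HTMLParser):
    """Collects the text content of every element that carries a class."""

    def __init__(self):
        super().__init__()
        self.stack = []   # class name (or None) of each currently open element
        self.fields = {}  # class name -> list of text fragments found inside

    def handle_starttag(self, tag, attrs):
        self.stack.append(dict(attrs).get("class"))

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        # Attribute the text to every classed ancestor element.
        for cls in self.stack:
            if cls:
                self.fields.setdefault(cls, []).append(text)

html = """<div class="vcard">
  <span class="fn">Alex Faaborg</span>
  <div class="org">Mozilla</div>
  <div class="tel">617-899-5064</div>
</div>"""

extractor = HCardExtractor()
extractor.feed(html)
print(extractor.fields["fn"])   # -> ['Alex Faaborg']
print(extractor.fields["org"])  # -> ['Mozilla']
```

A real consumer (a browser extension, a search-engine crawler) would additionally validate the vocabulary against the hCard specification, but the mechanism is the same: plain class names carrying machine-readable meaning.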

Documents numériques et métadonnées (Digital Documents and Metadata) At the BnF, digitization has been regarded since its beginnings (the early 1990s) as a full-fledged technique for reproducing and preserving documents, and the choices of format, resolution, and image capture reflect this principle. Image-mode digitization remains a priority because it offers users a faithful reproduction of the original document. The structure and organization of the digital document are therefore handled according to precise methods, ensuring both its delivery and its preservation. A digital document is a series of otherwise unrelated files, described by a unique identifier that encompasses a set of metadata. A metadata record is a structured set of information describing a resource of any kind. The refNum XML schema describes the descriptive and technical data associated with the digitized document; the schema comprises three main parts. METS = Metadata Encoding and Transmission Standard.
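The refNum schema itself is not reproduced in the excerpt, but the role METS plays — binding a set of unrelated image files into one identified, structured document — can be sketched as a minimal METS wrapper (element names come from the METS standard; the identifier, file names, and metadata values are purely illustrative):

```xml
<mets xmlns="http://www.loc.gov/METS/"
      xmlns:xlink="http://www.w3.org/1999/xlink"
      OBJID="ark:/00000/example">
  <!-- Descriptive metadata about the digitized document -->
  <dmdSec ID="DMD1">
    <mdWrap MDTYPE="DC">
      <xmlData><!-- e.g. Dublin Core title, creator, date --></xmlData>
    </mdWrap>
  </dmdSec>
  <!-- The otherwise unrelated image files making up the document -->
  <fileSec>
    <fileGrp USE="master">
      <file ID="F1" MIMETYPE="image/tiff">
        <FLocat LOCTYPE="URL" xlink:href="page-001.tif"/>
      </file>
    </fileGrp>
  </fileSec>
  <!-- Logical structure linking the files into a readable whole -->
  <structMap>
    <div TYPE="monograph">
      <div TYPE="page" ORDER="1"><fptr FILEID="F1"/></div>
    </div>
  </structMap>
</mets>
```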

Tools This page gives an overview of software tools related to the Semantic Web or to semantic technologies in general. Due to the large number of tools being created in the community, this site is always somewhat outdated; contributions and updates are welcome. See also: Tool Chains. Adding your own tool is as easy as creating a page. Do not forget to use a suitable category to classify the tool, otherwise it will not appear below. If your tool is an OWL 2 implementation or a RIF implementation not yet listed here, please consider adding it. Current tools on semanticweb.org The following tools are currently recorded in this wiki: RDF2Go (Version 4.8.3, 4 June 2013), Bigdata (Version 1.2.3, 31 May 2013), Semantic Measures Library (Version 0.0.5, 4 April 2013), HermiT (Version 1.3.7, 25 March 2013), Fluent Editor (Version 2.2.2, 20 March 2013). The following is a list of all tools currently known (use the icons in the table header to sort by any particular column).

63 EdTech Resources You May Have Missed–Treasure Chest, Feb. 6, 2011 Here is this week’s edition of Treasure Chest: 63 EdTech Resources You May Have Missed. I know, that’s a lot! What’s funny, though, is that I thought this week would bring my fewest resources; due to time constraints, I didn’t think I was doing a very good job of curating. I would also like to thank Larry Ferlazzo for including this blog in his “The Best Blogs…” category. “Tech The Plunge” Is A Blog Worth Reading | Larry Ferlazzo’s Websites of the Day…–Tech The Plunge is an excellent resource-sharing blog written by Jeff Thomas. On with the show! Featured: Tom Barrett’s Interesting Ways Series—All in One Location!!! Tools, How-To, iPad, iPod, etc. My Favorite iPad Apps–If you are curious to know what apps I have installed on my iPad, check these interactive screenshots for a complete list of my favorite iPad apps. smartr for iPhone, iPod touch, and iPad on the iTunes App Store–Smartr is the fastest way to get your news on Twitter. Miscellaneous, Videos, Teach Different.

Automatic Content Extraction Automatic Content Extraction (ACE) is a program for developing advanced information extraction technologies. Given a text in natural language, the ACE challenge is to detect: entities mentioned in the text, such as persons, organizations, locations, facilities, weapons, vehicles, and geo-political entities; relations between entities, such as "person A is the manager of company B" (relation types include role, part, located, near, and social); and events mentioned in the text, such as interaction, movement, transfer, creation, and destruction. The program began with a pilot study in 1999. While the ACE program is directed toward extraction of information from audio and image sources in addition to pure text, the research effort is restricted to information extraction from text. The program covers English, Arabic, and Chinese texts. In its general objective, the ACE program is motivated by and addresses the same issues as the MUC program that preceded it.

Web Ontology Language An article from Wikipédia, the free encyclopedia. The OWL language is based on research in the field of description logic. An extension of RDF: in practice, OWL is designed as an extension of the Resource Description Framework (RDF) and RDF Schema (RDFS); OWL is intended for describing classes through the characteristics of their instances and through types of properties. RDF makes it possible, for example, to state that <Jean> is the father of <Paul>, through the individuals <Jean> and <Paul> and the relation "is the father of". The three levels of OWL: thanks to its formal semantics, grounded in a well-studied logical foundation, OWL makes it possible to define more complex associations between resources, as well as the properties of their respective classes. OWL-Lite is the simplest version of the OWL language.
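The father-of example from the excerpt can be sketched in Turtle syntax; the ex: namespace, the property name estPereDe, and the class Personne are illustrative, not from the original article:

```turtle
@prefix ex:   <http://example.org/> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .

# Plain RDF: the individual Jean is the father of the individual Paul.
ex:Jean ex:estPereDe ex:Paul .

# OWL adds vocabulary for describing the property itself:
# here, declaring it an object property relating persons to persons.
ex:estPereDe rdf:type    owl:ObjectProperty ;
             rdfs:domain ex:Personne ;
             rdfs:range  ex:Personne .
```

The first triple is expressible in RDF alone; the declarations that follow show the kind of class- and property-level description OWL layers on top of RDF and RDFS.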

Indexing resources: metadata, norms, and standards (éduscol: Enseigner avec le numérique). Note: these archives are no longer maintained, and some links may be broken. Goals of the dossier: it clarifies the concepts behind metadata, norms, and standards in general, and shows why they matter. Dossier produced by the TICE documentation centre (originally in 2002; last updated online 16/04/2010). Contents: Introduction; What is it about, and to what end?; Who are the actors?; Why and how to standardize?; main actors; LOMFR (Learning Object Metadata); VocabNomen; open archives (OAI); CDMFR (Course Description Metadata); useful sites; glossary of acronyms and definitions; bibliography; SDTICE seminar 2007.
