
Applications


Twine - Organize, Share, Discover Information Around Your Interests. Semantics Incorporated: Twine in Freefall? Semantics Incorporated: Twine Confirms Traffic Drop on ReadWrite. Twine: The First Mainstream Semantic Web App?

On Friday Radar Networks is announcing a new Semantic Web application called Twine. Founder Nova Spivack showed me a demo today of the new app, which he described as a "knowledge networking" application.

It has aspects of social networking, wikis, blogging and knowledge management systems, but its defining feature is that it's built with Semantic Web technologies. Spivack told me that Twine aims to bring a usable and scalable interface to the long-promised dream of the Semantic Web. He went as far as to claim that Twine will be "the first mainstream Semantic Web application" - and it's certainly fair to say that we've heard lots of theory about the Semantic Web ever since Tim Berners-Lee defined it, yet there have been very few large-scale success stories (if any).

Will Twine finally be the Semantic Web app that breaks through? Let's find out more. The aim of Twine is to enable people to share knowledge and information.

Twine is an online social web service for information storage, authoring and discovery. Created by Radar Networks, the service was announced on October 19, 2007 and opened to the public on October 21, 2008.[1] On March 11, 2010, Radar Networks was acquired by Evri Inc. along with Twine.com,[2] and since May 14, twine.com has redirected to evri.com.

Twine combines features of forums, wikis, online databases and newsgroups[3] and employs intelligent software to automatically mine and store data relationships[4] expressed as RDF statements. Twine is a social network: its users can add contacts, send private messages and share information. Users can collaborate on collecting data through private or public twines: data collections focused on a certain topic, such as politics.[8] Data can be imported into Twine by uploading files, writing text with a WYSIWYG editor, or using a bookmarking tool for web pages.
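To make "data relationships expressed as RDF statements" concrete, here is a minimal sketch in Python's rdflib of the kind of triples such a service might mine and store. The namespace and property names are illustrative assumptions of mine, not Twine's actual schema.

```python
# Build a few RDF statements about a bookmarked item, the way a
# "knowledge networking" service might store mined relationships.
# The ex: namespace and its properties are hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, FOAF

EX = Namespace("http://example.org/twine/")  # hypothetical namespace

g = Graph()
bookmark = URIRef(EX["bookmark/42"])
g.add((bookmark, RDF.type, EX.Bookmark))            # what kind of thing it is
g.add((bookmark, EX.topic, Literal("politics")))    # topic of the public twine
g.add((bookmark, FOAF.maker, URIRef(EX["user/alice"])))  # who added it

print(g.serialize(format="turtle"))  # the triples, as Turtle
```

Each `g.add(...)` call records one subject-predicate-object statement, which is all an RDF "data relationship" is at bottom.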

What is Web 3.0? It's Web 2.0 with a brain. This week, we sat down with Nova Spivack, the co-founder and CEO of Radar Networks. The company launched its first product, Twine, on Friday. Twine is a tool that intelligently collects and organizes information, such as documents or web pages, for professionals. It may represent the next generation of web applications, which some people have dubbed Web 3.0. This is an edited transcript of our interview, which contains a more theoretical discussion of Spivack's view of the internet's future:

VB: We're hearing all about the semantic web and Web 3.0 now.

I think we're on the cusp of it, but the main part of Web 3.0 will be this coming decade.

VB: So you don't think we're already moving into the next stage of the internet? People are throwing the term semantic around because it's in vogue.

VB: So how should investors find their way forward? Right now, the semantic web is a sector. Today, applications can render data because they have the browser.

Web 2.0 with a brain: who am I? As I promised, I will write about Twine as regularly as possible, and that is no easy task! Radar Networks managed to keep the mystery around their product for many years; they are not the type to communicate at every turn. I believe they are rather busy improving their product. What's more, I am astounded by the general lack of interest in what could be the first Web 3.0 application.

Few comments in the English-language blogosphere, and none at all on the French scene. I thought TechCrunch was interested in the semantic web after their post on microformats? Following Twine's launch, I could not even find an interview with its CEO, Nova Spivack. Ah, actually here is one, and not just any one! In response to a quick question on the evolution of the web: the semantic web is only one piece of what is called Web 3.0. Nova believes that Web 3.0 should begin to emerge clearly in 2010.

Swirrl: Online Database Software. Swirrl: Newly Launched Semantic Web Wiki.

Swirrl is a wiki-like application that was built using Semantic Web technologies and launched as a beta last week. We heard about it in the comments to our post about the lack of commercial RDF applications on the Web.

As with most Semantic Web apps, it's a little difficult to describe what Swirrl is. On its homepage Swirrl is said to be "like a wiki, but better." The further explanation is that it's a web application that "allows your team to store, share, edit and analyze information." Basically, it's a data collaboration app. This hybrid wiki/office functionality is reminiscent of JotSpot (which was acquired by Google in October 2006 and eventually morphed into Google Sites) and Dabble DB (a similarly hard-to-describe amalgam of wiki, spreadsheet and database). Swirrl is focused on business use, rather than consumer use.

Company rep Bill Roberts explained the purpose of Swirrl, and went on to outline how Swirrl uses RDF to achieve this type of "middle ground" business collaboration.

Open Calais: Reuters brings us closer to the Semantic Web. Reuters Wants The World To Be Tagged.

As Richard MacManus recently predicted, in 2008 we'll witness the rise of semantic web services. From the native support for microformats in Firefox 3, to the New York Times' use of rich header metadata, to this week's release of the Social Graph API by Google, semantics are starting to slip onto the web. The impact is being felt because large companies are really starting to focus on structured information. In the same vein, last week Reuters, an international business and financial news giant, launched an API called Open Calais. The API performs semantic markup on unstructured HTML documents, recognizing people, places, companies, and events.

This technology is the next generation of the ClearForest offering, which Reuters acquired last year. The idea behind Calais is simple: identify the interesting bits in documents and turn them into metadata. For any document submitted to Calais, entities are identified, extracted and annotated.
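As a rough illustration of that workflow, here is a hedged sketch of submitting text to a Calais-style extraction service over HTTP. The endpoint URL, header names, and key below are placeholders of my own, not the documented Open Calais interface.

```python
# Submit raw text to a hypothetical entity-extraction endpoint.
# ENDPOINT and the x-api-key header are illustrative assumptions,
# not the real Open Calais API.
import requests

API_KEY = "YOUR-API-KEY"                      # hypothetical credential
ENDPOINT = "https://api.example.com/extract"  # placeholder endpoint

text = "Reuters, an international news giant, acquired ClearForest last year."

response = requests.post(
    ENDPOINT,
    headers={"x-api-key": API_KEY, "Content-Type": "text/plain"},
    data=text.encode("utf-8"),
)
response.raise_for_status()
# A Calais-style service would return the text annotated with RDF metadata.
print(response.text)
```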

Having fun with Reuters Calais.

Calais is the Reuters ClearForest product that, according to their homepage, "automatically annotates your content with rich semantic metadata". Give it text, and it returns the text marked up with RDF that identifies entities and various semantic information about those entities. I'm looking forward to Reuters ClearForest CEO Barak Pridor's talk, Enabling Semantic Applications Through Calais, at the Linked Data Planet conference (his Talking with Talis podcast interview is definitely worth listening to), and I thought I'd get more out of his talk if I played with the software a bit first. It was easy and straightforward, but before I describe my first experiment, I wanted to mention two important points. For work-related projects, I've been researching machine-aided indexing tools and similar software on and off, and they usually look complex and expensive. GATE and UIMA look tantalizing, but these are not tools; they're frameworks into which you can plug such tools.
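To show what "text marked up with RDF" can look like once it comes back, here is a minimal sketch that parses a made-up Turtle annotation and lists the entities it describes. The ex: vocabulary is an assumption for illustration; Calais's real output schema differs.

```python
# Parse an invented RDF annotation and list the entities it describes.
# The ex: vocabulary is made up for illustration, not Calais's schema.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/annotations/")  # hypothetical vocabulary

sample = """
@prefix ex: <http://example.org/annotations/> .
ex:entity1 a ex:Company ; ex:name "Reuters" .
ex:entity2 a ex:Person  ; ex:name "Barak Pridor" .
"""

g = Graph()
g.parse(data=sample, format="turtle")

# Print each typed entity with its name.
for entity, kind in g.subject_objects(RDF.type):
    print(kind.rsplit("/", 1)[-1], "->", g.value(entity, EX.name))
```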

True Knowledge. Talis. The online identity experts. DataMasher. The Landscape of Music « Visualizing Music. Rhiza Labs | Software for exploring, visualizing, and sharing th. Freebase: the semantic, embeddable Wikipedia | Freebase, wik. Did Google Just Expose Semantic Data in Search Results?

In what appears to be a new addition to many Google search results pages, queries about birth dates, family connections and other factual information are now being answered with explicitly semantic, structured information.

Who is Bill Clinton's wife? What's the capital city of Oregon? What is Britney Spears' mother's name? The answers to these and other factual questions are now displayed above the natural search results in Google, and the information is structured in the traditional subject-predicate-object format, the "triples" of semantic web parlance. The answers aren't structured that way on the web pages they come from; Google appears to be parsing the semantic structure out of semi-structured or unstructured data. That's something Microsoft paid over $100 million to try to do this summer when it acquired Powerset. It appears that the feature isn't being bucket-tested, either; it is globally available.
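To ground the "triples" terminology, here is a small rdflib sketch showing how a factual question becomes a triple pattern with an unknown in it. The data and vocabulary are toy assumptions, not Google's internal representation.

```python
# Answer a factual question from stored triples: the question becomes a
# subject-predicate-object pattern with an unknown object. Toy data only.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:Bill_Clinton ex:spouse ex:Hillary_Clinton .
ex:Oregon ex:capital ex:Salem .
""", format="turtle")

# "Who is Bill Clinton's wife?" as a triple pattern:
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?who WHERE { ex:Bill_Clinton ex:spouse ?who }
""")
for row in results:
    print(row.who)  # -> http://example.org/Hillary_Clinton
```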

Is Google getting into the semantic web? Google and RDFa: what and why. Surprise: to make more money!

After the initial burst of discussion about Google putting their toe into the standardized metadata water, I started wondering about the corner of the pool they had chosen. They're not ready to start parsing any old RDFa; they'll be looking for RDFa that uses the vocabulary they somewhat hastily defined for the purpose. Why does the vocabulary define the properties that it defines? The People properties sound basic enough, although, as all the semweb geeks have already tweeted, Google should have leveraged the extensive existing work done on the FOAF vocabulary for that. The other three categories of properties they define are Reviews, Products, and Businesses and organizations. Of all the knowledge domains to represent, why these? In the words of Drupal project lead Dries Buytaert, "structured data is the new search engine optimization". It will be interesting to see how the big, hustling SEO world adapts to this.
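For a sense of what such markup looks like, here is a hedged sketch: an RDFa-annotated HTML fragment and the triples it encodes, built directly with rdflib. The class and property names echo Google's data-vocabulary.org announcement, but treat the exact details as illustrative assumptions.

```python
# An RDFa-style HTML fragment (shown as a string) and the triples it encodes.
# Class/property names echo Google's data-vocabulary.org vocabulary; treat
# the exact names and the subject URI as illustrative assumptions.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

V = Namespace("http://rdf.data-vocabulary.org/#")

rdfa_snippet = """
<div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Review">
  <span property="v:itemreviewed">Blast 'Em Up</span>
  <span property="v:rating">4.5</span>
</div>
"""

# The same statements expressed directly as triples:
g = Graph()
review = URIRef("http://example.org/review/1")  # hypothetical subject
g.add((review, RDF.type, V.Review))
g.add((review, V.itemreviewed, Literal("Blast 'Em Up")))
g.add((review, V.rating, Literal("4.5")))
print(g.serialize(format="turtle"))
```

A crawler that understands the vocabulary can lift those triples straight out of the page, which is exactly what makes structured data interesting to the SEO world.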

Google's semantic bug! Wolfram|Alpha. WolframAlpha.

Wolfram|Alpha (also written WolframAlpha when "Wolfram" and "Alpha" are set in two distinct colors) is a natural-language computational tool developed by the international company Wolfram Research. It is an internet service that directly answers factual questions typed in English by computing the answer from a database, instead of providing a list of documents or web pages that might contain the answer.

Its launch was announced in March 2009 by the British physicist and mathematician Stephen Wolfram, and it went live on May 16, 2009 at 3:00 a.m. Wolfram|Alpha contains roughly 10 billion pieces of information, more than 50,000 types of algorithms and models, and linguistic capabilities covering more than 1,000 domains.[1] Usage: users type in a question or a computation request.
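As a sketch of that interaction done programmatically, here is a query against Wolfram|Alpha's v2 REST API, which returns computed answers as XML "pods". The AppID is a placeholder you would obtain from Wolfram Research, and the exact response layout should be checked against their documentation.

```python
# Query Wolfram|Alpha's v2 REST endpoint and print the answer pods.
# APP_ID is a placeholder credential; the response structure sketched
# here (pod/plaintext elements) should be verified against the docs.
import requests
import xml.etree.ElementTree as ET

APP_ID = "YOUR-APP-ID"  # placeholder credential

resp = requests.get(
    "https://api.wolframalpha.com/v2/query",
    params={"input": "capital of Oregon", "appid": APP_ID},
)
resp.raise_for_status()

# The engine computes an answer and returns it as XML "pods"
# rather than a list of matching documents.
root = ET.fromstring(resp.text)
for pod in root.findall(".//pod"):
    text = pod.find(".//plaintext")
    print(pod.get("title"), "->", text.text if text is not None else "")
```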

"Technologies du Web Sémantique pour l'Entreprise 2.0": Thèse et. MOAT.