
BayesiaLab 5.1: Analytics, Data Mining, Modeling and Simulation
BayesiaLab raises the benchmark in the field of analytics and data mining. The improvements range from small practical features to entirely new visualization techniques that can transform your understanding of complex problems. Bayesia starts off 2013 with countless innovations in the newly released BayesiaLab 5.1. Here is a small selection of the features introduced in version 5.1:
- A comprehensive Mapping Tool offering an entirely new way to visualize and analyze networks
- Occurrence Analysis for diagnosing sparse conditional probability tables
- Binary Clustering and Multiple Clustering to create latent variables with logical expressions
- Enhanced Resource Allocation Optimization and Target Optimization
- A Design of Experiments tool for generating questionnaires
- A new Radial Layout, plus the world's first Distance Mapping layout based on Mutual Information
- Box Plots for analyzing the distributions of numerical variables

Solving Today's Biggest Problems Requires an Entirely New Approach to Data
- Old Data: inaccessible to most; requires specialized skills in math and computer science; delivers dashboards and charts; serves operational and business intelligence.
- New Data: available to all users; open to any business user, scientist, researcher, or domain expert; topological networks show hidden insights; delivers breakthrough outcomes.

The Missing Tools of Open Data
Reflections on open data tooling, begun while preparing my talk for the event L'OpenData et nous, et nous, et nous ?, focused more on the developer's point of view and on what would be worth building at the technical level. The "GoogHub" of data: decentralization requires a centralized index. Whether it is Google for the Web of documents or GitHub for DVCS repositories, there has to be a place where one can search across ever more numerous sources. A service is needed to index the Web of data, report on the versioning and freshness of datasets, and perhaps even act as a proxy for part of that data. Ideally, in a Web of linked data, such an index would be less useful, since it would suffice to follow the links; but the fact is that what we have today is open data that is not very linked. Two further missing pieces: frameworks for exploiting the data, and a monetization platform.
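To make the freshness-tracking idea concrete, here is a minimal sketch in R, assuming a hypothetical two-entry catalogue: it asks each data source for its HTTP Last-Modified header without downloading the file, the kind of lightweight check such an index service might run. The URLs are placeholders, and the httr package is simply one common R choice for HTTP requests.

    # Hedged sketch: freshness check for a hypothetical open-data catalogue.
    library(httr)

    catalogue <- data.frame(
      name = c("regional-budget", "bus-stops"),
      url  = c("https://example.org/data/budget.csv",     # placeholder URL
               "https://example.org/data/bus-stops.csv"), # placeholder URL
      stringsAsFactors = FALSE
    )

    # HEAD request: fetch headers only, never the body.
    freshness <- sapply(catalogue$url, function(u) {
      resp <- tryCatch(HEAD(u, timeout(10)), error = function(e) NULL)
      if (is.null(resp)) return(NA_character_)
      lm <- headers(resp)[["last-modified"]]
      if (is.null(lm)) NA_character_ else lm
    })

    catalogue$last_modified <- unname(freshness)
    print(catalogue)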

Processing Qualitative Research Data With Tinderbox
I wrote a while back that I often use a piece of software for the Mac called Tinderbox to churn through messy, unstructured focus group data and see the meaning and inherent structure in a soup of qualitative data. I was fortunate to be asked to present my method at a Tinderbox Weekend last November by Tinderbox auteur Mark Bernstein. It's a complicated process at the start, but once it's set up correctly you can zip through qualitative research data pretty quickly and develop structure in the process. Mark (and Eastgate's Stacy Mason) has been noodging me to make a screencast of this process, and I've finally gotten around to doing just that.

Open Data & Quality
For quite some time now I have been browsing around and observing what is put online under the name of open data. Of course these are data; of course they are made available; of course there is often a metadata sheet, more or less complete; and there are even portals organizing themselves to catalogue them. In short, these are the ingredients that say these are indeed public data meeting the requirements of a specification. But let us talk about that very specification, because an important part of the problem seems to have been forgotten: the dataset must be intrinsically of good quality, and that quality does not seem to be clearly defined. Today the dataset is better and better defined externally, but its internal quality still decides its usefulness. For example, a file produced by a word processor has little chance of serving any purpose in an automatic processing pipeline, unless an application has already been built just for that one file.
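To make "intrinsic quality" concrete, here is a minimal sketch in R of the mechanical checks a publisher could run on a CSV dataset before release: does it parse, are the column headers meaningful, and how many values are missing. The file name and the 10% missing-value threshold are hypothetical choices, and the checks are deliberately basic.

    # Hedged sketch: basic intrinsic quality checks for a CSV dataset.
    check_dataset <- function(path, max_missing = 0.10) {
      df <- tryCatch(read.csv(path, stringsAsFactors = FALSE),
                     error = function(e) NULL)
      if (is.null(df)) return("FAIL: file does not parse as CSV")

      # Headers should be present, not blank or auto-generated (X, X.1, ...).
      if (any(names(df) == "" | grepl("^X(\\.[0-9]+)?$", names(df))))
        return("WARN: blank or auto-generated column headers")

      # Missing values should stay under an agreed threshold.
      missing_rate <- mean(is.na(df) | df == "", na.rm = TRUE)
      if (missing_rate > max_missing)
        return(sprintf("WARN: %.1f%% of values missing", 100 * missing_rate))

      sprintf("OK: %d rows, %d columns, %.1f%% missing",
              nrow(df), ncol(df), 100 * missing_rate)
    }

    check_dataset("dataset.csv")  # "dataset.csv" is a placeholder path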

The Personal Wiki System
ConnectedText is used in a variety of ways and in many contexts. I am always surprised to hear how other people use it, and the way I use it will probably appear just as surprising to them as their use of the program is to me. This essay is simply my attempt to show how and why I use it for my research; I do not want to suggest that my way is the only, or even the best, way of using it. I am a 60-year-old academic teacher, and I have been using ConnectedText exclusively since August 2005 to keep my research notes and other bits of information. Before that I used Will Duquette's Notebook (from May 2003 to August 2007),[1] Wikit (from the end of 2002 until May 2003),[2] and, between 1985 and the end of 2002, InfoHandler, Ecco, InfoSelect, Packrat, Agenda, and Scraps for DOS, as well as MS Word (in its many incarnations).[3] I also experimented with many other so-called "PIMs," databases, and other programs that promised to be useful for keeping research notes, but never really committed to any of them.

Open data 71: a quality project, but mixed results
Focus, Tuesday 28 August 2012. In June 2011, the Department of Saône-et-Loire decided to open up and share the public data it holds through the "Open data 71" project. What is the origin of data opening in Saône-et-Loire? The release of data in Saône-et-Loire stems first of all from a very strong political will, tied to the commission given by the president of the Conseil général, Arnaud Montebourg, now Minister of Industrial Renewal, who made it an important democratic project. The distinctive feature of the project was to release everything the law allows, without setting any limit on the release of data. The originality of our solution also lies in the concept we put forward: doing open data for everyone, not only for developers or data specialists. Isn't transparency merely formal when openness is not accompanied by training for citizens?

Introduction to R for Data Mining
This on-demand webinar shows how to become immediately productive in R; it covers the point-and-click data mining GUI rattle, command-line data mining, and big-data mining with RevoScaleR. February 14, 2013 webinar, presented by Joseph Rickert, Technical Marketing Manager, Revolution Analytics. The webinar focuses on data mining as the application area and shows how anyone with just a basic knowledge of elementary data mining techniques can become immediately productive in R. Its goals are to:
- provide an orientation to R's data mining resources, and
- show how to use the "point and click" open-source data mining GUI, rattle, to perform the basic data mining functions of exploring and visualizing data, building classification models on training data sets, and using these models to classify new data (a command-line sketch of these steps follows this list).
Data scientists and analysts using other statistical software, as well as students who are new to data mining, should come away with a plan for getting started with R. The webinar replay and presentation are available online.
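As a rough companion to the webinar's outline, the sketch below shows both routes it describes: launching the rattle GUI, and the command-line equivalent of training a classification model and scoring new data. It uses the built-in iris data and the rpart package (which rattle itself draws on); the 70/30 split and the seed are arbitrary illustrative choices.

    # Point-and-click route: install and launch the rattle GUI.
    # install.packages("rattle"); library(rattle); rattle()

    # Command-line route: train a classifier, then classify new data.
    library(rpart)

    set.seed(42)                                   # reproducible split
    idx   <- sample(nrow(iris), 0.7 * nrow(iris))  # 70/30 train/test split
    train <- iris[idx, ]
    test  <- iris[-idx, ]

    fit  <- rpart(Species ~ ., data = train)       # decision tree on training set
    pred <- predict(fit, test, type = "class")     # classify held-out rows

    # Confusion matrix: predicted vs. actual species.
    print(table(predicted = pred, actual = test$Species))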

Desktop Public Edition
Compare the desktop editions of the Lavastorm Analytics Engine. Please ensure that your PC meets the following minimum requirements and that you have administrative privileges on your local machine:
- RAM: 2 GB
- HDD: over 1 GB free
- CPU: dual-core 2 GHz x86 or x64 processor (Intel/AMD)
- O/S: Microsoft Windows XP SP3, Vista, or 7 (32- or 64-bit)
Lavastorm Analytics Library Packs: enhancements for your Lavastorm software. The Lavastorm Analytics Library contains business controls (we call them nodes) with pre-built functions for Analytics, Data Acquisition, Correlation, Aggregation, Transformation, Reporting, Publishing, Logistics, Profiling and Patterns, Metadata and Structure, and Interfaces and Adapters. View all nodes and download the pack now.

Hosain Rahman's beautiful failure
By Alex Konrad with Ryan Bradley
FORTUNE -- The bracelet -- a half-inch wide and rubberized -- represented a decade's work, a thing so small and versatile it could both track its wearer's health and actively work to improve it. In a sense, the UP was meant to become a part of its user's life, a part of his or her story. An example: the UP measures sleep cycles by keeping track of movements throughout the night, then vibrates its wearer awake after he or she has reached just the right amount of deep sleep. It was the culmination of a nearly decade-long collaboration between Hosain Rahman, co-founder and CEO of Jawbone, and Yves Behar, a gadget designer as famous as Apple's Jonathan Ive. Rahman and Behar had married software and product design to build wearable computing devices before -- most successfully with their Bluetooth headsets, whose name had become synonymous with the company -- but never in something quite as ambitious. It was the first week of December 2011.

EndNote Web Guide
University of Pavia, Library System. Tel. 0382.98.6923
Contents: 1.1 What EndNote Web is; 1.2 Creating your own profile; 2.1 Importing citations from WOK; 2.2 Organizing imported citations.
1.1 What EndNote Web is. EndNote Web is a program for managing personal bibliographies, importing up to 10,000 bibliographic citations from search results in databases and online catalogues.
1.2 Creating your own profile. EndNote Web can be accessed at www.myendnoteweb.com, or from the Web of Knowledge (WOK) database through the My EndNote Web button at the top of the page.
2.1 Importing citations from WOK. After running a search on Web of Knowledge (WOK) or Web of Science (WOS), you can select the results of interest and click the Save to EndNote Web button. If you are already signed in to EndNote Web, the citations are sent directly; otherwise the program asks you to authenticate before completing the operation.
2.2 Organizing imported citations.
