French documentation

5 Minute Overview of Pentaho Business Analytics
Mondrian - Interactive Statistical Data Visualization in JAVA
MESI
Many Eyes: Try out the newest version of IBM Many Eyes, with a new site design and layout, visualizations organized by category and industry, a new visualization expertise and thought leadership section (including the Expert Eyes blog and best practices for creating beautiful, effective visualizations), new and innovative visualizations from the visualization experts of IBM Research, and new visualization options.


Data warehouse

Data Warehouse Overview: In computing, a data warehouse (DW, DWH), or an enterprise data warehouse (EDW), is a database used for reporting and data analysis. It integrates data from one or more disparate sources into a central repository. Data warehouses store current and historical data and are used to create trending reports for senior management, such as annual and quarterly comparisons.
Entity–attribute–value model (EAV) is a data model for describing entities where the number of attributes (properties, parameters) that could be used to describe them is potentially vast, but the number that actually applies to a given entity is relatively modest. In mathematics, this structure is known as a sparse matrix. EAV is also known as the object–attribute–value model, vertical database model, and open schema.
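A minimal sketch of the idea in Java (the entity and attribute names are illustrative, not taken from any particular EAV implementation): instead of one wide table with mostly empty columns, each observed attribute becomes an (entity, attribute, value) row, and the sparse "wide" view is reassembled on demand.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class EavSketch {
    // One row of an EAV table: only attributes that actually apply are stored.
    record Triple(String entity, String attribute, String value) {}

    public static void main(String[] args) {
        List<Triple> rows = new ArrayList<>();
        // Patient 1 has two recorded attributes, patient 2 has one;
        // the hundreds of other possible attributes take up no space at all.
        rows.add(new Triple("patient:1", "weight_kg", "72"));
        rows.add(new Triple("patient:1", "blood_type", "A+"));
        rows.add(new Triple("patient:2", "weight_kg", "65"));

        // Reassemble the sparse "wide" view for one entity on demand.
        Map<String, String> patient1 = rows.stream()
                .filter(t -> t.entity().equals("patient:1"))
                .collect(Collectors.toMap(Triple::attribute, Triple::value));
        System.out.println(patient1); // e.g. {blood_type=A+, weight_kg=72} (order not guaranteed)
    }
}
```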
OpenReports: OpenReports is a powerful, flexible, and easy-to-use open source web reporting solution that provides browser-based, parameter-driven, dynamic report generation and flexible report scheduling capabilities. OpenReports supports a variety of open source reporting engines, including JasperReports, JFreeReport, JXLS, and Eclipse BIRT, to cover a wide range of reporting requirements. OpenReports also includes QueryReports and ChartReports, easy-to-create SQL-based reports that do not require a predefined report definition.
JasperReports: JasperReports can be used in Java-enabled applications, including Java EE and web applications, to generate dynamic content. It reads its report definitions from an XML (.jrxml) or compiled .jasper file. JasperReports is part of the Lisog open source stack initiative.
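A minimal sketch of the typical compile-fill-export cycle with the JasperReports engine, assuming the JasperReports jars are on the classpath; the template name hello.jrxml and the TITLE parameter are illustrative, not from the library's documentation.

```java
import java.util.HashMap;
import java.util.Map;

import net.sf.jasperreports.engine.JREmptyDataSource;
import net.sf.jasperreports.engine.JasperCompileManager;
import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.JasperReport;

public class ReportDemo {
    public static void main(String[] args) throws Exception {
        // Compile the XML report definition (.jrxml) into an in-memory .jasper object.
        JasperReport report = JasperCompileManager.compileReport("hello.jrxml");

        // Fill the report with parameters; the data source could also be a JDBC connection.
        Map<String, Object> params = new HashMap<>();
        params.put("TITLE", "Hello from JasperReports"); // illustrative parameter name

        JasperPrint print = JasperFillManager.fillReport(report, params, new JREmptyDataSource());

        // Export the filled report to a PDF file.
        JasperExportManager.exportReportToPdfFile(print, "hello.pdf");
    }
}
```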

Public Data Explorer

Public Data Explorer: Human Development Indicators. Human Development Report 2013, United Nations Development Programme. The data used to calculate the Human Development Index (HDI) and the other composite indicators published in the Human Development Report ... Unemployment in Europe (monthly)
DSPL Tutorial - DSPL: Dataset Publishing Language - Google Code: DSPL stands for Dataset Publishing Language. Datasets described in DSPL can be imported into the Google Public Data Explorer, a tool that allows for rich, visual exploration of the data. Note: to upload data to Google Public Data using the Public Data upload tool, you must have a Google Account. This tutorial provides a step-by-step example of how to prepare a basic DSPL dataset. A DSPL dataset is a bundle that contains an XML file and a set of CSV files.
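The tutorial itself walks through the DSPL metadata XML; as a hedged illustration of the bundle part only, here is a small Java sketch that zips a dataset.xml and its CSV tables together for upload. The file names are placeholders, and dataset.xml is assumed to already follow the DSPL schema described in the tutorial.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class DsplBundle {
    public static void main(String[] args) throws IOException {
        // The bundle is an archive containing the DSPL metadata file
        // plus the CSV tables it references.
        List<Path> files = List.of(
                Path.of("dataset.xml"),     // DSPL metadata (assumed to be valid DSPL)
                Path.of("countries.csv"),   // placeholder data table
                Path.of("unemployment.csv") // placeholder data table
        );
        try (ZipOutputStream zip = new ZipOutputStream(
                Files.newOutputStream(Path.of("my_dataset.zip")))) {
            for (Path file : files) {
                zip.putNextEntry(new ZipEntry(file.getFileName().toString()));
                Files.copy(file, zip);
                zip.closeEntry();
            }
        }
    }
}
```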
Recombinant Data Corp. - Healthcare Data Warehousing. News: Deloitte introduces Revenue Intellect™ to help health care providers harness data to obtain uncommon insights and improve revenue cycle performance (November 13, 2013); Deloitte launches PopulationMiner to deliver health system insights based on a next-generation real-world evidence platform (October 16, 2013); Deloitte acquires fast-growing healthcare data warehousing and analytics firm Recombinant (October 29, 2012).
03. Hello World Example - Pentaho Wiki: Although this is a simple example, it introduces some of the fundamentals of PDI: working with the Spoon tool, transformations, steps and hops, predefined variables, previewing and executing from Spoon, and executing transformations from a terminal window with the Pan tool. Overview: suppose you have a CSV file containing a list of people and want to create an XML file containing a greeting for each of them.
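For orientation, this is roughly what that transformation does, written as plain Java rather than as a PDI transformation built in Spoon; the CSV column layout (header row, name in the first column) is an assumption, not the wiki's exact sample file.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class HelloWorldLikePdi {
    public static void main(String[] args) throws IOException {
        // Read a CSV of people; assume a header row followed by one person per line.
        List<String> lines = Files.readAllLines(Path.of("people.csv"));

        List<String> xml = new ArrayList<>();
        xml.add("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
        xml.add("<rows>");
        for (String line : lines.subList(1, lines.size())) {
            String name = line.split(",")[0].trim();   // assume the first column holds the name
            xml.add("  <row><message>Hello, " + name + "!</message></row>");
        }
        xml.add("</rows>");

        // Write the greetings to an XML file, roughly the end result the PDI
        // Hello World transformation produces with its Spoon-designed steps.
        Files.write(Path.of("hello.xml"), xml);
    }
}
```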
Loop over fields in a MySQL table to generate csv files
Dynamic SQL Queries in PDI a.k.a. Kettle | Adventures with Open Source BI: When doing ETL work, every now and then the exact SQL query you want to execute depends on input parameters determined at runtime. This requirement comes up most frequently when SELECTing data. This article shows the techniques you can employ with the "Table Input" step in PDI to make it execute dynamic or parametrized queries. The samples in the downloads section are self-contained and use an in-memory database, so they work out of the box; just download and run them.
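The article covers how the Table Input step itself handles this; as a point of comparison only, the same idea in plain JDBC is a prepared statement with placeholders. A minimal sketch, assuming an in-memory H2 database with its driver on the classpath; the table and connection URL are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class ParametrizedQuery {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            // Set up a tiny placeholder table so the query has something to hit.
            try (Statement init = con.createStatement()) {
                init.execute("CREATE TABLE sales(region VARCHAR(20), amount INT)");
                init.execute("INSERT INTO sales VALUES ('EMEA', 100), ('APAC', 80)");
            }

            // The query text stays fixed; the runtime parameter is bound separately,
            // which is the plain-JDBC equivalent of a parametrized SELECT.
            String region = args.length > 0 ? args[0] : "EMEA";
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT region, SUM(amount) FROM sales WHERE region = ? GROUP BY region")) {
                ps.setString(1, region);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " -> " + rs.getInt(2));
                    }
                }
            }
        }
    }
}
```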
Slowly changing dimension For example, you may have a dimension in your database that tracks the sales records of your company's salespeople. Creating sales reports seems simple enough, until a salesperson is transferred from one regional office to another. How do you record such a change in your sales dimension?
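One common answer is a Type 2 slowly changing dimension: rather than overwriting the salesperson's office, a new dimension row is added and the old one is closed off with an end date, so historical sales keep pointing at the office that was valid at the time. A hedged sketch of that bookkeeping; the table and field names are illustrative.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

public class ScdType2Sketch {
    // One version of a salesperson dimension row; validTo == null means "current".
    record SalespersonRow(int surrogateKey, String salespersonId, String region,
                          LocalDate validFrom, LocalDate validTo) {}

    public static void main(String[] args) {
        List<SalespersonRow> dim = new ArrayList<>();
        dim.add(new SalespersonRow(1, "SP-42", "North", LocalDate.of(2012, 1, 1), null));

        // SP-42 transfers to the West office on 2013-06-01:
        LocalDate transfer = LocalDate.of(2013, 6, 1);
        for (int i = 0; i < dim.size(); i++) {
            SalespersonRow row = dim.get(i);
            if (row.salespersonId().equals("SP-42") && row.validTo() == null) {
                // Close the old version instead of overwriting it...
                dim.set(i, new SalespersonRow(row.surrogateKey(), row.salespersonId(),
                        row.region(), row.validFrom(), transfer));
            }
        }
        // ...and add a new current version under a new surrogate key.
        dim.add(new SalespersonRow(2, "SP-42", "West", transfer, null));

        dim.forEach(System.out::println);
    }
}
```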
Power Your Decisions With SAP Crystal Solutions
OpenMRS: ETL/Data Warehouse/Reporting
ETL Process: The ETL (Extract, Transform, Load) process consists of several steps, and its architecture depends on the specific data warehouse system. In this post, an outline of the process is given along with the choices that are or could be used for OpenMRS. Data sources, staging area and data targets: Data sources: the only data source for the moment is the OpenMRS database. Staging area: an intermediate area between the source database and the DW database, where the data extracted from the source systems is stored and manipulated through transformations; at this time there is no need for a sophisticated staging area beyond a few independent tables (called orphans), which are stored in the DW database. Data targets: the DW database.
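A rough skeleton of that flow (source, staging/orphan tables, then the DW targets), sketched with plain JDBC rather than any OpenMRS tooling; the connection URLs, credentials, and table names are placeholders, not the actual OpenMRS or DW schema, and the JDBC drivers are assumed to be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class EtlSkeleton {
    public static void main(String[] args) throws Exception {
        try (Connection source = DriverManager.getConnection(
                     "jdbc:mysql://localhost/openmrs_src", "etl_user", "secret");
             Connection dw = DriverManager.getConnection(
                     "jdbc:mysql://localhost/openmrs_dw", "etl_user", "secret")) {

            // Extract: pull rows from the source system.
            try (Statement ext = source.createStatement();
                 ResultSet rs = ext.executeQuery("SELECT patient_id, birthdate FROM patient_src");
                 // Stage: park the raw rows in an independent "orphan" table in the DW database.
                 PreparedStatement stage = dw.prepareStatement(
                         "INSERT INTO stg_patient(patient_id, birthdate) VALUES (?, ?)")) {
                while (rs.next()) {
                    stage.setInt(1, rs.getInt(1));
                    stage.setDate(2, rs.getDate(2));
                    stage.addBatch();
                }
                stage.executeBatch();
            }

            // Transform + load: derive DW-friendly attributes and fill the target table.
            try (Statement load = dw.createStatement()) {
                load.executeUpdate(
                        "INSERT INTO dim_patient(patient_id, birth_year) " +
                        "SELECT patient_id, YEAR(birthdate) FROM stg_patient");
            }
        }
    }
}
```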
Another approach for reporting: A Data Warehouse System. Why would we want to build a data warehouse system? We might consider doing this for a number of reasons. An overview of the data warehouse: how can the above requirements be met, and what are the main components of such a system?
DW Data Model: This post describes the data model (star schemas) for the OpenMRS data warehouse. It will be edited frequently to add documentation for the model and to modify it.
Building Reports (Step By Step Guide) - Documentation - OpenMRS Wiki
openmrs-reporting-etl-olap - A data warehouse system for OpenMRS, based on other open source projects.
Pentaho and OpenMRS Integration
Pentaho ETL and Designs for Dimensional Modeling (Design Page, R&D) - Projects - OpenMRS Wiki
Cohort Queries as a Pentaho Reporting Data Source - Projects - OpenMRS Wiki
Concept Dictionary Creation and Maintenance Under Resource Constraints: Lessons from the AMPATH Medical Record System
Welcome to Apelon DTS
OpenMRS
Advanced Concept Management at OpenMRS
OpenMRS Database Schema
Main Page - MaternalConceptLab