Freerisk

What is Freerisk? Freerisk is a project with the goal of making freely available the data, algorithms and tools necessary to perform financial modeling. Although much important data is accessible from government agencies, it is neither well integrated nor available in a machine-readable format.

Why is it needed? We believe that greater transparency and a greater diversity of ideas are the key to allowing continued innovation in finance while reducing the risk of crises. Traditional data and model providers require contracts and usually do not allow data to be republished in machine-readable formats.

So what are you building? There are many components to Freerisk, some available today and others still in development.

How do you hope this will be used? The purpose of creating an open system is that it will be used in creative ways that you didn't anticipate.

If you have more questions, you can contact us. Copyright 2009.
Public Data Sets on Amazon Web Services (AWS)

Here are some examples of popular Public Data Sets:

NASA NEX: A collection of Earth science data sets maintained by NASA, including climate change projections and satellite images of the Earth's surface
Common Crawl Corpus: A corpus of web crawl data composed of over 5 billion web pages
1000 Genomes Project: A detailed map of human genetic variation
Google Books Ngrams: A data set containing Google Books n-gram corpuses
US Census Data: US demographic data from the 1980, 1990, and 2000 US Censuses
Freebase Data Dump: A data dump of all the current facts and assertions in the Freebase system, an open database covering millions of topics

The data sets are hosted in two possible formats: Amazon Elastic Block Store (Amazon EBS) snapshots and/or Amazon Simple Storage Service (Amazon S3) buckets. If you have any questions or want to participate in our Public Data Sets community, please visit our Public Data Sets forum.
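As a concrete illustration of the S3 hosting option: objects in a public S3 bucket can be fetched over plain HTTPS without an AWS account, using S3's virtual-hosted-style URL pattern. The sketch below only builds such a URL; the object key shown is hypothetical and serves to illustrate the pattern, not to name a real file in the Common Crawl bucket.

```python
def public_s3_url(bucket: str, key: str, region: str = "us-east-1") -> str:
    """Build the public HTTPS URL for an object in a public S3 bucket,
    using the virtual-hosted-style pattern: https://<bucket>.s3.<region>.amazonaws.com/<key>."""
    host = f"{bucket}.s3.{region}.amazonaws.com"
    return f"https://{host}/{key}"

# Hypothetical key, shown only to illustrate the URL shape:
url = public_s3_url("commoncrawl", "crawl-data/index.html")
print(url)
# The URL could then be fetched with urllib.request.urlopen(url) (requires network access).
```

EBS snapshots work differently: they are attached as block devices to an EC2 instance in the same region, so they are convenient when a data set needs to look like a local filesystem rather than a collection of downloadable objects.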
The State Decoded, Now Solr-Powered

The State Decoded project is putting U.S. state laws online, making them easy to search, understand and navigate. Our laws are organized badly, but The State Decoded is reorganizing them automatically, connecting people with the legal information they need with the ease of a Google search.

In implementing many of the features necessary to provide this experience, it would be easy to reinvent the wheel. At its core, all of this is about the same thing: analyzing a series of texts and determining how they relate to one another. Many of these design patterns already exist in one piece of software: Solr. Solr is a natural fit for The State Decoded, and its use has been tested on Virginia Decoded, one of the state-level implementations of the State Decoded software.

One feature that Solr provides out of the box is a concept of document relatedness. Another is the ability to respond to remote search queries.
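The document-relatedness feature mentioned above is exposed in Solr as the MoreLikeThis component, which can be enabled with query parameters on an ordinary search request. The sketch below builds such a request URL; the core name (statedecoded), the document id, and the field names are hypothetical placeholders, not details taken from the project.

```python
from urllib.parse import urlencode

def more_like_this_query(base_url: str, doc_id: str, similarity_fields: list[str]) -> str:
    """Build a Solr request URL that asks for documents related to doc_id,
    using Solr's MoreLikeThis component on a standard /select query."""
    params = {
        "q": f"id:{doc_id}",                    # look up the source document
        "mlt": "true",                          # enable the MoreLikeThis component
        "mlt.fl": ",".join(similarity_fields),  # fields used to judge relatedness
        "wt": "json",                           # response format
    }
    return f"{base_url}/select?{urlencode(params)}"

# Hypothetical core, section number, and fields, for illustration only:
url = more_like_this_query("http://localhost:8983/solr/statedecoded",
                           "18.2-57", ["catch_line", "text"])
print(url)
```

The same /select endpoint answers remote search queries over HTTP, which is how a site like Virginia Decoded can hand full-text search off to Solr rather than implementing it itself.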
New Bill Helps Expand Public Access to Scientific Knowledge

Internet users around the world got a Valentine's Day present yesterday in the form of new legislation that requires U.S. government agencies to improve public access to federally funded research. The proposed mandate, called the Fair Access to Science & Technology Research Act, or FASTR (PDF), is simple. Agencies like the National Science Foundation, which invests millions of taxpayer dollars in scientific research every year, must design and implement a plan to facilitate public access to, and robust reuse of, the results of that investment. The contours of the plans are equally simple: researchers who receive funding from most federal agencies must submit a copy of any resulting journal articles to the funding agency, which will then make that research freely available to the world within six months. The proposed changes reflect, but also improve upon, the National Institutes of Health's public access policy. The bill isn't perfect.
Open Research Data Handbook – Call for Case Studies

The OKF Open Research Data Handbook – a collaborative and volunteer-led guide to Open Research Data practices – is beginning to take shape, and we need you! We're looking for case studies showing the benefits of open research data: either researchers with personal stories to share or people with relevant expertise willing to write short sections. We're looking to develop a resource that provides an introduction to open research data: what open research data actually is, the benefits of opening it up, and the processes and tools researchers need to do so, with examples from different academic disciplines. Following a couple of sprints, a few of us are in the process of collating the first few chapters, and we'll be asking for comment on these soon. In the meantime, please send us case studies to include, or let us know if you are willing to contribute your expertise to this handbook.