
The inverted pyramid of data journalism

I’ve been working for some time on picking apart the many processes which make up what we call data journalism. Indeed, if you have read the chapter on data journalism (blogged draft) in my Online Journalism Handbook, or seen me speak on the subject, you’ll have seen my previous diagram that tries to explain those processes. I’ve now revised that considerably, and what I’ve come up with bears some explanation. I’ve cheekily called it the inverted pyramid of data journalism, partly because it begins with a large amount of information which becomes increasingly focused as you drill down into it until you reach the point of communicating the results. What’s more, I’ve also sketched out a second diagram that breaks down how data journalism stories are communicated – an area which I think has so far not been very widely explored. But that’s for a future post. I’m hoping this will be helpful to those trying to get to grips with data, whether as journalists, developers or designers.

http://onlinejournalismblog.com/2011/07/07/the-inverted-pyramid-of-data-journalism/

Related: Web Articles

Data journalism at the Guardian: what is it and how do we do it? Here's an interesting thing: data journalism is becoming part of the establishment. Not in an Oxbridge elite kind of way (although here's some data on that) but in the way it is becoming the industry standard.

Capturing Value: Data Journalism as a Revenue Supplement By Jake Batsell The byproducts of journalism rarely have value to anyone besides the reporters who gather and assemble the information. (Exhibit A: The troves of spiral notebooks, manila folders and microcassettes left over from my newspaper days, still gathering dust in my garage.) But more news organizations are discovering that cleaned-up, searchable databases have extra value beyond their journalistic utility — and, better yet, can generate revenue to support even more public-interest reporting. In February, ProPublica unveiled its Data Store, offering both free and premium data sets to journalists, researchers and corporate clients. Raw data obtained through public records requests can be downloaded gratis; premium data sets polished by ProPublica carry fees ranging from $200 to $10,000.

Which TV Shows Were the Most Social in June? [INFOGRAPHIC] Realtime social media tracker Trendrr has released an infographic detailing the biggest winners in the social TV space in June 2011. Culling data from the Trendrr.tv social TV index, the graphic breaks down the top broadcast networks, cable networks and TV shows, based on social interaction. From a programming standpoint, the two big winners in June 2011 were NBC's The Voice and the BET Awards. The BET Awards drew 1.4 million social impressions, which helped it rank first in cable programming and helped BET rank as the most social cable network. Likewise, NBC's big hit, The Voice, was not only the most social show on broadcast TV, it helped NBC maintain a sizable lead over other broadcast networks in terms of social engagement.

Kenya launches Africa's first Open Data Initiative - TNW Africa Kenya recently launched Sub-Saharan Africa’s first Open Data Initiative, the latest in a series of firsts for the East African country this year, which has included the launch of Africa’s first mobile apps lab back in June. The Kenya Open Data Initiative (KODI) was launched at a high-profile event in Nairobi yesterday, with Kenya’s President Mwai Kibaki present as well as politicians, government officials and IT professionals. It was launched in partnership with organizations such as The World Bank, Ushahidi and the iHub. The initiative aims to make core government development, demographic, statistical and expenditure data available in a useful digital format for anyone to access. Currently there are over 160 datasets on the platform, and there have already been some interesting applications of the datasets by a number of organizations.

Scraping for Journalists by Paul Bradshaw Scraping - getting a computer to capture information from online sources - is one of the most powerful techniques for data-savvy journalists who want to get to the story first, or find exclusives that no one else has spotted. Faster than FOI and more detailed than advanced search techniques, scraping also allows you to grab data that organisations would rather you didn’t have - and put it into a form that allows you to get answers. Scraping for Journalists introduces you to a range of scraping techniques - from very simple ones no more complicated than a spreadsheet formula, to more complex challenges such as scraping databases or hundreds of documents. At every stage you'll see results - but you'll also be building towards more ambitious and powerful tools. You’ll be scraping within 5 minutes of reading the first chapter - but more importantly you'll be learning key principles and techniques for dealing with scraping problems.
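To make the idea concrete, here is a minimal sketch of the kind of simple scrape the blurb describes, using only Python's standard library. The HTML snippet and the council/spending figures are invented for illustration; a real scrape would fetch a live page with urllib.request instead of using an inline string.

```python
from html.parser import HTMLParser

# Hypothetical page fragment standing in for a live source;
# in practice you would fetch this with urllib.request.urlopen().
HTML = """
<table>
  <tr><td>Council A</td><td>1200</td></tr>
  <tr><td>Council B</td><td>950</td></tr>
</table>
"""

class TableScraper(HTMLParser):
    """Collects the text of every <td> cell, grouped by table row."""
    def __init__(self):
        super().__init__()
        self.rows = []        # finished rows
        self._row = []        # cells of the row being parsed
        self._in_td = False   # are we inside a <td>?

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._in_td = True
        elif tag == "tr":
            self._row = []

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_td:
            self._row.append(data.strip())

scraper = TableScraper()
scraper.feed(HTML)
print(scraper.rows)  # each row as a list of cell strings
```

Once the rows are in a plain Python list, they can be written to CSV, filtered, or joined against other data - the "form that allows you to get answers" the blurb refers to.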

MongoDB - Why Does NoSQL Matter? — DatabaseJournal.com In recent years, the drumbeat of vendors proclaiming the ascendancy of NoSQL has become increasingly loud. One of the NoSQL vendors that is seeing business results from its NoSQL solution is 10gen, the lead commercial sponsor behind the open source MongoDB NoSQL database. "We're seeing the NoSQL space really taking off now and it's being used in a significant way by a lot of people, including a lot of large enterprises," Dwight Merriman, CEO and co-founder of 10gen, told InternetNews.com.

In the age of big data, data journalism has profound importance for society - Data The promise of data journalism was a strong theme throughout the National Institute for Computer-Assisted Reporting’s (NICAR) 2012 conference. In 2012, making sense of big data through narrative and context, particularly unstructured data, will be a central goal for data scientists around the world, whether they work in newsrooms, on Wall Street or in Silicon Valley. Notably, that goal will be substantially enabled by a growing set of common tools, whether they’re employed by government technologists opening Chicago, healthcare technologists or newsroom developers. At NICAR 2012, you could literally see the code underpinning the future of journalism written – or at least projected – on the walls. “The energy level was incredible,” said David Herzog, associate professor for print and digital news at the Missouri School of Journalism, in an email interview after NICAR. “I didn’t see participants wringing their hands and worrying about the future of journalism.”

Ten ways journalists can use Google+ Since Google+ (plus) was launched a week ago, those who have managed to get invites to the latest social network have been testing out circles and streams and trying to work out how it fits alongside Twitter, Facebook and LinkedIn. Here are 10 ways Google+ can be used for building contacts, news gathering and sharing: 1. As “a Facebook for your tweeps” This is how Allan Donald has described Google+ in an update.

A fundamental way newspaper sites need to change A blog entry titled 9 Ways for Newspapers to Improve Their Websites has been making the rounds lately. I don’t write about the online news industry on this site as much as I used to, but this article inspired me to collect my current thinking on what newspaper sites need to do. Here, I present my opinion of one fundamental change that needs to happen.

Data-driven journalism Data-driven journalism, often shortened to "ddj", is a term in use since 2009/2010 to describe a journalistic process based on analyzing and filtering large data sets for the purpose of creating a news story. Main drivers for this process are newly available resources such as "open source" software and "open data". This approach to journalism builds on older practices, most notably on CAR (an acronym for "computer-assisted reporting"), a label used mainly in the US for decades. Other labels for partially similar approaches include "precision journalism", based on a book by Philip Meyer, published in 1972, in which he advocated the use of techniques from the social sciences in researching stories. Data-driven journalism has an even wider approach. As projects like the MPs' expenses scandal (2009) and the 2013 release of the "Offshore Leaks" demonstrate, data-driven journalism can assume an investigative role, dealing on occasion with "not-so-open", i.e. secret, data.
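The core loop that the definition above describes - filtering a large data set down to the newsworthy rows - can be sketched in a few lines of standard-library Python. The department names, suppliers and spending figures below are invented for illustration, standing in for a real public-spending release.

```python
import csv
import io

# Invented sample data standing in for a real public-spending CSV.
RAW = """department,supplier,amount
Health,Acme Ltd,250000
Health,Beta plc,12000
Transport,Acme Ltd,780000
Education,Gamma Co,4500
"""

# The "filter" step: keep only payments above a threshold,
# the kind of cut that often surfaces a story lead.
THRESHOLD = 100_000
rows = csv.DictReader(io.StringIO(RAW))
big_payments = [r for r in rows if int(r["amount"]) > THRESHOLD]

for r in big_payments:
    print(r["department"], r["supplier"], r["amount"])
```

In a real newsroom workflow the interesting part starts after the filter - noticing, for instance, that the same supplier appears across departments - but the mechanics are no more complicated than this.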

TimeTracker / Usage tracker is the front-end script to App::TimeTracker. tracker allows you to easily track and report the time you spend on various jobs, projects, tasks etc. from the command line. Custom commands or adaptations to your workflow can be implemented via an "interesting" set of Moose-powered plug-ins. You can configure different sets of plug-ins for different jobs or projects. Tip: use tracker plugins to list all installed plug-ins. Read more about each plug-in in App::TimeTracker::Command::PLUGIN-NAME.

Recalculating the newsroom: The rise of the journo-coder? This is an edited version of a chapter from Data Journalism: Mapping the Future, being launched tomorrow by Abramis academic publishing, republished with kind permission. Data Journalism: Mapping the Future (RRP £15.95) is available at a reduced rate of £12 for Journalism.co.uk readers.

‘Dissidents’ in Facebook threat to shoot youths By Patrice Dougan – 09 July 2011 The Facebook page has posted photographs and names of north Belfast youths, calling them “scum” and threatening to shoot them. It makes a number of allegations about young men in the Ardoyne area, branding them thieves and burglars. And the web page has already gathered a number of followers – with 148 people listed as ‘friends’.

Visualize This: How to Tell Stories with Data Data visualization is a frequent fixation around here and, just recently, we looked at 7 essential books that explore the discipline’s capacity for creative storytelling. Today, a highly anticipated new book joins their ranks – Visualize This: The FlowingData Guide to Design, Visualization, and Statistics, penned by Nathan Yau of the fantastic FlowingData blog (which also makes this a fine addition to our running list of blog-turned-book success stories). Yau offers a practical guide to creating data graphics that mean something, that captivate and illuminate and tell stories of what matters – a pinnacle of the discipline’s sensemaking potential in a world of ever-increasing information overload. From asking the right questions to exploring data through the visual metaphors that make the most sense to seeing data in new ways and gleaning from it the stories that beg to be told, the book offers a brilliant blueprint to practical eloquence in this emerging visual language.
