Contributing Blogs. AEA - American Evaluation Association: Blogs. Evaluators are blogging!
Some focus on evaluation, some do not. Look for the icon to identify AEA members. If you are a blogger, or on Twitter, send an email to email@example.com to have your blog or Twitter name considered for addition to this page. AEA Blogs. Genuine Evaluation. Blog. Ann's Blog. Evergreen Data. The Wrong Kind of White Space. Long-ish reports are probably never going away entirely, so let’s make them suitable for a digital reading age.
In the olden days, when we printed reports, they often had extra blank pages at the front and back. It probably gave printed materials a sense of... RTP Evaluators. Blog. Just a couple more stops until the Death Star.
@sean_voegeli Many of our employees live a long way from the office. One of our designers has a particularly hellish trek. Instead of whining about it — or just staring at his phone like the rest of us — he started making it a #creativecommute. Sean Voegeli is a talented illustrator, avid Instagrammer, and loyal Star Wars fan. First, where can we find your work? What tools do you use? I like Adobe Draw a lot. How has your #creativecommute affected your day? On the way home, it’s a good way to unwind and transition into dad-mode. How do you decide what to draw?
Which comes first, illustration or photo? Informing Change. Designing an Evaluation Experience. Think Practice Impact. Making Program Evaluation Fun (and Sound Fun). Data Driven Nonprofit Blog — Evaluate for Change. On December 10th, 2014, the Obama administration announced that one billion dollars in funding from a combination of private and public sources will be dedicated to expanding early childhood education programs.
The logic behind this large financial commitment is driven largely by a number of research studies demonstrating the benefits early childhood education has on later educational outcomes. This story is one example of how data can drive policy recommendations and effect change. iEval: Useful Evaluation. BLOG - Visual Brains. The Participation X-ray Hi everyone!
Like many of you, I’m starting to prepare for AEA’s annual conference next week in Denver, where I will be presenting an innovative poster called “The Meta-evaluation Dashboard” – a dashboard that helps us unravel the methodology and decisions made by the evaluators. It is also a valuable tool for evaluation quality assurance. I would like to warmly invite you to stop by during the poster session (next Wednesday the 15th from 19:00 to 20:30) so you can take a look, we can discuss it, and you give... Evaluation is an Everyday Activity » Program Evaluation Discussions. A reader asked how to choose a sample for a survey.
Good question. My daughters are both taking statistics (one in college, one in high school) and this question has been mentioned more than once. So I’ll give you my take on sampling. There are a lot of resources out there (you know, references and other sources). My favorite is in Dillman 3rd edition, page 57. Sampling is easier than most folks make it out to be. You are dealing with an entire population when you survey the audience of a workshop (population 20, or 30, or 50). Now if your workshop is a repeating event with different folks over the offerings, then you will have the opportunity to sample your population because it is over 100 (see Dillman, 3rd edition, page 57).
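The rule of thumb above — survey the whole audience (a census) when the population is small, and draw a random sample only once repeated offerings push the population past roughly 100 — can be sketched in Python. This is an illustrative sketch only: the function name, the threshold of 100, and the sample size of 50 are assumptions for the example, not figures taken from Dillman.

```python
import random

def choose_respondents(population, threshold=100, sample_size=50, seed=42):
    """Survey everyone when the population is at or below the threshold
    (a census); otherwise draw a simple random sample.

    The threshold of ~100 follows the rule of thumb described above;
    the sample size of 50 is purely illustrative.
    """
    if len(population) <= threshold:
        return list(population)    # census: survey the whole workshop audience
    rng = random.Random(seed)      # fixed seed so the draw is reproducible
    return rng.sample(list(population), sample_size)

# One workshop of 30 attendees: population is small, so survey them all.
workshop = [f"attendee_{i}" for i in range(30)]
print(len(choose_respondents(workshop)))       # 30 — a census

# Repeated offerings accumulate 240 distinct attendees: sample instead.
all_offerings = [f"attendee_{i}" for i in range(240)]
print(len(choose_respondents(all_offerings)))  # 50 — a random sample
```

In practice the appropriate sample size depends on the desired confidence level and margin of error; the point of the sketch is only the census-versus-sample decision.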
Evaluation Diva (AKA: Dr. Jenn)'s Blog. PDA News. Written by Emily Subialka Nowariak. Data visualization is a hot topic in the field of evaluation and one we take very seriously here at PDA.
After all, if our clients and other program stakeholders aren’t able to make sense of our reports, then we aren’t holding up our end of the deal. Last year we started holding informal data viz meetings once a month over the lunch hour. At these meetings we usually discuss data viz tools, examples of good (and sometimes bad) data visualizations we’ve come across or created ourselves, and, perhaps most importantly, any data visualization challenges we are having in our own work. For example, one of us will often present some drafted graphs or other visual displays that would benefit from the group’s feedback and suggestions for improvement.
An example: Becky incorporated this feedback into her table and, after a few finishing touches, transformed her large, complex table into an easy-to-understand visual. Eval Central — Platform for Evaluation Expertise. Actionable data. Freshspectrum — Rejecting the null: a cartoon blog for alternatives. Building Capacity Together. On December 16, 2013, I embarked on a planning exercise for Strong Roots Consulting, specifically in the areas of mission/vision/goals, business practices, and activities for the year ahead.
Today marks the one-month point of that process, and with it an update. If I had followed the traditional New Year’s approach, I should have spent the past couple of weeks thinking about what I want to accomplish with Strong Roots Consulting in 2014. Instead, my mind has been on the bigger picture: what is my vision of success, and what is the mission that Strong Roots is working to follow? The former is by its nature more blue-sky thinking – abstract and long term – while the latter starts bringing the ideals back to earth. I think I found a good balance: Vision: Changemakers – including individuals, community groups, non-profit organizations, charities, social enterprises, and social-purpose businesses – have the resources, knowledge, skills, and courage to work together and better our world.
Evaluation Community Blog. Monica Hargraves, Manager of Evaluation for Extension and Outreach, Cornell Office for Research on Evaluation, mjh51@cornell.edu. The Evaluation Purpose Statement and especially the Evaluation Questions described in the previous post are absolutely essential for guiding an evaluation.
Ironically, they are often somewhat unfamiliar to educators – but skipping them invites exactly the kind of data surprises and disappointments that can be so frustrating. This week’s post focuses on how to refine the Evaluation Questions in order to really keep your evaluation on target. An important first step is to get attuned to how much of a difference wording makes. Barrington Research Group, Inc. #17 Bloom & Me: Beyond the Data Mindset. Written by Gail Vallance Barrington. The recent focus on data visualization has generated a lot of buzz about how we present evaluation findings.
Knowledge Advisory Group. Intelligent measurement. Sustainable Human & Social Development. Evaluspheric Perceptions. "So What" Your Weekly Guide to Advocacy with Impact. Evalû. In our last post, we introduced our primary data collection tool, EpiSurveyor, developed by the social enterprise DataDyne. EpiSurveyor is critical to our work because we rely on it for fast, reliable, and easy mobile data collection in all our evaluations.
Because we work largely on international development projects, simplicity and mobility are two critical elements of our success, and EpiSurveyor has met both needs time and time again. Given EpiSurveyor’s importance to our work, we wanted to spotlight two (out of a total staff size of nine) key members of the DataDyne team. In this first post, Fatima, our evaluation specialist, met with Joel in DC and asked about his big ideas for the future of EpiSurveyor (soon to be Magpi) and mobile technology. On Top Of The Box Evaluation. Empowerment Evaluation. Cultural Defective + Cultural Detective = Cultural Effective. The Evaluation Forum. Research + evaluation. Seth's Blog.