EDTECH 505

Survey Software - Crosstabs Software - Online and Paper Surveys. LimeSurvey.

Analyse This!!! - introduction - data analysis quiz. Do you know the difference between qualitative and quantitative data analysis? Look at the following descriptions and try to identify which relates to quantitative data and which to qualitative data.

Suggested scenarios – picture the scene.

Scenario 1: Most public libraries now offer computer facilities and training. You are interested in finding out how these facilities are being used and what people think of them. You have conducted some interviews and focus groups, and now you are ready to do something with the responses. For this scenario the data you have collected is mainly qualitative, so have a look at the Qualitative section to see what to do next.

Scenario 2: Most students will use the Internet to find information for themselves or for coursework. So, you've handed out a questionnaire and now you're ready to do something with the responses you've had back. For this scenario the data you have collected is mainly quantitative, so have a look at the Quantitative section to see what to do next.
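Scenario 2 is the quantitative case, and the kind of crosstab that survey packages such as LimeSurvey produce can also be built straight from a spreadsheet of responses. The following is a minimal, hypothetical Python/pandas sketch, not taken from any of the resources above: the file name responses.csv and the column names are assumptions standing in for whatever your questionnaire actually collected.

```python
# Hypothetical crosstab of questionnaire responses (quantitative analysis).
# Assumes a CSV with one row per respondent and columns "year_of_study"
# and "internet_for_coursework" (e.g. "Daily", "Weekly", "Rarely").
import pandas as pd

responses = pd.read_csv("responses.csv")

# Counts of each usage level, broken down by year of study.
counts = pd.crosstab(responses["year_of_study"],
                     responses["internet_for_coursework"])

# The same table as row percentages, which is usually easier to compare.
percentages = pd.crosstab(responses["year_of_study"],
                          responses["internet_for_coursework"],
                          normalize="index") * 100

print(counts)
print(percentages.round(1))
```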

OERL: Instruments. OERL's instrument collection is designed to provide models for the development of new instruments. Users are encouraged to use or adapt instruments, sections, or individual questions to create instruments that are tailored to their projects. We caution users, however, that sound instrument development requires further piloting of any instrument that has been adapted for use in a context that differs significantly from the context in which it was developed. The instruments were selected from data collection forms and protocols developed especially for evaluations of actual NSF-funded projects. There are also some instruments from technology projects funded by the U.S. Department of Education. The instruments gather information that is not contained in data and indicators available from archival sources (such as registrar records of courses taken and grade-point averages). Also available are answers to frequently asked questions about instruments and a complete list of contributors.

Mixed-Method Evaluations: Start. Directorate for Education and Human Resources, Division of Research, Evaluation and Communication, National Science Foundation, August 1997. The National Science Foundation (NSF) provides awards for research and education in the sciences and engineering.

Program Evaluation Tutorial | OMERAD | College of Human Medicine | Michigan State University. Contents: Introduction; STEP 1: Program Goal; STEP 2: Program Objectives; STEP 3: Program Description; STEP 4: Evaluation Questions; STEP 5: Sources of Evaluation Data; STEP 6: Methods of Data Collection; References & Resources; About This Tutorial; Why Evaluate Programs?

The tutorial also covers the Two Goals of Evaluation and How Do Program Planning and Evaluation Relate?, and provides an Evaluation Planning Tool (PDF or Word versions), Example 1 (an evaluation plan filled in through STEP 1), and Example 2 (a complete evaluation plan example).

Evaluation. Revised UNODC Evaluation Policy published. This document presents the United Nations Office on Drugs and Crime (UNODC) evaluation policy, the set of principles and rules that guide the Organisation's decisions and actions when planning, conducting, disseminating and using evaluations. The policy was developed in consultation with the Executive Director, Senior Management of UNODC and Member States, and fully replaces the previous evaluation policy.

This document provides staff, as well as Member States and external evaluators, with information on international principles for evaluation, the role and application of evaluation in UNODC, and related mandates and resolutions. It serves as a framework, complemented by the Evaluation Handbook. The policy is informed by existing evaluation policies within the UN while meeting the specific needs of UNODC. UNODC Evaluation Policy (available for download). UNODC Evaluation Meta-Analysis 01/2011–12/2014 published.

Evaluation Tools.

School-wide Evaluation Tool Implementation Manual: The SET Implementation Manual was developed to provide guidance and technical assistance to those who would like to use the School-wide Evaluation Tool (SET) to assess a school's fidelity of implementation of school-wide positive behavior support.

Early Childhood System-wide Evaluation Tool: The System-wide Evaluation Tool (SET) is designed to assess and evaluate the critical features of program-wide effective behavior support across each academic school year in early childhood settings.

AIM: Assess-Intervene-Monitor FBA Tool: This tool guides school teams through the process of identifying functions of the problem behavior, developing and planning intervention plans, and monitoring the effectiveness of the intervention.

Benchmarks for Advanced Tiers (BAT): The BAT allows school teams to self-assess the implementation status of Tiers 2 (secondary, targeted) and 3 (tertiary, intensive) behavior support systems.

Behavior Support Plan Template.

Evaluation Tools and Instruments. Most evaluations require the use of a data collection tool – a survey or other data collection instrument. Evaluators either need to adopt or adapt tools "off the shelf" or create new ones. Either method can pose challenges: tools that have been developed for one evaluation may not prove suitable for another, at least not without careful modification, while creating new tools requires expertise in measurement and instrument design. How do you know if an off-the-shelf instrument is appropriate for your needs? Good question! When considering the use of an instrument, keep in mind questions such as: what is the instrument measuring? We've gathered a collection of tools and instruments that can be used for evaluating outcomes of informal STEM education projects or that can serve as starting points for modification.

Assessment Tools in Informal Science (ATIS): ATIS is a searchable website of assessment tools for STEM learning in educational, especially out-of-school time, environments.

Developing ELL Programs: Introduction.

EvSys 0.

Evaluation: What is it and why do it? | MEERA. Evaluation. What associations does this word bring to mind? Do you see evaluation as an invaluable tool to improve your program? Or do you find it intimidating because you don't know much about it? Regardless of your perspective on evaluation, MEERA is here to help! The purpose of this introductory section is to provide you with some useful background information on evaluation. What is evaluation? Evaluation is a process that critically examines a program. Should I evaluate my program? Experts stress that evaluation can improve program design and implementation: it is important to periodically assess and adapt your activities to ensure they are as effective as they can be.

It can also demonstrate program impact: evaluation enables you to demonstrate your program's success or progress. Why conduct evaluations? (Gus Medina, Project Manager, Environmental Education and Training Partnership.) There are some situations where evaluation may not be a good idea (adapted from Pancer, S.).

References. RR 11 15.

Resources for Program Evaluation and Performance Monitoring | Family Health Outcomes Project. Program Evaluation Basics: FHOP Planning Guide Chapter 4, "Developing Objectives, Performance Measures, and an Action Plan" (PDF); The Planning Cycle Step 5: Develop Objectives and Performance Measures.

CDC Healthy Youth! Program Evaluation Resources:
• Brief 13 Data Collection Methods for Program Evaluation: Focus Groups (pdf 153K, text 5K)
• Brief 14 Data Collection Methods for Program Evaluation: Questionnaires (pdf 220K, text 5K)
• Brief 15 Checklist to Evaluate the Quality of Questions (pdf 152K, text 5K)
• Brief 16 Data Collection Methods for Program Evaluation: Observation (pdf 204K, text 7K)
• Brief 17 Data Collection Methods for Program Evaluation: Interviews (pdf 178K, text 10K)
• Brief 18 Data Collection Methods for Program Evaluation: Document Review (pdf 166K, text 10K)
Other Evaluation Resources from CDC. W.K.

The Community Tool Box – Evaluating the Initiative: This part of the Community Tool Box provides a framework and supports for developing an evaluation of a community program or initiative.

Selected Evaluation Terms. Evaluation terminology can be confusing to even the most seasoned of evaluators. This resource provides commonly accepted definitions of evaluation terms frequently used in the out-of-school time field.

For "real life" examples of how these terms are used, check our Out-of-School Time Evaluation Database, currently offering detailed evaluation profiles of over 20 out-of-school time programs nationwide.

Accountability: Accountability means that a public or private agency, such as a state education agency, that enters into a contractual agreement to perform a service, such as administering 21st Century Community Learning Center programs, will be held answerable for performing according to agreed-on terms, within a specified time period, and with a stipulated use of resources and performance standards.

Benchmark: (1) An intermediate target to measure progress in a given period using a certain indicator. (2) A reference point or standard against which to compare performance or achievements.

Program Development and Evaluation – University of Wisconsin-Extension. This course provides a holistic approach to planning and evaluating education and outreach programs. It helps program practitioners use and apply logic models – a framework and way of thinking to help us improve our work and be accountable for results.

In this guide you will learn how to seamlessly import data from Qualtrics into MAXQDA. We'll cover export options in Qualtrics, data cleanup in Excel, and import options in MAXQDA. By Ellen Bechtol & Christian Schmieder, Program Development & Evaluation Unit, UW-Cooperative Extension.
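The guide itself works through Excel, but as a rough companion here is a minimal, hypothetical Python/pandas sketch of the same kind of cleanup. Nothing in it comes from the guide: the file name, the column names, and the assumption that the Qualtrics CSV export carries a couple of extra header rows above the data are all placeholders to adapt to your own export.

```python
# Hypothetical cleanup of a Qualtrics CSV export before importing into MAXQDA.
# Assumption: "qualtrics_export.csv" has the field IDs in the first row and
# extra header rows (question text / metadata) immediately below it.
import pandas as pd

# Skip the extra header rows; adjust skiprows to match your export.
df = pd.read_csv("qualtrics_export.csv", skiprows=[1, 2])

# Keep only the open-ended columns you plan to code in MAXQDA,
# plus an ID column so responses stay traceable (names are hypothetical).
columns_to_keep = ["ResponseId", "Q5_open_comment", "Q7_suggestions"]
df = df[columns_to_keep]

# Drop rows where every open-ended field is empty.
df = df.dropna(subset=["Q5_open_comment", "Q7_suggestions"], how="all")

# Write a clean file that MAXQDA (or Excel) can pick up.
df.to_excel("clean_for_maxqda.xlsx", index=False)
```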

In this guide you will learn how to pull a random sample from a MAXQDA dataset, using the random cell function in Excel.
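The Excel approach described there boils down to attaching a random number to each row and sorting on it. Here is the same idea as a minimal, hypothetical Python sketch, offered as an analogy to the Excel workflow rather than anything taken from the guide; the file name and sample size are assumptions.

```python
# Hypothetical analogue of the Excel "fill a helper column with =RAND() and sort"
# trick for drawing a simple random sample from an exported MAXQDA spreadsheet.
import numpy as np
import pandas as pd

df = pd.read_excel("maxqda_segments.xlsx")   # assumed export of coded segments

rng = np.random.default_rng(seed=505)        # fixed seed so the draw is reproducible
df["rand"] = rng.random(len(df))             # one random number per row, like =RAND()

# Sort on the random column and keep the first n rows: an n-row simple random sample.
sample = df.sort_values("rand").head(25).drop(columns="rand")
sample.to_excel("random_sample.xlsx", index=False)
```

pandas can also do this in one call with df.sample(n=25, random_state=505); the explicit random column is shown only to mirror the Excel steps.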

InformalScience: informal learning projects, research and evaluation.

The Evaluation Center.

Basic Guide to Program Evaluation (Including Many Additional Resources). © Copyright Carter McNamara, MBA, PhD, Authenticity Consulting, LLC. Adapted from the Field Guide to Nonprofit Program Design, Marketing and Evaluation. This document provides guidance toward planning and implementing an evaluation process for for-profit or nonprofit programs; there are many kinds of evaluations that can be applied to programs, for example goals-based, process-based and outcomes-based. Nonprofit organizations are increasingly interested in outcomes-based evaluation. If you are interested in learning more about outcomes-based evaluation, see the sections Outcomes-Evaluation and Outcomes-Based Evaluations in Nonprofit Organizations. Sections of this topic include online guides, Outcomes-Evaluation, and general resources; also see Evaluations (many kinds) and related Library topics.

Evaluation Tools – MEASURE Evaluation. MEASURE Evaluation developed these tools with the goal of maximizing program results through the systematic collection and analysis of information and evidence about health program performance and impact.

How Do We Know if a Program Made a Difference? A Guide to Statistical Methods for Program Impact Evaluation: This manual provides an overview of core statistical and econometric methods for program impact evaluation and, more generally, causal modelling.

Evaluating Family Planning Programs with Adaptations for Reproductive Health: This manual prepares readers to differentiate between the main types of program evaluation, program monitoring, and impact assessment.

Evaluating HIV/AIDS Prevention Projects: A Manual for Nongovernmental Organizations: The purpose of this manual is to demystify the evaluation process, especially for staff who are not specialized in evaluation techniques.
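To give a flavour of the simplest end of the methods a guide like this covers, here is a hypothetical treatment-control difference in means with a two-sample t-test. The numbers are made up and the snippet is not from the MEASURE Evaluation manual; a credible impact evaluation would also need a design (randomization, matching, and so on) that justifies treating the comparison group as a valid counterfactual.

```python
# Hypothetical difference-in-means impact estimate on made-up outcome data.
from statistics import mean
from scipy import stats

# Outcome scores for participants (treatment) and a comparison group (control).
treatment = [72, 68, 75, 80, 66, 74, 79, 71]
control   = [65, 70, 63, 68, 66, 62, 69, 64]

effect = mean(treatment) - mean(control)               # naive impact estimate
t_stat, p_value = stats.ttest_ind(treatment, control)  # two-sample t-test

print(f"Estimated effect: {effect:.1f} points (t = {t_stat:.2f}, p = {p_value:.3f})")
```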

Survey Tools.

Basics of Developing Questionnaires. © Copyright Carter McNamara, MBA, PhD, Authenticity Consulting, LLC. Adapted from the Field Guide to Consulting and Organizational Development. Whether developing questions for questionnaires, interviews, or focus groups, there are certain guidelines that help to ensure that respondents provide information that is useful and can later be analyzed. Sections of this topic include Types of Information Collected by Questions, Two Types of Questions, Key Preparation, Directions to Respondents, Content of the Questions, Wording of the Questions, and Order of the Questions. Also see There Is No Hope of Doing Perfect Research and Creating and Implementing Your Data Collection Plan.

Program Evaluation Standards Statements « Joint Committee on Standards for Educational Evaluation. In order to gain familiarity with the conceptual and practical foundations of these standards and their applications to extended cases, the JCSEE strongly encourages all evaluators and evaluation users to read the complete book, available for purchase and referenced as follows: Yarbrough, D. B., Shulha, L. M., Hopson, R. The standard names and statements, as reproduced below, are under copyright to the JCSEE and are approved as an American National Standard.

Utility Standards: The utility standards are intended to increase the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs. U1 Evaluator Credibility: Evaluations should be conducted by qualified people who establish and maintain credibility in the evaluation context.

Feasibility Standards: The feasibility standards are intended to increase evaluation effectiveness and efficiency. Propriety Standards. Accuracy Standards.

MandE: tools, methods, approaches.

3ieimpact – 3ie: International Initiative for Impact Evaluation | Evaluating Impact, Informing Policy, Improving Lives.

CDC Evaluation Resources - Program Evaluation - CDC.

AEA - American Evaluation Association: Home.

Evaluation Toolbox.