
Design: Repeated Measures Articles


Social Science Research Methods Collection

Repeated Measures Design. Types of Experimental Designs Handout.

Repeated Measures ANOVA - Understanding a Repeated Measures ANOVA. Introduction: Repeated measures ANOVA is the equivalent of the one-way ANOVA, but for related, not independent, groups, and is the extension of the dependent t-test. A repeated measures ANOVA is also referred to as a within-subjects ANOVA or an ANOVA for correlated samples. All these names reflect the nature of the repeated measures ANOVA: a test to detect any overall differences between related means. There are many complex designs that can make use of repeated measures, but throughout this guide we will refer to the simplest case, the one-way repeated measures ANOVA.
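The one-way repeated measures ANOVA described above can be computed by hand by partitioning the total sum of squares into treatment, subject, and error components. A minimal sketch in Python (the data values are invented for illustration; a real analysis would use a statistics package and would also report a p-value):

```python
# Hand-rolled one-way repeated measures ANOVA (illustrative sketch).
def repeated_measures_anova(scores):
    """scores[subject][condition] -> (F, df_treat, df_error)."""
    n = len(scores)       # number of subjects
    k = len(scores[0])    # number of conditions (levels)
    grand = sum(sum(row) for row in scores) / (n * k)
    cond_means = [sum(row[j] for row in scores) / n for j in range(k)]
    subj_means = [sum(row) / k for row in scores]
    # Partition the total sum of squares.
    ss_treat = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_error = ss_total - ss_treat - ss_subj
    df_treat = k - 1
    df_error = (k - 1) * (n - 1)
    F = (ss_treat / df_treat) / (ss_error / df_error)
    return F, df_treat, df_error

# Five subjects measured under three conditions (e.g. three time points).
data = [
    [45, 50, 55],
    [42, 42, 45],
    [36, 41, 43],
    [39, 35, 40],
    [51, 55, 59],
]
F, df1, df2 = repeated_measures_anova(data)
print(f"F({df1}, {df2}) = {F:.2f}")
```

Because subject-to-subject variability is removed from the error term, the test is typically more powerful than a between-subjects one-way ANOVA on the same values.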

When to use a Repeated Measures ANOVA: We can analyse data using a repeated measures ANOVA for two types of study design. In a repeated measures ANOVA, the independent variable has categories called levels or related groups. The two schematics above showed an example of each type of repeated measures ANOVA design, but you will also often see these designs expressed in tabular form, as shown below.

QE2010 Types of Designs: Social Science. Pretest-Posttest Designs - Experimental Research. ProcmixedSASEG. Cornell Stat News #79.

Threats to Validity of Research Design. The books by Campbell and Stanley (1963), Cook and Campbell (1979), and Shadish, Cook, and Campbell (2002) are considered seminal works in the field of experimental design. The following is a summary of their books, with our examples inserted.
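The tabular form referred to above is typically wide format: one row per subject and one column per level of the within-subjects factor. A minimal sketch (all values invented for illustration):

```python
# Wide-format layout for a one-way repeated measures design:
# one row per subject, one column per level (here, three time points).
rows = [
    ("Subject", "Time 1", "Time 2", "Time 3"),
    ("S1", 45, 50, 55),
    ("S2", 42, 42, 45),
    ("S3", 36, 41, 43),
]
for r in rows:
    print("{:<10}{:<10}{:<10}{:<10}".format(*r))
```

Most statistics packages instead expect long format, with one row per subject-by-level observation; converting between the two is a routine reshaping step.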

Problem and Background: Experimental method and essay-writing. Campbell and Stanley point out that adherence to experimentation dominated the field of education through the 1920s (the Thorndike era) but that this gave way to great pessimism and rejection by the late 1930s. However, it should be noted that the departure from experimentation to essay writing (from Thorndike to Gestalt psychology) occurred most often among people already adept in the experimental tradition.

Cumulative wisdom: An interesting point made is that experiments which pit opposing theories against each other probably will not have clear-cut outcomes. Factors Jeopardizing Internal and External Validity. Three Experimental Designs. Three True Experimental Designs. Reference.

Pre-experimental designs. Edwards (2001): Ten Myths on Difference Score Analysis. Prepost. Mixed Analysis: Chap. 10. Mixed Design.

Research Methods PowerPoint presentation. Title: Research Methods.
1. Research Methods: Collecting, Processing and Analyzing Data.
2. Aims of the Session: The purpose of this session is to alert you to the different types of methodology available to you in your research, make you aware of the different techniques that you might use in collecting, presenting and analysing data, and discuss the different kinds of problems that you might encounter in pursuing your research.
3. Contents: Developing Your Research Questions; The Different Types of Research; Selecting Appropriate Research Methods; Robustness of Methods; Structuring Your Methodology; Data Analysis; Problems with the Research Process; Summary.
4. Developing your Research Questions: This section of the presentation examines what you need to do in order to focus your research.
5. The Purpose of Research: The purpose of research is to contribute to a current academic debate, and possibly to advance knowledge in some manner.

Chapter 37. Operations in Evaluating Community Interventions | Section 4. Selecting an Appropriate Design for the Evaluation. What do we mean by a design for the evaluation? Why should you choose a design for your evaluation?

When should you do so? Who should be involved in choosing a design? How do you select an appropriate design for your evaluation? When you hear the word “experiment,” it may call up pictures of people in long white lab coats peering through microscopes. In reality, an experiment is just trying something out to see how or why or whether it works. It can be as simple as putting a different spice in your favorite dish, or as complex as developing and testing a comprehensive effort to improve child health outcomes in a city or state.

Academics and other researchers in public health and the social sciences conduct experiments to understand how environments affect behavior and outcomes, so their experiments usually involve people and aspects of the environment. What do we mean by a design for the evaluation? Every evaluation is essentially a research or discovery project.

Threats to internal validity. Comparative Analysis on Pre/Post-test Data.

The Analysis of Pre-test/Post-test Experiments. Consider a randomized, controlled experiment in which measurements are made before and after treatment. One way to analyze the data is to compare the treatments with respect to their post-test measurements. [figure] Even though subjects are assigned to treatment at random, there may be some concern that any difference in the post-test measurements might be due to a failure of the randomization.

Perhaps the groups differed in their pre-test measurements. [figure] One way around the problem is to compare the groups on the differences between post-test and pre-test, sometimes called change scores or gain scores. Equivalent analyses of the differences include a t-test, a two-group ANOVA, and a repeated measures analysis of variance. However, there is another approach that could be used: analysis of covariance, in which the post-test measurement is the response, treatment is the design factor, and the pre-test is a covariate. The problem was first stated by Lord (1967). Lord was wrong. Summary.

ANOVA - Best practice when analysing pre-post treatment-control designs.
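The contrast between the change-score analysis and ANCOVA described above can be made concrete: the gain-score comparison implicitly fixes the pre-to-post slope at 1, while ANCOVA estimates the pooled within-group slope from the data, so the two estimates can disagree when the groups differ at pre-test. A minimal sketch in Python (the two-subject-per-group data are invented so that the within-group slope is exactly 0.5 and the built-in treatment effect is 3):

```python
def mean(xs):
    return sum(xs) / len(xs)

def gain_score_effect(pre_t, post_t, pre_c, post_c):
    # Difference of mean gains; equivalent to fixing the pre-post slope at 1.
    return (mean(post_t) - mean(pre_t)) - (mean(post_c) - mean(pre_c))

def ancova_effect(pre_t, post_t, pre_c, post_c):
    # Pooled within-group slope of post on pre.
    num = den = 0.0
    for pre, post in ((pre_t, post_t), (pre_c, post_c)):
        mp, mq = mean(pre), mean(post)
        num += sum((x - mp) * (y - mq) for x, y in zip(pre, post))
        den += sum((x - mp) ** 2 for x in pre)
    b = num / den
    # Post-test difference, adjusted for the pre-test difference.
    return (mean(post_t) - mean(post_c)) - b * (mean(pre_t) - mean(pre_c))

# Invented data: post = 0.5 * pre, plus a treatment effect of 3,
# with the treatment group starting higher at pre-test.
pre_t, post_t = [20, 30], [13, 18]
pre_c, post_c = [10, 20], [5, 10]

print(gain_score_effect(pre_t, post_t, pre_c, post_c))  # -2.0
print(ancova_effect(pre_t, post_t, pre_c, post_c))      # 3.0
```

With these data ANCOVA recovers the effect of 3 built into them, while the gain-score comparison gives -2, illustrating why the two approaches are not interchangeable when baselines differ. In a properly randomized experiment the groups are equal at pre-test in expectation, and the two estimates then agree on average, with ANCOVA generally the more precise.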