Design of quasi-experiments
The design of a quasi-experiment relates to a particular type of experiment or other study in which one has little or no control over the allocation of the treatments or other factors being studied. The key difference in this empirical approach is the lack of random assignment. Another element often involved in this method is the use of time series analysis, both interrupted and non-interrupted. Experiments designed in this manner are referred to as having a quasi-experimental design.
Design
The first part of creating a quasi-experimental design is to identify the variables. The quasi-independent variable will be the x-variable, the variable that is manipulated in order to affect a dependent variable. “X” is generally a grouping variable with different levels. Grouping means two or more groups such as a treatment group and a placebo or control group (placebos are more frequently used in medical or physiological experiments). The predicted outcome is the dependent variable, which is the y-variable. In a time series analysis, the dependent variable is observed over time for any changes that may take place. Once the variables have been identified and defined, a procedure should then be implemented and group differences should be examined.[1]
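To make the procedure concrete, the following is a minimal sketch (not taken from the source) of examining group differences between a hypothetical treatment group and a non-equivalent comparison group. The outcome scores, group sizes, and the choice of Welch's t-test are all illustrative assumptions rather than a prescribed method.

```python
# Minimal sketch: examining group differences in a non-equivalent groups design.
# All data are hypothetical. Because the groups were NOT formed by random
# assignment, any difference may reflect pre-existing group differences as
# well as the effect of the treatment.
import numpy as np
from scipy import stats

# Quasi-independent variable ("X"): group membership (treatment vs. comparison).
treatment = np.array([72, 75, 78, 80, 74, 77, 79, 81])   # hypothetical outcome scores
comparison = np.array([70, 71, 73, 69, 72, 74, 68, 71])  # hypothetical outcome scores

# Dependent variable ("Y"): the observed outcome scores above.
result = stats.ttest_ind(treatment, comparison, equal_var=False)  # Welch's t-test
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
print(f"Mean difference = {treatment.mean() - comparison.mean():.2f}")
```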
Advantages
Since quasi-experimental designs are used when randomization is impractical or unethical, they are typically easier to set up than true experimental designs, which require random assignment of subjects.[2] Additionally, quasi-experimental designs minimize threats to external validity, as natural environments do not suffer the problems of artificiality found in a well-controlled laboratory setting.[3] Since quasi-experiments are natural experiments, findings from one study may be applied to other subjects and settings, allowing some generalizations to be made about the population. This method is also efficient for longitudinal research that involves longer time periods and can be followed up in different environments.
Disadvantages
The control allowed through the manipulation of the quasi-independent variable can lead to unnatural circumstances, although the dangers of artificiality are considerably less than in true experiments (quasi-experimental designs are often chosen for field studies where the random assignment of experimental subjects is impractical, unethical, or impossible). The lack of random assignment may make studies more feasible, but it also poses many challenges for the investigator in terms of internal validity: without randomization it is harder to rule out confounding variables, and new threats to internal validity are introduced.[4] Because randomization is absent, some knowledge about the data can be approximated, but conclusions about causal relationships are difficult to draw owing to the variety of extraneous and confounding variables that exist in a social environment. Moreover, even when these threats to internal validity are assessed, causation still cannot be fully established because the experimenter does not have total control over extraneous variables.[5]
Another view
The term quasi-experiment refers to a type of research design that shares many similarities with the traditional experimental design or randomized controlled trial, but specifically lacks the element of random assignment. With random assignment, participants have the same chance of being assigned to a given treatment condition; random assignment thus ensures that the experimental and control groups are equivalent. In a quasi-experimental design, assignment to a given treatment condition is based on something other than random assignment. Depending on the type of quasi-experimental design, the researcher might have control over assignment to the treatment condition but use some criterion other than random assignment (e.g., a cutoff score) to determine which participants receive the treatment, or the researcher may have no control over the treatment condition assignment and the criteria used for assignment may be unknown. Factors such as cost, feasibility, political concerns, or convenience may influence how or whether participants are assigned to a given treatment condition; as such, quasi-experiments are subject to concerns regarding internal validity (i.e., can the results of the experiment be used to make a causal inference?).
There are several types of quasi-experimental designs ranging from the simple to the complex, each having different strengths, weaknesses and applications. These designs include (but are not limited to)[6]:
- The one-group posttest only
- The one-group pretest posttest
- The removed-treatment design
- The case-control design
- The non-equivalent control groups design
- The interrupted time-series design (see the sketch after this list)
- The regression discontinuity design
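As a brief illustration of one of the designs listed above, the following is a minimal sketch (not from the source) of an interrupted time-series analysis via segmented regression. The monthly outcome values, the position of the interruption, and the use of statsmodels are illustrative assumptions.

```python
# Minimal sketch of an interrupted time-series analysis via segmented regression.
# The outcome series and the interruption point are hypothetical.
import numpy as np
import statsmodels.api as sm

y = np.array([10, 11, 10, 12, 11, 12, 13, 12,      # pre-intervention observations
              16, 17, 16, 18, 17, 19, 18, 20])     # post-intervention observations
t = np.arange(len(y))                        # time index (e.g., month number)
post = (t >= 8).astype(int)                  # 1 after the interruption, 0 before
time_since = np.where(post == 1, t - 8, 0)   # time elapsed since the interruption

# Segmented regression: baseline trend, level change, and slope change.
X = sm.add_constant(np.column_stack([t, post, time_since]))
model = sm.OLS(y, X).fit()
print(model.params)  # [intercept, baseline slope, level change, slope change]
```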
Of all of these designs, the regression discontinuity design comes closest to the experimental design, as the experimenter maintains control of the treatment application and it is known to “yield an unbiased estimate of the treatment effects”.[7] It does, however, require more participants and proper modeling of the functional form between the assignment and outcome variables in order to yield the same power as a traditional experimental design.
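A minimal sketch of a sharp regression discontinuity analysis follows (not from the source). The assignment scores, cutoff, outcome model, and treatment effect are hypothetical, and the linear functional form on each side of the cutoff is an illustrative modeling choice.

```python
# Minimal sketch of a sharp regression discontinuity analysis on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
score = rng.uniform(0, 100, 200)           # assignment variable (e.g., a test score)
cutoff = 50.0
treated = (score >= cutoff).astype(int)    # treatment assigned strictly by the cutoff
# Hypothetical outcome: linear in the score plus a treatment effect of 5 at the cutoff.
outcome = 0.2 * score + 5.0 * treated + rng.normal(0, 2, 200)

# Model the functional form of the assignment variable (here, linear on each side
# of the cutoff); the coefficient on `treated` estimates the effect at the cutoff.
centered = score - cutoff
X = sm.add_constant(np.column_stack([centered, treated, centered * treated]))
fit = sm.OLS(outcome, X).fit()
print(fit.params)  # [intercept, slope below cutoff, treatment effect, slope change]
```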
Though quasi-experiments are sometimes shunned by those who consider themselves experimental purists (leading Donald T. Campbell to dub them “queasy experiments”), they are exceptionally useful in areas where it is not feasible or desirable to conduct an experiment or randomized controlled trial. Such instances include evaluating the impact of public policy changes, educational interventions, or large-scale health interventions. The primary drawback of quasi-experimental designs is that they cannot eliminate the possibility of confounding bias, which can hinder one’s ability to draw causal inferences. This drawback is often used to discount quasi-experimental results. However, such bias can be controlled for using various statistical techniques, such as multiple regression, if one can identify and measure the confounding variable(s). Such techniques can be used to model and partial out the effects of confounding variables, thereby improving the accuracy of the results obtained from quasi-experiments. Moreover, the developing use of propensity scores to match participants on variables important to the treatment selection process can also improve the accuracy of quasi-experimental results. In sum, quasi-experiments are a valuable tool, especially for the applied researcher. On their own, quasi-experimental designs do not allow one to make definitive causal inferences; however, they provide necessary and valuable information that cannot be obtained by experimental methods alone. Researchers, especially those interested in investigating applied research questions, should move beyond the traditional experimental design and avail themselves of the possibilities inherent in quasi-experimental designs.[8]
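As an illustration of the statistical control described above, here is a minimal sketch (not from the source) of adjusting for a single measured confounder with multiple regression and of estimating propensity scores. The data-generating process, effect sizes, and use of statsmodels are hypothetical assumptions.

```python
# Minimal sketch: adjusting for a measured confounder and estimating propensity
# scores in a simulated quasi-experiment. All data are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
confounder = rng.normal(0, 1, n)              # a measured confounding variable
# Treatment uptake depends on the confounder (no random assignment).
p_treat = 1 / (1 + np.exp(-confounder))
treated = rng.binomial(1, p_treat)
outcome = 2.0 * treated + 1.5 * confounder + rng.normal(0, 1, n)

# (1) Multiple regression: include the confounder to partial out its effect;
#     the coefficient on `treated` should be close to the simulated effect of 2.0.
X = sm.add_constant(np.column_stack([treated, confounder]))
print(sm.OLS(outcome, X).fit().params)

# (2) Propensity scores: the probability of treatment given the confounder,
#     which can then be used for matching or weighting.
ps_model = sm.Logit(treated, sm.add_constant(confounder)).fit(disp=0)
propensity = ps_model.predict(sm.add_constant(confounder))
```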
Notes
- ^ Gribbons, Barry & Herman, Joan (1997) "True and quasi-experimental designs", Practical Assessment, Research & Evaluation, 5(14).
- ^ CHARM-Controlled Experiments
- ^ http://www.osulb.edu/~msaintg/ppa696/696quasi.htm
- ^ Lynda S. Robson, Harry S. Shannon, Linda M. Goldenhar, Andrew R. Hale (2001) "Quasi-experimental and experimental designs: more powerful evaluation designs", Chapter 4 of Guide to Evaluating the Effectiveness of Strategies for Preventing Work Injuries: How to show whether a safety intervention really works, Institute for Work & Health, Canada
- ^ Research Methods: Planning: Quasi-Exper. Designs
- ^ Shadish et al. (2002)[page needed]
- ^ Shadish et al. (2002, p.242)
- ^ Shadish et al. (2002)[page needed]
References
- Shadish, W.R., Cook, T.D. & Campbell, D.T. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. New York: Houghton Mifflin Company.
External links
- Quasi-Experimental Design at the Research Methods Knowledge Base