Research Design Discussions

Please read: I attached 4 discussions (Exercises 1, 2, 3, and 4). All need to be done on separate pages, with this book as the reference.

Edmonds, W. A., & Kennedy, T. D. (2017). An applied guide to research designs: Quantitative, qualitative, and mixed methods (2nd ed.). Thousand Oaks, CA: Sage.

I also copied the chapter readings so you can have them, because this professor wrote this book and he knows if we are messing around with the content.

Exercise 1 – Research Design Validity

Discuss the importance of validity and research design.

Next, choose one type of validity (internal, external, construct, or statistical conclusion) and discuss its relevance to experimental, quasi-experimental, and non-experimental research.

Exercise 2 – Comparison Groups

Comparison groups are one of the important elements of scientific control in a research design.

Choose one type of comparison group from the list provided in the book and expand upon how the inclusion of this type of comparison group would improve the overall validity of the findings.

Exercise 3 – Control Techniques

Control is an important element in any type of research.

Considering experimental research, come up with a hypothetical research scenario and apply each of the five types of control to the scenario. Use specific examples to illustrate your point.

Exercise 4 – Establishing Cause and Effect

What are the major differences between experimental, quasi-experimental, and non-experimental research?

Discuss the three major conditions that must be met to establish cause and effect (be sure to review your text for further information). Provide a typical experimental “weakness” that wouldn’t allow a researcher to determine cause and effect.

Chapter 1 A Primer of the Scientific Method and Relevant Components

The primary objective of this book is to help researchers understand and select appropriate designs for their investigations within the field, lab, or virtual environment. Lacking a proper conceptualization of a research design makes it difficult to apply an appropriate design based on the research question(s) or stated hypotheses. Implementing a flawed or inappropriate design will unequivocally lead to spurious, meaningless, or invalid results. Again, the concept of validity cannot be emphasized enough when conducting research. Validity maintains many facets (e.g., statistical validity or validity pertaining to psychometric properties of instrumentation), operates on a continuum, and deserves equal attention at each level of the research process. Aspects of validity are discussed later in this chapter. Nonetheless, the research question, hypothesis, objective, or aim is the primary step for the selection of a research design.

The purpose of a research design is to provide a conceptual framework that will allow the researcher to answer specific research questions while using sound principles of scientific inquiry. The concept behind research designs is intuitively straightforward, but applying these designs in real-life situations can be complex. More specifically, researchers face the challenge of (a) manipulating (or exploring) the social systems of interest, (b) using measurement tools (or data collection techniques) that maintain adequate levels of validity and reliability, and (c) controlling the interrelationship between multiple variables or indicating emerging themes that can lead to error in the form of confounding effects in the results. Therefore, utilizing and following the tenets of a sound research design is one of the most fundamental aspects of the scientific method. Put simply, the research design is the structure of investigation, conceived so as to obtain the “answer” to research questions or hypotheses.

The Scientific Method

All researchers who attempt to formulate conclusions from a particular path of inquiry use aspects of the scientific method. The presentation of the scientific method and how it is interpreted can vary from field to field and method (qualitative) to method (quantitative), but the general premise is not altered. Although there are many ways or avenues to “knowing,” such as sources from authorities or basic common sense, the sound application of the scientific method allows researchers to reveal valid findings based on a series of systematic steps. Within the social sciences, the general steps include the following: (a) state the problem, (b) formulate the hypothesis, (c) design the experiment, (d) make observations, (e) interpret data, (f) draw conclusions, and (g) accept or reject the hypothesis. All research in quantitative methods, from experimental to nonexperimental, should employ the steps of the scientific method in an attempt to produce reliable and valid results.

The scientific method can be likened to an association of techniques rather than an exact formula; therefore, we expand the steps as a means to be more specific and relevant for research in education and the social sciences. As seen in Figure 1.1, these steps include the following: (a) identify a research problem, (b) establish the theoretical framework, (c) indicate the purpose and research questions (or hypotheses), (d) develop the methodology, (e) collect the data, (f) analyze and interpret the data, and (g) report the results. This book targets the critical component of the scientific method, referred to in Figure 1.1 as Design the Study, which is the point in the process when the appropriate research design is selected. We do not focus on prior aspects of the scientific method or any steps that come after the Design the Study step, including procedures for conducting literature reviews, developing research questions, or discussions on the nature of knowledge, epistemology, ontology, and worldviews. Specifically, this book focuses on the conceptualization, selection, and application of common research designs in the field of education and the social and behavioral sciences.

Again, although the general premise is the same, the scientific method is known to slightly vary from each field of inquiry (and type of method). The technique presented here may not exactly follow the logic required for research using qualitative methods; however, the conceptualization of research designs remains the same. We refer the reader to Jaccard and Jacoby (2010) for a review on the various scientific approaches associated with qualitative methods, such as emergent- and discovery-oriented frameworks.

Figure 1.1 The Scientific Method

Validity and Research Designs

The overarching goal of research is to reach valid outcomes based upon the appropriate application of the scientific method.

Independent and Dependent Variables

In simple terms, the independent variable (IV) is the variable that is manipulated (i.e., controlled) by the researcher as a means to test its impact on the dependent variable, otherwise known as the treatment effect. In the classical experimental study, the IV is the treatment, program, or intervention. For example, in a psychology-based study, the IV can be a cognitive-behavioral intervention; the intervention is manipulated by the researcher, who controls the frequency and intensity of the therapy on the subject. In a pharmaceutical study, the IV would typically be a treatment pill, and in agriculture the treatment often is fertilizer. In regard to experimental research, the IVs are always manipulated (controlled) based on the appropriate theoretical tenets that posit the association between the IV and the dependent variable.

Statistical software packages (e.g., SPSS) refer to the IV differently. For instance, the IV for the analysis of variance (ANOVA) in SPSS is the “breakdown” variable and is called a factor. The IV is represented as levels in the analysis (i.e., the treatment group is Level 1, and the control group is Level 2). For nonexperimental research that uses regression analysis, the IV is referred to as the predictor variable. In research that applies control in the form of statistical procedures to variables that were not or cannot be manipulated, the IVs are sometimes referred to as quasi- or alternate independent variables. These variables are typically demographic variables, such as gender, ethnicity, or socioeconomic status. As a reminder, in nonexperimental research the IV (or predictor) is not manipulated whether it is a categorical variable such as hair color or a continuous variable such as intelligence. The only form of control that is exhibited on these types of variables is that of statistical procedures. Manipulation and elimination do not apply (see types of control later in the chapter).

The dependent variable (DV) is simply the outcome variable, and its variability is a function of the IV’s impact on it (i.e., the treatment effect). For example, what is the impact of the cognitive-behavioral intervention on psychological well-being? In this research question, the DV is psychological well-being. In nonexperimental research, the IVs are not manipulated; the IVs are referred to as predictors and the DVs as criterion variables. During the development of research questions, it is critical to first define the DV conceptually, then define it operationally.

A conceptual definition is a critical element to the research process and involves scientifically defining the construct so it can be systematically measured. The conceptual definition is considered to be the (scientific) textbook definition. The construct must then be operationally defined to model the conceptual definition.

An operational definition is the actual method, tool, or technique that indicates how the construct will be measured (see Figure 1.2).

Consider the following example research question: What is the relationship between Emotional Intelligence and conventional Academic Performance?

Figure 1.2 Conceptual and Operational Definitions

Internal Validity

Internal validity is the extent to which the outcome was based on the independent variable (i.e., the treatment), as opposed to extraneous or unaccounted-for variables. Specifically, internal validity has to do with causal inferences—hence, the reason why it does not apply to nonexperimental research. The goal of nonexperimental research is to describe phenomena or to explain or predict the relationship between variables, not to infer causation (although there are circumstances when cause and effect can be inferred from nonexperimental research, and this is discussed later in this book). The identification of any explanation that could be responsible for an outcome (effect) outside of the independent variable (cause) is considered to be a threat. The most common threats to internal validity seen in education and the social and behavioral sciences are detailed in Table 1.1. It should be noted that many texts do not identify sequencing effects in the common lists of threats; however, it is included here, as it is a primary threat in repeated-measures approaches.

Construct Validity

Construct validity refers to the extent a generalization can be made from the operationalization (i.e., the scientific measurement) of the theoretical construct back to the conceptual basis responsible for the change in the outcome. Again, although the threats to construct validity listed in Table 1.3 are defined to imply issues regarding cause-effect relations, the premise of construct validity should apply to all types of research. Some authors categorize some of these threats as social threats to internal validity, and others simply categorize some of the threats listed in Table 1.3 as threats to internal validity. The categorization of these threats can be debated, but the premise of the threats to validity cannot be argued (i.e., a violation of construct validity affects the overall validity of the study in the same way as a violation of internal validity).

Statistical Conclusion Validity

Statistical conclusion validity is the extent to which the statistical covariation (relationship) between the treatment and the outcome is accurate. Specifically, statistical conclusion validity has to do with the ability to detect the relationship between the treatment and the outcome, as well as to determine the strength of that relationship. The most notable threats to statistical conclusion validity are outlined in Table 1.4. Violating a threat to statistical conclusion validity will typically result in the overestimation or underestimation of the relationship between the treatment and outcome in experimental research. A violation can also result in the overestimation or underestimation of the explained or predicted relationships between variables as seen in nonexperimental research.

Design Logic

The overarching objective of a research design is to provide a framework from which specific research questions or hypotheses can be answered while using the scientific method. The concept of a research design and its structure is, at face value, rather simplistic. However, complexities arise when researchers apply research designs within social science paradigms. These include, but are not limited to, logistical issues, lack of control over certain variables, psychometric issues, and theoretical frameworks that are not well developed. In addition, with regard to statistical conclusion validity, a researcher can apply sound principles of scientific inquiry while applying an appropriate research design but may compromise the findings with inappropriate data collection strategies, faulty or “bad” data, or misdirected statistical analyses. Shadish and colleagues (2002) emphasized the importance of structural design features and that researchers should focus on the theory of design logic as the most important feature in determining valid outcomes (or testing causal propositions). The logic of research designs is ultimately embedded within the scientific method, and applying the principles of sound scientific inquiry within this phase is of the utmost importance and the primary focus of this guide.

Control

Control is an important element to securing the validity of research designs within quantitative methods (i.e., experimental, quasi-experimental, and nonexperimental research). However, within qualitative methods, behavior is generally studied as it occurs naturally with no manipulation or control. Control refers to the concept of holding variables constant or systematically varying the conditions of variables based on theoretical considerations as a means to minimize the influence of unwanted variables (i.e., extraneous variables). Control can be applied actively within quantitative methods through (a) manipulation, (b) elimination, (c) inclusion, (d) group or condition assignment, or (e) statistical procedures.

Manipulation.

Manipulation is applied by manipulating (i.e., controlling) the independent variable(s). For example, a researcher can manipulate a behavioral intervention by systematically applying and removing the intervention or by controlling the frequency and duration of the application (see section on independent variables).

Elimination.

Elimination is conducted when a researcher holds a variable constant (i.e., converts it to a constant). If, for example, a researcher ensures the temperature in a lab is set exactly to 76° Fahrenheit for both conditions in a biofeedback study, then the variable of temperature is eliminated as a factor because it is held constant.

Inclusion.

Inclusion refers to the addition of an extraneous variable into the design to test its effect on the outcome (i.e., dependent variable). For example, a researcher can include both males and females in a factorial design to examine the independent effects gender has on the outcome. Inclusion can also refer to the addition of a control or comparison group within the research design.

Group assignment.

Group assignment is another major form of control (see more on group and condition assignments later). For the between-subjects approach, a researcher can exercise control through random assignment, using a matching technique, or applying a cutoff score as a means to assign participants to conditions. For the repeated-measures approach, control is exhibited when the researcher employs the technique of counterbalancing to variably expose each group or individual to all the levels of the independent variable.

Statistical procedures.

Statistical procedures are exhibited on variables, for example, by systematically deleting, combining, or not including cases and/or variables (i.e., removing outliers) within the analysis. This is part of the data-screening process as well. As illustrated in Table 1.5, all of the major forms of control can be applied in the application of designs for experimental and quasi-experimental research. The only form of control that can be applied to nonexperimental research is statistical control.
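To make the data-screening form of statistical control concrete, here is a minimal Python sketch (the data and the ±3 standard deviation rule are hypothetical conventions for illustration, not a prescription from the text):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posttest scores; three extreme values stand in for outliers.
scores = rng.normal(loc=50, scale=10, size=200)
scores[:3] = [5.0, 120.0, 140.0]

# One common screening rule: flag cases beyond +/- 3 standard deviations.
z = (scores - scores.mean()) / scores.std(ddof=1)
screened = scores[np.abs(z) <= 3]

print(f"Removed {scores.size - screened.size} of {scores.size} cases")
```

Deleting such cases is control exercised entirely at the analysis stage, which is why it remains the one form of control available in nonexperimental research.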

Comparison and Control Groups

The group that does not receive the actual treatment, or intervention, is typically designated as the control group. Control groups fall under the group or condition assignment aspect of control. Control groups are comparison groups and are primarily used to address threats to internal validity such as history, maturation, selection, and testing. A comparison group refers to the group or groups that are not part of the primary focus of the investigation but allow the researcher to draw certain conclusions and strengthen aspects of internal validity. There are several distinctions and variations of the control group that should be clarified.

· Control group. The control group, also known as the no-contact control, receives no treatment and no interaction.

· Attention control group. The attention control group, also known as the attention-placebo, receives attention in the form of a pseudo-intervention to control for reactivity to assessment (i.e., the participant’s awareness of being studied may influence the outcome).

· Nonrandomly assigned control group. The nonrandomly assigned control is used when a no-treatment control group cannot be created through random assignment.

· Wait-list control group. The wait-list control group is withheld from the treatment for a certain period of time, then the treatment is provided. The time in which the treatment is provided is based on theoretical tenets and on the pretest and posttest assessment of the original treatment group.

· Historical control group. A historical control group is chosen from a group of participants who were observed at some time in the past or for whom data are available through archival records; these are sometimes referred to as cohort controls (i.e., a homogeneous successive group) and are useful in quasi-experimental research.

Sampling Strategies

A major element to the logic of design extends to sampling strategies. When developing quantitative, qualitative, and mixed methods studies, it is important to identify the individuals (or extant databases) from whom you plan to collect data. To start, the unit of analysis must be indicated. The unit of analysis is the level or distinction of an entity that will be the focus of the study. Most commonly, in social science research, the unit of analysis is at the individual or group level, but it can also be at the programmatic level (e.g., institution or state level).

There are instances when researchers identify units nested within an aggregated group (e.g., a portion of students within a classroom) and refer to this as nested designs or models. It should be noted that examining nested units is not a unique design, but rather a form of a sampling strategy, and the relevant aspects of statistical conclusion validity should be accounted for (e.g., independence assumptions). After identifying the unit, the next step is to identify the population (assuming the individual or group is the unit of analysis), which is the group of individuals who share similar characteristics (e.g., all astronauts). Logistically, it is impossible in most circumstances to collect data from an entire population; therefore, as illustrated in Figure 1.4, a sample (or subset) from the population is identified (e.g., astronauts who have completed a minimum of four human space-flight missions and work for NASA).

The goal often, but not always, is to eventually generalize the findings to the entire population. There are two major types of sampling strategies: probability and nonprobability sampling. In experimental, quasi-experimental, and nonexperimental (survey and observational) research, the focus should be on probability sampling (identifying and selecting individuals who are considered representative of the population). Many researchers also suggest that some form of probability sampling must be employed for observational (correlational) approaches (predictive designs); otherwise the statistical outcomes cannot be generalized. When it is not logistically possible to use probability sampling, or, as seen in qualitative methods, not necessary, some researchers use nonprobability sampling techniques (i.e., the researcher selects participants on a specific criterion and/or based on availability). The following list includes the major types of probability and nonprobability sampling techniques.

Probability Sampling Techniques

· Simple random sampling. Every individual within the population has an equal chance of being selected.

· Cluster sampling. Also known as area sampling, this allows the researcher to divide the population into clusters (based on regions) and then randomly select from the clusters.

· Stratified sampling. The researcher divides the population into homogeneous subgroups (e.g., based on age) and then randomly selects participants from each subgroup.

· Systematic sampling. Once the size of the sample is identified, the researcher selects every nth individual (e.g., every third person on the list of participants is selected) until the desired sample size is fulfilled.

· Multistage sampling. The researcher combines any of the probability sampling techniques as a means to randomly select individuals from the population.
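To make the contrast among these techniques concrete, here is a minimal Python sketch (the population size, strata, and sample sizes are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical population of 1,000 individuals with a stratum label (age band).
population = np.arange(1000)
strata = rng.choice(["18-29", "30-49", "50+"], size=1000)

# Simple random sampling: every individual has an equal chance of selection.
srs = rng.choice(population, size=50, replace=False)

# Systematic sampling: select every nth individual after a random start.
n = len(population) // 50  # here, every 20th individual
start = rng.integers(n)
systematic = population[start::n][:50]

# Stratified sampling: randomly select from each homogeneous subgroup.
stratified = np.concatenate([
    rng.choice(population[strata == s], size=10, replace=False)
    for s in np.unique(strata)
])
```

Cluster and multistage sampling follow the same logic, except that the random draws are made over groups (e.g., regions or schools) before, or instead of, individuals.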

Nonprobability Sampling Techniques

· Convenience sampling. Sometimes referred to as haphazard or accidental sampling, the investigator selects individuals because they are available and willing to participate.

· Purposive sampling. The researcher selects individuals to participate based on a specific need or purpose (i.e., based on the research objective, design, and target population); this is most commonly used for qualitative methods (see Patton, 2002). The most common form of purposeful sampling is criterion sampling (i.e., seeking participants who meet a specific criterion). Variations of purposive sampling include theory-guided, snowball, expert, and heterogeneity sampling. Theoretical sampling is a type of purposive sampling used in grounded-theory approaches. We refer the reader to Palinkas et al. (2014) for a review of recommendations on how to combine various sampling strategies for the qualitative and mixed methods.

The reader is referred to the following book for an in-depth review of a topic related to sampling strategies for quantitative and qualitative methods:

· Levy, P. S., & Lemeshow, S. (2009). Sampling of populations: Methods and applications (4th ed.). New York, NY: John Wiley & Sons.

Now that we have covered a majority of the relevant aspects of research design, which is the “Design the Study” phase of the scientific method, we present some steps that will help researchers select the most appropriate design. In the later chapters, we present a multitude of research designs used in quantitative, qualitative, and mixed methods. Therefore, it is important to review and understand the applications of these designs while regularly returning to this chapter to review the critical elements of design (e.g., control and the types of validity). Let’s now examine the role of the research question.

Research Questions

Simply put, the primary research question sets the foundation and drives the decision of the application of the most appropriate research design. However, there are several terms related to research questions that should be distinguished. First, in general, studies will include an overarching observation deemed worthy of research. The “observation” is a general statement regarding the area of interest and identifies the area of need or concern.

Based on the initial observation, specific variables lead the researchers to the appropriate review of the literature and a theoretical framework is typically established. The purpose statement is then used to clarify the focus of the study, and finally, the primary research question ensues. Research studies can also include hypotheses or research objectives. Many qualitative studies include research aims as opposed to research questions. In quantitative methods (this includes mixed methods), the research question (hypotheses and objectives) determines (a) the population (and sample) to be investigated, (b) the context, (c) the variables to be operationalized, and (d) the research design to be employed.

Types of Inquiry

There are several ways to form a testable research inquiry. For qualitative methods, these can be posed as research questions, aims, or objectives.

Part I Quantitative Methods for Experimental and Quasi-Experimental Research

Part I includes four popular approaches to the quantitative method (experimental and quasi-experimental only), followed by some of the associated basic designs (accompanied by brief descriptions of published studies that used the design). Visit the companion website at study.sagepub.com/edmonds2e to access valuable instructor and student resources. These resources include PowerPoint slides, discussion questions, class activities, SAGE journal articles, web resources, and online data sets.

Figure I.1 Quantitative Method Flowchart

Note: Quantitative methods for experimental and quasi-experimental research are shown here, followed by the approach and then the design.

Research in quantitative methods essentially refers to the application of the systematic steps of the scientific method, while using quantitative properties (i.e., numerical systems) to research the relationships or effects of specific variables. Measurement is the critical component of the quantitative method. Measurement reveals and illustrates the relationship between quantitatively derived variables. Variables within quantitative methods must be, first, conceptually defined (i.e., the scientific definition), then operationalized (i.e., determine the appropriate measurement tool based on the conceptual definition). Research in quantitative methods is typically referred to as a deductive process and iterative in nature. That is, based on the findings, a theory is supported (or not), expanded, or refined and further tested.

Researchers must employ the following steps when determining the appropriate quantitative research design. First, a measurable or testable research question (or hypothesis) must be formulated. The question must maintain the following qualities: (a) precision, (b) viability, and (c) relevance. The question must be precise and well formulated. The more precise, the easier it is to appropriately operationalize the variables of interest. The question must be viable in that it is logistically feasible or plausible to collect data on the variable(s) of interest. The question must also be relevant so that the result of the findings will maintain an appropriate level of practical and scientific meaning. The second step includes choosing the appropriate design based on the primary research question, the variables of interest, and logistical considerations. The researcher must also determine if randomization to conditions is possible or plausible. In addition, decisions must be made about how and where the data will be collected. The design will assist in determining when the data will be collected. The unit of analysis (i.e., individual, group, or program level), population, sample, and sampling procedures should be identified in this step. Third, the variables must be operationalized. And last, the data are collected following the format of the framework provided by the research design of choice.

Experimental Research

Experimental research (sometimes referred to as randomized experiments) is considered to be the most powerful type of research in determining causation among variables. Cook and Campbell (1979) presented three conditions that must be met in order to establish cause and effect:

1. Covariation (the change in the cause must be related to the effect)

2. Temporal precedence (the cause must precede the effect)

3. No plausible alternative explanations (the cause must be the only explanation for the effect)

The essential features of experimental research are the sound application of the elements of control: (a) manipulation, (b) elimination, (c) inclusion, (d) group or condition assignment, or (e) statistical procedures. Random assignment (not to be confused with random selection) of participants to conditions (or random assignment of conditions to participants [counterbalancing] as seen in repeated-measures approaches) is a critical step, which allows for increased control (improved internal validity) and limits the impact of the confounding effects of variables that are not being studied.

The random assignment to each group (condition) theoretically ensures that the groups are “probabilistically” equivalent (controlling for selection bias), and any differences observed in the pretests (if collected) are considered due to chance. Therefore, if all threats to internal, external, construct, and statistical conclusion validity were secured at “adequate” levels (i.e., all plausible alternative explanations are accounted for), the differences observed in the posttest measures can be attributed fully to the experimental treatment (i.e., cause and effect can be established). Conceptually, a causal effect is defined as a comparison of outcomes derived from treatment and control conditions on a common set of units (e.g., school, person).
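A minimal sketch of random assignment (hypothetical participant pool; note that this is distinct from random selection, which happens at the sampling stage):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical pool of 40 participants already *selected* into the sample.
participants = np.arange(40)

# Random assignment: shuffle the pool, then split it into two conditions.
shuffled = rng.permutation(participants)
treatment, control = shuffled[:20], shuffled[20:]

# With adequate group sizes, assignment renders the conditions
# probabilistically equivalent; any pretest differences are due to chance.
```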

The strength of experimental research rests in the reduction of threats to internal validity. Many threats are controlled for through the application of random assignment of participants to conditions. Random selection, on the other hand, is related to sampling procedures and is a major factor in establishing external validity (i.e., generalizability of results). Randomly selecting a sample from a population would be conducted so that the sample would better represent the population. However, Lee and Rubin (2015) presented a statistical approach that allows researchers to draw data from existing data sets from experimental research and examine subgroups (post hoc subgroup analysis). Nonetheless, random assignment is related to design, and random selection is related to sampling procedures. Shadish, Cook, and Campbell (2002) introduced the term generalized causal inference. They posit that if a researcher follows the appropriate tenets of experimental design logic (e.g., includes the appropriate number of subjects, uses random selection and random assignment) and controls for threats of all types of validity (including test validity), then valid causal inferences can be determined along with the ability to generalize the causal link.

Chapter 2 Between-Subjects Approach

The between-subjects approach, also known as a multiple-group approach, allows a researcher to compare the effects of two or more groups on single or multiple dependent variables (outcome variables). With a minimum of two groups, the participants in each group will only be exposed to one condition (one level of the independent variable), with no crossover between conditions. An advantage of having multiple groups is that it allows for the (a) random assignment to different conditions (experimental research) and (b) comparison of different treatments. If the design includes two or more dependent variables, it can be referred to as a multivariate approach, and when the design includes one dependent variable, it is classified as univariate.

Pretest and Posttest Designs

A common application to experimental and quasi-experimental research is the pretest and posttest between-subjects approach, also referred to as an analysis of covariance design (i.e., the pretest measure is used as the covariate in the analyses because the pretest should be highly correlated with the posttest). The 1-factor pretest and posttest control group design is one of the most common between-subjects approaches, with many variations (one factor representing one independent variable; sometimes referred to as a single-factor randomized-group design). This basic multiple-group design can include a control group and is designed to have multiple measures between and within groups. Although there is a within-subject component, the emphasis is on the between-subject variance. Including pretest measures allows the researcher to test for group equivalency (i.e., homogeneity between groups) and provides a baseline against which to compare the treatment effects, which is the within-subject component of the design (i.e., the pretest is designated as the covariate in order to assess the variance [distance between each set of data points] between the pretest and posttest measures).
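A minimal sketch of the analysis-of-covariance logic behind this design (simulated data; statsmodels is one of several tools that could be used):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)

# Simulated pretest/posttest scores for two randomly assigned groups.
n = 60
group = np.repeat(["treatment", "control"], n // 2)
pre = rng.normal(50, 10, n)
post = pre + 5 * (group == "treatment") + rng.normal(0, 5, n)
df = pd.DataFrame({"group": group, "pre": pre, "post": post})

# ANCOVA: the pretest enters as the covariate; the group coefficient
# estimates the treatment effect adjusted for baseline standing.
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(model.params)
```

Because the pretest is highly correlated with the posttest, using it as a covariate removes baseline variance from the error term and sharpens the between-group comparison.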

There is no set rule that determines the number of observations that should be made on the dependent variable. For example, in a basic pretest and posttest control group design, an observation is taken once prior to the treatment and once after the treatment. However, based on theoretical considerations, the investigator can take multiple posttest treatment measures by including a time-series component. Depending on the research logistics, groups can be randomly assigned or matched, then randomly assigned to meet the criteria for experimental research, or groups can be nonrandomly assigned to conditions (quasi-experimental research). With quasi-experimental research, the limitations of the study significantly increase as defined by the threats to internal validity discussed earlier.

k-Factor Designs

The between-subjects approach can include more than one treatment (factor) or intervention (i.e., the independent variable) and does not always have to include a control group. We designate this design as the k-factor design, with or without a control group. Shadish et al. (2002) refer to this design as an alternative- or multiple-treatment design. We prefer the k-factor design as a means to clearly distinguish exactly how many factors are present in the design (i.e., the k represents the number of factors [independent variables]). To clarify, the treatments in a 3-factor model (k = 3), for example, would be designated as XA, XB, and XC (each letter of the alphabet representing a factor) within the design structure. The within-subjects k-factor design is referred to as the crossover design and is discussed in more detail later in this book under repeated-measures approaches.

A between-subjects k-factor design should be used when a researcher wants to examine the effectiveness of more than one type of treatment and a true control is not feasible. Within educational settings, a control group is sometimes not accessible, or there are times when a university’s Institutional Review Board considers the withholding of treatment from specific populations as unethical. Furthermore, some psychologists and educators believe that using another treatment (intervention) as a comparison group will yield more meaningful results, particularly when the types of interventions being studied have a history of proven success; therefore, a k-factor design is the obvious choice. We present a variety of examples of 2-, 3-, and 4-factor pretest and posttest designs, as well as posttest-only designs with and without control groups.

Most common threats to internal validity are related, but not limited, to these designs:

· Experimental. Maturation, Testing, Attrition, History, and Instrumentation

· Quasi-Experimental. Maturation, Testing, Instrumentation, Attrition, History, and Selection Bias

We refer the reader to the following article and book for full explanations regarding threats to validity, grouping, and research designs:

· Shadish, W. R., & Cook, T. D. (2009). The renaissance of field experimentation in evaluating interventions. Annual Review of Psychology, 60, 607–629.

· Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Diagram 2.1 Pretest and Posttest Control Group Design

Note: In regard to design notations, a dashed line (- – -) would separate Groups 1 and 2 in the design structure if the participants were not randomly assigned to conditions, which indicates quasi-experimental research.

Example for Diagram 2.1

Chao, P., Bryan, T., Burstein, K., & Ergul, C. (2006). Family-centered intervention for young children at-risk for language and behavior problems. Early Childhood Education Journal, 34(2), 147–153.

Chapter 3 Regression-Discontinuity Approach

The regression-discontinuity (RD) approach is often referred to as an RD design. RD approaches maintain the same design structure as any basic between-subjects pretest and posttest design. The major differences for the RD approach are (a) the method by which research participants are assigned to conditions and (b) the statistical analyses used to test the effects. Specifically, the researcher applies the RD approach as a means of assigning participants to conditions within the design structure by using a cutoff score (criterion) on a predetermined quantitative measure (usually the dependent variable, but not always). Theoretical and logistical considerations are used to determine the cutoff criterion. The cutoff criterion is considered an advantage over typical random or nonrandom assignment approaches as a means to target “needy” participants and assign them to the actual program or treatment condition.

The most basic design used in RD approaches is the two-group pretest–posttest control group design. However, most designs designated as between-subject approaches can use an RD approach as a method of assignment to conditions and subsequent regression analysis. RD approaches can also be applied using data from extant databases (e.g., Luytena, Tymms, & Jones, 2009) as a means to infer causality without designing a true randomized experiment (see also Lesik, 2006, 2008). As seen in Figure 3.1, the cutoff criterion was 50 (based on a composite rating of 38 to 62). Those who scored below 50 were assigned to the control group, and those who scored above were assigned to the treatment group. As the figure shows, once the posttest scores were collected, a regression line was applied to the model to analyze the pre–post score relationship (i.e., a treatment effect is determined by assessing the degree of change in the regression line in observed and predicted pre–post scores for those who received treatment compared to those who did not).
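A minimal sketch of that analysis (simulated data mirroring the Figure 3.1 setup; a common-slope linear model is the simplest RD variant):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Hypothetical composite ratings between 38 and 62 with a cutoff of 50;
# per Figure 3.1, scores at or above the cutoff enter the treatment group.
pre = rng.uniform(38, 62, 200)
treated = (pre >= 50).astype(int)

# Simulate posttests with a 4-point jump (treatment effect) at the cutoff.
post = 10 + 0.8 * pre + 4 * treated + rng.normal(0, 2, 200)
df = pd.DataFrame({"pre_c": pre - 50, "treated": treated, "post": post})

# Basic RD analysis: regress the posttest on the centered assignment score
# plus a treatment indicator; the indicator's coefficient estimates the
# discontinuity in the regression line at the cutoff.
model = smf.ols("post ~ pre_c + treated", data=df).fit()
print(model.params["treated"])
```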

Some researchers argue that the RD approach does not compromise internal validity to the extent that the findings would not be robust to violations of (statistical) assumptions. Typically, an RD approach requires much larger samples as a means to achieve acceptable levels of power (see statistical conclusion validity). We present two examples of studies that employed RD approaches: one that implemented an intervention, and one that used observational data. See Shadish, Cook, and Campbell (2002) for an in-depth discussion of issues related to internal validity for RD approaches, as well as methods for classifying RD approaches as experimental research, quasi-experimental research, and fuzzy regression discontinuity (i.e., assigning participants to conditions in violation of the designated cutoff score).

Figure 3.1 Sample of a Cutoff Score

Most common threats to internal validity are related, but not limited, to these designs:

· Experimental. History, Maturation, and Instrumentation

· Quasi-Experimental. History, Maturation, Instrumentation, and Selection Bias

We refer the reader to the following articles and book chapter for full explanations regarding RD approaches:

· Imbens, G. W., & Lemieux, T. (2008). Regression discontinuity designs: A guide to practice. Journal of Econometrics, 142, 615–635.

· Trochim, W. (2001). Regression-discontinuity design. In N. J. Smelser, J. D. Wright, & P. B. Baltes (Eds.), International encyclopedia of the social and behavioral sciences (Vol. 19, pp. 12940–12945). North-Holland, Amsterdam: Pergamon.

· Trochim, W., & Cappelleri, J. C. (1992). Cutoff assignment strategies for enhancing randomized clinical trials. Controlled Clinical Trials, 13, 190–212.

Diagram 3.1 Regression-Discontinuity Pretest–Posttest Control Group Design1

Note: OA refers to the preassignment measure, and C refers to the cutoff score.

Example for Diagram 3.1

Bryant, D. P., Bryant, B. R., Gersten, R., Scammacca, N., & Chavez, M. M. (2008). Mathematic intervention for first- and second-grade students with mathematics difficulties: The effects of tier 2 intervention delivered at booster lessons. Remedial and Special Education, 29(1), 20–31.

Chapter 4 Within-Subjects Approach

Major challenges when conducting research are often related to (a) access to participants and (b) an inability to randomly assign the participants to conditions. With these limitations in mind, researchers often employ a within-subjects approach. Although the pretest and posttest designs of between-subjects approaches include a within-subject component, the objective is not necessarily to test the within-subject variances as intended with within-subject approaches. The within-subjects approach to research assumes one group (or subject) serves in each of the treatment conditions.

This approach is referred to as repeated measures because participants are repeatedly measured across each condition. The advantage to this approach is that it can be used with smaller sample sizes with little or no error variance concerning individual differences between conditions (i.e., the same participants exist in each condition). Some disadvantages to this approach are the threats to internal validity, which are primarily maturation and history, and the biggest issue is sequencing effects (i.e., order and carryover effects). More specifically, performance in one treatment condition affects the performance in a second treatment condition. If possible, it is recommended to randomize the order of the treatments (also known as counterbalancing) to control for sequencing effects.
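A minimal sketch of full counterbalancing (a hypothetical three-condition study; with three conditions there are 3! = 6 possible orders):

```python
from itertools import permutations

# Hypothetical repeated-measures study with three treatment conditions.
conditions = ["A", "B", "C"]

# Full counterbalancing: enumerate every ordering of the conditions.
orders = list(permutations(conditions))  # 6 orders

# Cycle participants through the orders so each condition appears in each
# serial position equally often across the sample.
participants = [f"P{i}" for i in range(1, 13)]
schedule = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

for p, order in schedule.items():
    print(p, "->", " then ".join(order))
```

Balancing the serial position of each condition across participants is what spreads order and carryover effects evenly over the levels of the independent variable.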

The simplest within-subjects approach is the one-group with a single pretest and posttest measure (quasi-experimental research 1-factor design), which is presented here. This design can be extended to multiple pretest and posttest measures and is designated as an interrupted time-series (ITS) design and is sometimes called the “time-series” approach. For this guide, we categorize the ITS design under the repeated-measures approach. Traditionally, it was believed that ITS designs should include upward of 100 observations (in regard to statistical power), but many of these designs, when applied, often have anywhere from 10 to 50 observations and are often designated as short ITS designs.

Repeated-Measures Approach

The repeated-measures approach is structured so the researcher can collect numerous measures from the participants. Specifically, designs that include repeated measures allow researchers to gather multiple data points over time to study the rate of change as a function of treatment or time. These types of designs are typically more advanced and require advanced statistical analyses to summarize the data. Most single-case approaches must use repeated-measures approaches. This approach allows the single unit of analysis to serve as its own control to minimize treatment effects. Designs that employ repeated-measures approaches are also useful in longitudinal studies when examining trends or phenomena over a designated period of time. There are several designs that use the repeated-measures approach.

It is important to clarify that designs within the repeated-measures approach are classified as experimental as long as participants are randomly exposed to each condition (i.e., counterbalancing must occur because sequencing effects are the biggest threat to internal validity within this approach). However, there are repeated-measures approaches that are considered nonexperimental research. The ITS design is an example of nonexperimental research and is often referred to as a longitudinal data structure because data are collected at varying time points over days, months, or even years. The application of this approach, as with all approaches, is considered along with theoretical tenets and logistical considerations.

Repeated-measures approaches can also include a between-subjects component as seen in the pretest and multiple-posttest design and the switching-replications design (the emphasis is usually on the between- and within-subject variances, which are sometimes not referred to as repeated measures because technically each group is not exposed to each condition). We present one example of the pretest and multiple-posttest design and two examples of a switching-replication design (one experimental and one quasi-experimental). This design allows the researcher to assess the effects of the treatment on the first group while withholding the treatment to the second group. The second group is designated as a wait-list control group. This design includes only one treatment or factor. We also present a similar design, the crossover design (also known as a changeover design), which includes at a minimum two factors, but it can include more (Ryan, 2007; Shadish, Cook, & Campbell, 2002). Some researchers, as seen in the experimental example presented later, refer to a switching-replications design as a crossover design. To be clear, the switching-replications design includes one treatment and a wait-list control group, while the crossover design includes a minimum of two treatments and no control.

Switching-Replications Design: A Primer

The switching-replications design is one of the most effective experimental designs at controlling for threats to internal validity. Perhaps more importantly, it eliminates the need to deny a potentially beneficial intervention to participants due to random assignment to the control group. The design is straightforward: The treatment is replicated (repeated) with each group, with one group receiving the treatment first. In theory, external validity should also be improved through the use of two independent administrations of the same intervention. Treatment environment and condition always vary somewhat over time (outside of a laboratory setup); thus, having the treatment replicated at a later time (with the potential of many variations in the treatment application, environment, and history) with similar results would demonstrate generalizability.

However, the standard design structure for a switching-replications design should not be chosen if the researcher can use random assignment to conditions, because it is nearly impossible to avoid violating the standard statistical assumptions associated with repeated-measures analysis. Therefore, we propose a variant called the wait-list continuation design when random assignment is available for application. The design includes both a within- and between-subjects component (i.e., a mixed-subjects approach). A wait-list control group is incorporated and does not include the pretest for that condition. In effect, each group serves as both treatment and control at different points in time, which allows for statistical analysis without relying on statistical assumptions to fall into place naturally (e.g., multivariate normality and sphericity). We provide a mock statistical analysis of this design in Appendix E.

Chapter 5 Factorial Designs

An extension of the k-factor design is the factorial design. The simplest factorial design includes, at a minimum, two factors (i.e., independent variables), each with two levels (Kazdin, 2002; Vogt, 2005). Two factors each with two levels is designated as a 2 × 2 factorial design. Factorial designs are denoted by the form s^k: the s represents the number of levels, and the k represents the number of factors (e.g., 2 × 2 is the same as 2^2). Recall that a factor is another term for the independent variable, or treatment, or intervention.

Many k-factor designs can be transformed into factorial designs (based on theoretical and logistical considerations) by partitioning the factors into at least two levels and by subsequently changing the statistical analysis used to examine the data. For example, a researcher is interested in looking at the effects of a math intervention (factor 1) partitioned into two levels (1–visual math, 2–auditory math) and how it differs by gender (factor 2; 1–males, 2–females) on a math competency exam (i.e., dependent variable). Unlike the k-factor design, factorial designs allow for all combinations of the factor levels to be tested on the outcome (i.e., male and female differences for auditory-style teaching compared to male and female differences for visual-style teaching). Thus, factorial designs allow for the examination of both the interaction effect (the influence of one independent variable on the other independent variable) and the main effects (the influence of each independent variable on the outcome).
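A minimal sketch of how main and interaction effects decompose in a 2 × 2 design (the cell means below are invented for the math-intervention example above; a real study would test these contrasts with a two-way ANOVA):

```python
import numpy as np

# Hypothetical cell means for a 2 x 2 design: math intervention
# (visual vs. auditory) crossed with gender (male vs. female).
#                  visual  auditory
means = np.array([[72.0,   65.0],   # males
                  [70.0,   74.0]])  # females

# Main effect of intervention: compare the column (marginal) means.
intervention_main = means.mean(axis=0)  # [visual, auditory]

# Main effect of gender: compare the row (marginal) means.
gender_main = means.mean(axis=1)        # [male, female]

# Interaction: does the intervention effect differ across genders?
effect_for_males = means[0, 0] - means[0, 1]          # +7 favors visual
effect_for_females = means[1, 0] - means[1, 1]        # -4 favors auditory
interaction = effect_for_males - effect_for_females   # nonzero => interaction
```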

We must caution the social and behavioral science researcher not to get overzealous with the application of more complex factorial designs outside of the 2 × 2 design. A general assumption related to the factorial design is that there is no interaction between the factors, but this is typically impossible when including humans as test subjects. The factorial design was originally developed for agricultural and engineering research, where the variables are static (e.g., amount of fertilizer or blade length) and the research does not suffer from the typical threats to internal validity that occur when human participants are the test subjects (e.g., testing, sequencing effects).

The factorial design is not one design; rather, it is considered a family of designs. For example, some research requires that the number of levels for each factor is not the same. The simplest version would be a 2 × 3 design (i.e., one independent variable has two levels and the other has three). Factorial designs can also include three factors (e.g., 2 × 2 × 2 represents three independent variables, each with two levels). Factorial designs can use within-subjects or between-subjects approaches, and they can include pretest and posttest or posttest-only measures (most contain only posttests). The within-subjects approach to factorial designs is set up so there is one group, and each participant serves in each of the treatment conditions. The between-subjects approach allows the researcher to test multiple groups across conditions without exposing each participant to all treatment conditions. This approach requires larger sample sizes, and random assignment is highly recommended to control for differentiation and selection bias.

Another option to the factorial design is the mixed-subjects approach. A mixed-factorial design includes both a within- and between-subjects approach. For instance, a 2 × 3 mixed-factorial design would be constructed so the first factor at two levels is tested as within subjects, and the second factor at three levels would be tested as between subjects. To determine the number of groups (also referred to as cells), the number of levels for each factor can be multiplied (e.g., 2 × 2 = 4 groups; 2 × 3 = 6 groups). The strength of this design is that it allows a researcher to examine the individual and combined effects of the variables. There are many types and variations of factorial designs not illustrated in this reference guide.

We provide three examples of a 2-factor between-subjects factorial design (a pretest and posttest design [2 × 2] and two posttest-only designs [2 × 2 and 3 × 2]) and one example of a 2 × 2 within-subjects factorial design. We also provide two examples of a between-subjects factorial design with three factors (2 × 2 × 2 and 2 × 3 × 2) and one example of a mixed-factorial design (2 × 2 × 2).

Factorial designs that include within-subjects components are also affected by the threats to internal validity listed under the repeated-measures approach (e.g., sequencing effects).

The most common threats to internal validity related to (but not limited to) these designs are:

· Experimental. Maturation, Testing, Diffusion, and Instrumentation

· Quasi-Experimental. Maturation, Testing, Instrumentation, Diffusion, and Selection Bias

We refer the reader to the following article and book for full explanations regarding factorial designs:

· Dasgupta, T., Pillai, N. S., & Rubin, D. B. (2014). Causal inference from 2^K factorial designs using potential outcomes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 77(4), 727–753.

· Ryan, T. P. (2007). Modern experimental design. Hoboken, NJ: Wiley.

Diagram 5.1 2 × 2 Factorial Pretest and Posttest Design (Between-Subjects)

Example for Diagram 5.1

Stern, S. E., Mullennix, J. W., & Wilson, S. J. (2002). Effects of perceived disability on persuasiveness of computer-synthesized speech. Journal of Applied Psychology, 87(2), 411–417.

Research Question (main and interaction effects): What are the effects of perceived disabilities on the persuasiveness of computer-synthesized and normal speech?

Procedures: Participants completed an attitude pretest and were randomly assigned to watch an actor deliver a persuasive speech under one of the following four conditions: (a) disabled using normal speech, (b) nondisabled using normal speech, (c) disabled using computer-synthesized speech, or (d) nondisabled using computer-synthesized speech. Participants then completed an attitude posttest survey.

Chapter 6 Solomon N-Group Design

The Solomon four-group design (Solomon, 1949) was developed specifically to combine the strengths of both types of between-subjects approaches (the posttest-only design and the pretest and posttest design) as a means to minimize the weaknesses associated with using only one type. As a result, most of the major threats to internal validity (e.g., testing) and construct validity (e.g., pretest sensitization) are minimized. The inclusion of a control (or comparison) group in a research design can strengthen the internal validity and the overall validity of the findings. However, as noted earlier, there are trade-offs between between-subjects pretest and posttest control group designs and between-subjects posttest-only control group designs. The Solomon four-group design is an extension of the factorial design and is considered one of the strongest experimental designs, but its application in the social sciences is uncommon. Many investigators believe that the logistical considerations (e.g., time, costs, number of participants, statistical analysis) are too great to overcome when applying this design. Although Solomon's original work did not include a sound statistical analysis for this design, researchers have since offered statistical solutions and recommendations for power analysis (Sawilowsky, Kelley, Blair, & Markman, 1994; Walton Braver & Braver, 1988).

Originally, the Solomon four-group design was developed to include only four groups. Specifically, the four-group design includes one treatment (or factor; k = 1), with Group 1 receiving the treatment with a pretest and posttest, Group 2 receiving the pretest and posttest with no treatment, Group 3 receiving the treatment and only a posttest, and finally Group 4 receiving only the posttest. This allows the researcher to assess the main effects of the treatment and of pretesting, as well as the interaction between the two (i.e., pretest sensitization). However, it has been proposed that the original design can include more than one treatment, thus extending the design to six groups for 2-factor models or eight groups for 3-factor models (i.e., the Solomon N-group design). These designs allow researchers to test the effects of more than one type of treatment intervention against one another. Therefore, the design can be referred to as a Solomon four-, six-, or eight-group design. We present examples of research that used a Solomon four-group design (k = 1), one example of a six-group design (k = 2), and one example of an eight-group design (k = 3).
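A frequently recommended analysis of Solomon four-group data treats the posttest scores as a 2 × 2 (treatment × pretested) between-subjects factorial, where a significant interaction flags pretest sensitization. The sketch below is our own simulated illustration, not the specific procedure from Solomon (1949) or Walton Braver and Braver (1988):

```python
# Simulated Solomon four-group posttest scores analyzed as a 2 x 2
# (treatment x pretested) between-subjects ANOVA. Illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
rows = []
for treatment in (0, 1):
    for pretested in (0, 1):
        # 25 simulated participants per cell; the treatment adds ~5 points.
        for score in rng.normal(50 + 5 * treatment, 10, size=25):
            rows.append({"treatment": treatment, "pretested": pretested,
                         "posttest": score})
df = pd.DataFrame(rows)

# A significant treatment x pretested interaction suggests pretest sensitization.
model = smf.ols("posttest ~ C(treatment) * C(pretested)", data=df).fit()
print(anova_lm(model, typ=2))
```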

The most common threats to internal validity related to (but not limited to) these designs are:

· Experimental. This design controls for all threats to internal validity except for Instrumentation.

· Quasi-Experimental. Instrumentation and Selection Bias

We refer the reader to the following article for full explanations and recommended analyses regarding Solomon four-, six-, and eight-group designs:

· Steyn, R. (2009). Re-designing the Solomon four-group: Can we improve on this exemplary model? Design Principles and Practices: An International Journal, 3(1), 1833–1874.

Diagram 6.1 Solomon Four-Group Design

Note: It is highly recommended that random assignment be used when applying the Solomon N-group designs.

Example for Diagram 6.1

Probst, T. M. (2003). Exploring employee outcomes of organizational restructuring. Group & Organization Management, 28(3), 416–439.

Research Questions

· Main effect: Do job security, job satisfaction, commitment, and physical and mental health decline, and does turnover intention increase, following the announcement and implementation of organizational restructuring? Do individuals who are affected by organizational restructuring report lower levels of job security, less job satisfaction, more negative affective reactions, greater intentions to quit, lower levels of physical and mental health, higher levels of role ambiguity, and higher levels of time pressure than individuals not affected by organizational restructuring?

· Interaction effect: The authors of this study did not explore interaction effects. A 2 × 2 factorial design would allow for the examination of the interactions within a Solomon four-group design. In this study, each independent variable has two levels (treatment and no-treatment; pretest and no-pretest). See the chart that follows for an example of a 2 × 2 factorial design for this study.

Procedures: A stratified random sample of 500 employees from five state agencies going through reorganization was selected. The stratification was based on whether the employee was affected by the reorganization. A total of 313 employees (63% of the sample) participated in the study. The sample was divided into two groups: those affected by the reorganization (n = 147) and those unaffected by the reorganization (n = 166). In addition, all participants were randomly assigned to either a pretest (n = 126) or no-pretest (n = 187) group. Data were collected at two different time points: (a) immediately prior to the workplace reorganization announcement and (b) 6 months following the merger announcement. There were four different groups of participants: (a) pretested and affected by the reorganization, (b) pretested but unaffected by the reorganization, (c) not pretested but affected, and (d) not pretested and unaffected. A survey assessing each of the variables of interest (see Research Questions) was administered prior to the merger announcement and 6 months into the reorganization.

Design: Experimental research using a between-subjects approach and a Solomon four-group design

Recommended Parametric Analysis: 2-way factorial ANOVA or maximum likelihood regression (appropriate descriptive statistics and effect-size calculations should be included)

Chapter 7 Single-Case Approach

The single-case approach is often referred to as the single-participant or single-subject design. Some single-case approaches use more than one participant and are referred to as small-n designs, but the emphasis and unit of analysis remain on the single subject (N = 1), as reporting guidelines are regularly updated and produced (see Tate et al., in preparation). We remain consistent with our terminology and refer to these as single-case approaches, reserving the word design for the specific type of design defined within the approach. A single-case approach is used to demonstrate a form of experimental control with one participant (in some instances, more than one participant). As seen in within-subjects and between-subjects approaches, the major contingencies required to qualify as a “true” experiment are randomization of conditions to participants (i.e., counterbalancing) or random assignment of participants to conditions. In single-case approaches, however, the participant serves as his or her own control as well as in the treatment condition, during which repeated measures are taken. More specifically, each condition is held constant, and the independent variable is systematically withheld and reintroduced at various intervals as a means to study the outcome. The interval between the variable being withheld and reintroduced is based on theoretical and logistical considerations. A rule of thumb may be to consider equal intervals; however, there may be conditions that require washout periods, creating unequal intervals.

As a reminder, the treatment is the same as a factor or intervention, and it is the independent variable. Although there are still debates concerning the number of experimental replications required to determine causation, as well as issues related to power, single-case designs take a unique approach to experimentation. The threats to internal validity associated with the single-case approach are similar to those found in the within-subjects approach (e.g., sequencing effects), primarily because of the issues related to collecting repeated measures. In most cases, this approach meets the critical characteristics of experimental control (see Manolov, Solanas, Bulté, & Onghena, 2010, and Shadish, 2014a, for a review of the robustness and power of randomization tests in A-B-A-B designs).

There are many forms, variations, and names of research designs for single-case approaches. We discuss four of the major designs here, with the understanding that this is not comprehensive coverage of all the designs developed within this approach. The primary goal of the single-case approach is to measure the dependent variable and, at the very minimum, measure it against the presence and absence of the independent variable (treatment or intervention). Therefore, the design logic of a single-case approach starts with the baseline, which is designated as A, followed by the treatment, which is designated as B. See Table 7.1 for the explanation of design notations that are unique to single-case approaches.

The most basic design within this approach is the A-B design (i.e., the dependent variable is measured during the baseline and then again during the treatment). Most single-case approach designs represent some variation and extension of the A-B design. It is important to note that, in order to qualify as an experiment, a researcher would, at a minimum, need to employ an A-B-A design (i.e., this is to establish that there is indeed a functional relationship between the independent and dependent variables). There are many other variations of this design structure such as A-B-A-B, B-A-B, or A-B-C-A (C is used to represent a second treatment or independent variable). Any variation of the A-B design can be employed based solely on theoretical and logistical considerations.
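To see the A-B logic in data form, the sketch below (hypothetical counts and phase boundaries of our own invention) summarizes an A-B-A-B series by phase; a functional relationship appears as the level shifting whenever the treatment is introduced or withdrawn:

```python
# Hypothetical A-B-A-B single-case data summarized by phase.
import pandas as pd

sessions = pd.DataFrame({
    "phase": ["A1"] * 5 + ["B1"] * 5 + ["A2"] * 5 + ["B2"] * 5,
    "behavior": [8, 9, 7, 8, 9,    # A1: baseline
                 4, 3, 4, 2, 3,    # B1: treatment introduced
                 7, 8, 8, 9, 7,    # A2: treatment withdrawn
                 3, 2, 3, 3, 2],   # B2: treatment reintroduced
})
# Phase means shifting down in B and back up in A suggests a functional relation.
print(sessions.groupby("phase", sort=False)["behavior"].agg(["mean", "std"]))
```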

When a researcher wants to study more than one treatment at a time, a multi-element design (also referred to as a multitreatment or alternating-treatments design) can be employed. This design requires rapid shifts between or within treatments to establish experimental control, and it allows an investigator to research two or more treatments (sometimes up to five or six). The third type of design within this approach is the multiple baseline design. While the A-B and multi-element designs require a withdrawal or reversal of conditions, the multiple baseline design requires no withdrawal or reversal (i.e., some treatments have carryover effects, so withdrawal or reversal is not theoretically appropriate). Specifically, two or more baselines are established, and the intervention is introduced at various points (usually across participants), but it is never removed. Most multiple baseline designs include more than one participant, but the design may be used with a single participant by applying the multiple baselines across multiple behaviors (as measured by the dependent variables). As previously noted, many single-case applications include more than one participant; however, each participant is analyzed individually.

Finally, there is a changing criterion design. Similar to the multiple baseline design, the changing criterion design allows for a gradual systematic manipulation of a targeted outcome and does not require a reversal or return to baseline phase as in the A-B design. This design is best applied when the researcher is interested in observing the stepwise increases of the targeted behavior.

We included three examples of the A-B design. Specifically, an A-B-A design, an A-B-A-B design, and an A-B-A-B-C-B-C design are presented. We also introduce one example of a changing criterion design and two multiple baseline designs (a 1-factor and a 2-factor design), which are forms of the basic A-B design.

The reader is referred to Dixon et al. (2009) to learn how to create graphs in Microsoft Excel for designs within the single-case approach (a brief matplotlib alternative is sketched after the reference below). We also refer the reader to Shadish (2014a) for a review of the most recent issues regarding the analysis of the single-case approach, such as modeling trend, modeling error covariances, computing standardized effect-size estimates, and assessing statistical power. In addition, we recommend the following article and book for a comprehensive overview of the single-case approach, specific forms of analysis for this approach, and software designed to analyze data from the family of A-B designs:

· Gast, D. L., & Ledford, J. R. (2014). Single case research methodology: Applications in special education and behavioral sciences (2nd ed.). London, England: Routledge.
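For readers who prefer Python over the Excel workflow Dixon et al. (2009) describe, a minimal matplotlib sketch of an A-B-A-B graph (hypothetical data, phase boundaries of our own choosing) looks like this:

```python
# Hypothetical A-B-A-B graph: dashed vertical lines mark phase changes.
import matplotlib.pyplot as plt

behavior = [8, 9, 7, 8, 9, 4, 3, 4, 2, 3, 7, 8, 8, 9, 7, 3, 2, 3, 3, 2]
plt.plot(range(1, len(behavior) + 1), behavior, marker="o", color="black")
for boundary in (5.5, 10.5, 15.5):          # boundaries between A and B phases
    plt.axvline(boundary, linestyle="--", color="gray")
plt.xlabel("Session")
plt.ylabel("Target behavior (frequency)")
plt.title("A-B-A-B design")
plt.show()
```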
