Review the articles and use the template to demonstrate each of the four experimental designs: reversal, multiple baseline, changing criterion, and alternating treatment. Discuss the strengths and limitations, as well as all forms of validity, for each experimental design.
Experimental Design and Validity
Reversal Design
From the articles in the article bank provided by your instructor, choose one that demonstrates reversal design and complete the following.
APA citation | Full APA citation here. |
Strengths | 1. Strength of reversal design 2. Another strength of reversal design |
Limitations | 1. Limitation of reversal design 2. Another limitation of reversal design |
External Validity | First explain what external validity is. Then explain how external validity was present or absent with support from the article. |
Internal Validity | First explain what internal validity is. Then explain how internal validity was present or absent with support from the article. |
Social Validity | First explain what social validity is. Then explain how social validity was present or absent with support from the article. |
Multiple Baseline Design
From the articles in the article bank provided by your instructor, choose one that demonstrates multiple baseline design and complete the following.
APA citation | Full APA citation here. |
Strengths | 1. Strength of multiple baseline design 2. Another strength of multiple baseline design |
Limitations | 1. Limitation of multiple baseline design 2. Another limitation of multiple baseline design |
External Validity | First explain what external validity is. Then explain how external validity was present or absent with support from the article. |
Internal Validity | First explain what internal validity is. Then explain how internal validity was present or absent with support from the article. |
Social Validity | First explain what social validity is. Then explain how social validity was present or absent with support from the article. |
Changing Criterion Design
From the articles in the article bank provided by your instructor, choose one that demonstrates changing criterion design and complete the following.
APA citation | Full APA citation here. |
Strengths | 1. Strength of changing criterion design 2. Another strength of changing criterion design |
Limitations | 1. Limitation of changing criterion design 2. Another limitation of changing criterion design |
External Validity | First explain what external validity is. Then explain how external validity was present or absent with support from the article. |
Internal Validity | First explain what internal validity is. Then explain how internal validity was present or absent with support from the article. |
Social Validity | First explain what social validity is. Then explain how social validity was present or absent with support from the article. |
Alternating Treatment Design
From the articles in the article bank provided by your instructor, choose one that demonstrates alternating treatment design and complete the following.
APA citation | Full APA citation here. |
Strengths | 1. Strength of alternating treatment design 2. Another strength of alternating treatment design |
Limitations | 1. Limitation of alternating treatment design 2. Another limitation of alternating treatment design |
External Validity | First explain what external validity is. Then explain how external validity was present or absent with support from the article. |
Internal Validity | First explain what internal validity is. Then explain how internal validity was present or absent with support from the article. |
Social Validity | First explain what social validity is. Then explain how social validity was present or absent with support from the article. |
RESEARCH ARTICLE
A systematic review of social-validity assessments in the Journal of Applied Behavior Analysis: 2010–2020
Erin S. Leif | Nadine Kelenc-Gasior | Bradley S. Bloomfield | Brett Furlonger | Russell A. Fox
Faculty of Education, Monash University, Clayton, Victoria, Australia
Correspondence: Erin S. Leif, Faculty of Education, Monash University, 19 Ancora Imparo Way, Clayton VIC 3131, Australia. Email: [email protected]
Editor-in-Chief: John Borrero Handling Editor: Timothy Vollmer
Abstract: We conducted a systematic review of studies published in the Journal of Applied Behavior Analysis between 2010 and 2020 to identify reports of social validity. A total of 160 studies (17.60%) published during this time included a measure of social validity. For each study, we extracted data on (a) the dimensions of social validity, (b) the methods used for collecting social-validity data, (c) the respondents, and (d) when social-validity data were collected. Most social-validity assessments measured the acceptability of intervention procedures and outcomes, with fewer evaluating goals. The most common method for collecting social-validity data was Likert-type rating scales, followed by non-Likert-type questionnaires. In most studies, the direct recipients of the intervention provided feedback on social validity. Social-validity assessment data were often collected at the conclusion of the study. We provide examples of social-validity measurement methods, discuss their strengths and limitations, and provide recommendations for improving the future collection and reporting of social-validity data.
KEYWORDS consumer satisfaction, intervention acceptability, intervention preference, social validity
Social validity is defined as a consumer’s satisfaction with the goals, procedures, and outcomes of intervention programs (Wolf, 1978). Social-validity assessments of behavior-analytic interventions provide participants and relevant stakeholders with the opportunity to give feedback and express their satisfaction with these three dimensions (Wolf, 1978). These assessments may also allow individuals to express their preferences for interventions, which might enhance participation and outcomes (Hanley, 2010). One of the criticisms, however, of published research on behavior-analytic interventions has been the lack of social-validity measurement, as studies have instead predominantly focused on the efficacy and effectiveness of interventions and practices (Callahan et al., 2017; Carr et al., 1999; Ferguson et al., 2019; Huntington et al., 2023). There have been recent calls to improve the collection and reporting of information about the degree to which the direct recipients of behavior-analytic interventions view the procedures used as part of these interventions as acceptable and preferred and the outcomes meaningful (Common & Lane, 2017).
Wolf (1978) noted that the construct of social validity consists of three dimensions: (a) the goals of the intervention, or what behaviors the intervention is intended to change; (b) the procedures used during intervention; and (c) the degree to which intervention effects are meaningful and desirable, including those intended and unpredicted. This conceptualization has been the primary guide for the development of social-validity assessment methods in the behavior-analytic research literature. Social validity may be a critical variable in addressing the research-to-practice gap, as interventions deemed impractical, unacceptable, or harmful may not be adopted or applied in real-world settings (Kazdin, 1977; Kern & Manz, 2004; Leko, 2014; Lloyd & Heubusch, 1996). Assessing the social validity of behavior-analytic interventions may also support the sustainable implementation of evidence-based interventions at a larger scale (Cook et al., 2013; Reimers et al., 1987) and prevent the development and distribution of interventions that are likely to be rejected by consumers and the public (Schwartz & Baer, 1991).

Received: 4 October 2023 | Accepted: 13 May 2024 | DOI: 10.1002/jaba.1092

This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. © 2024 The Author(s). Journal of Applied Behavior Analysis published by Wiley Periodicals LLC on behalf of Society for the Experimental Analysis of Behavior (SEAB).

J Appl Behav Anal. 2024;57:542–559. wileyonlinelibrary.com/journal/jaba
Carr et al. (1999) reviewed research published in the Journal of Applied Behavior Analysis (JABA) from 1968 to 1998 to identify the prevalence of social-validity measures. Two dimensions of social validity were assessed for each study: intervention acceptability and intervention outcomes. On average, during this 31-year period, measures of social validity related to intervention acceptability and outcomes were reported in only 13% of published studies. Carr et al. expressed concerns that failure to report the outcomes of social-validity assessments may prevent researchers and practitioners from identifying the reasons that behavior-analytic interventions may be rejected or discontinued by consumers. Additionally, Carr et al. noted that failure to report the methods used to gather social-validity data from various consumers may prevent the development, refinement, and uptake of these methods.
The methods used by Carr et al. (1999) were replicated and extended by Ferguson et al. (2019), who identified the prevalence and type of social-validity assessments published in JABA between 1999 and 2016. Across this 17-year period, only 12% of studies included a social-validity measure. The social validity of the intervention procedures and outcomes was more likely to be reported than the social validity of intervention goals. The authors noted that most studies used a combination of rating scales, questionnaires, and intervention choice to collect social-validity data. The authors also reported that “other” forms of social-validity measurement were used in 8% of studies, but they did not provide examples of what these types of measurement involved.
Other researchers have explored the prevalence and type of social-validity assessment data published across a range of journals. Snodgrass et al. (2018) systematically reviewed reports of social validity published in six special education journals. All single-case research design studies published in these six journals between 2005 and 2018 were reviewed, with 26.8% (n = 115) reporting results of a social-validity assessment. Of these 115 studies, 28 measured the social validity of the goals, procedures, and outcomes of the intervention. For these 28 studies, questionnaires were the most common method for collecting data (n = 20), the direct recipients of the intervention most often provided data on social validity (n = 19), and most social-validity assessments were administered at or after the intervention concluded (n = 27). However, one limitation of Snodgrass et al. was that the authors limited their assessment of the methods, respondents, and times to only those 28 studies that measured all three dimensions of social validity. Additionally, the authors did not include JABA in their sample of journals.
Most recently, Huntington et al. (2023) assessed social validity across eight behavior-analytic journals between 2010 and 2020, including JABA. Huntington et al. found that 47% of studies included in their review reported a measure of social validity, with a large increase evident in 2019 and 2020. The authors highlighted the need for future research to identify and describe the methods used to collect social-validity data, the participants who provide social-validity data, and the timing of social-validity assessments in behavior-analytic journals. The collection and reporting of these data might provide a clearer picture of how social validity has been measured in studies published in JABA, assist in the evaluation of the quality of the data collected, and provide new insights into how to potentially improve the future assessment of social validity. To this end, our purpose was to systematically identify and appraise social-validity assessments included in studies published in JABA between 2010 and 2020. For the studies included in this review, we sought to identify (a) the dimensions of social validity assessed, (b) the types of methods used to collect social-validity data, (c) the individuals who provided social-validity data (the respondents), and (d) the point at which social-validity assessments were conducted. We provide illustrative examples of different ways to measure social validity and discuss the strengths and potential limitations of different social-validity assessments. Based on these data and examples, we provide recommendations for potentially improving the collection and reporting of social-validity data in behavior-analytic research.
METHOD
A systematic literature review was undertaken to identify studies for inclusion in this report. Figure 1 includes a diagram of the study screening process. Rather than conducting a keyword search of terms related to social validity in various databases, the identification of relevant peer-reviewed studies for inclusion in this review was undertaken by compiling and systematically screening all studies published in JABA from 2010 (Volume 43[1]) to 2020 (Volume 53[4]). All studies were downloaded directly from the journal’s website and independently reviewed. A total of 1,059 studies was published in JABA between 2010 and 2020. The search focused on studies published from 2010 onward to allow us to systematically replicate and extend the procedures described by Carr et al. (1999) and Ferguson et al. (2019) within a more recent 10-year period. Additionally, as the purpose of the current review was to provide a more in-depth analysis of the characteristics of social-validity assessments published in JABA, studies published in other journals were not included in the analysis.
Initial study screening procedure
To be included in the current review, the study needed to include at least one human or nonhuman participant.
The following were excluded during the initial screening process: technical reports, systematic reviews, meta-analyses, brief reviews, book reviews, errata, announcements, surveys, issue information, acknowledgments, and reanalyses of previously published data sets. The methods and results sections of all 1,059 studies were examined to determine which studies fulfilled this inclusion criterion. This resulted in the exclusion of 177 studies that did not include at least one human or nonhuman participant.
Inclusion and exclusion criteria
The remaining 882 studies were reviewed a second time for the presence or absence of at least one measure of social validity. First, the following terms were typed into the electronic search bar of the downloaded PDF version of each study: social validity, social validation, social acceptability, intervention validity, intervention acceptability, consumer satisfaction, satisfaction survey, interview, preference, or choice. If this search returned a result, the study was reviewed to locate any social-validity measure. If this search did not yield any results, the methods, results, and discussion sections of the study were reviewed in full to determine whether a social-validity measure was included. If the study did not include a measure of social validity, it was excluded.
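The second-pass keyword screen described above can be sketched in code. This is an illustration only: the function name is hypothetical, and it assumes each study's full text has already been extracted to a plain string (the authors searched the downloaded PDFs directly).

```python
# Illustrative sketch of the second-pass keyword screen (not the authors' code).
# Assumes each study's text is available as a plain string.

SEARCH_TERMS = [
    "social validity", "social validation", "social acceptability",
    "intervention validity", "intervention acceptability",
    "consumer satisfaction", "satisfaction survey",
    "interview", "preference", "choice",
]

def needs_full_review(article_text: str) -> bool:
    """Return True if any screening term appears in the article text,
    flagging the study for closer inspection of its social-validity measures."""
    text = article_text.lower()
    return any(term in text for term in SEARCH_TERMS)
```

A study whose text matched no term would instead have its methods, results, and discussion sections reviewed in full before a final include/exclude decision.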
A study was included if it reported any qualitative or quantitative data measuring the social significance of the intervention goals, procedures, or outcomes (Wolf, 1978) or if it included a measure of intervention preference (Hanley, 2010). All studies that included one or more measures of social validity and reported the outcomes of the assessment were retained. Of the 882 reviewed studies, 160 studies reported one or more measures of social validity.
Dependent measures
Data were extracted for each of the 160 studies that included a measure of social validity for the following categories (and category variables): (a) the authors, (b) the year of publication, (c) the dimension of social validity measured (goals, procedures, or outcomes), (d) the specific method that was used to collect social-validity data (e.g., Likert-type rating scales, questionnaires, or interviews), (e) the person who provided the social-validity data (e.g., parents, teachers, or participants), and (f) the specific point(s) at which the social-validity data were collected (e.g., before, during, or after intervention). The data collected as part of this study can be found in the Additional Supporting Information in the online version of this article at the publisher’s website.
FIGURE 1 Flow diagram of the study screening process.
Dimensions of social validity
Table 1 provides a definition of each dimension of social validity assessed in the current review. A study was scored as reporting a measure of the social validity of the intervention goals if formal measures were employed to assess consumer acceptance of or agreement with the purpose or purported goals of the intervention and the behaviors targeted for change as part of the intervention. A study was scored as reporting a measure of the social validity of the intervention procedures if formal measures were employed to assess consumer acceptance of, agreement with, or preference for the tactics used to deliver the intervention or to assess the consumer’s willingness to continue with intervention. A study was scored as reporting an assessment of the social validity of the intervention outcomes if formal measures were used to assess consumer satisfaction with, social importance of, or practical significance of the intervention effects.
Social-validity assessment methods
Table 2 provides a definition of each method of social-validity assessment included in the current review. Social-validity assessment methods were defined as the specific procedures used to collect data on measures of each dimension of social validity. Social-validity assessment methods included (a) Likert-type rating scales, (b) non-Likert-type questionnaires, (c) direct observations, (d) intervention preference or choice questions, (e) concurrent-chains intervention preference assessments, or (f) interviews.
TABLE 1 Dimensions of social validity assessed (adapted from Wolf, 1978).

Dimension | Definition | Total number of studies | Percentage |
Intervention goals | Acceptance of or agreement with the purpose or purported goals of the intervention and the behaviors targeted for change (Are the specific behaviors selected for change and the reasons for behavior change important and valued?) | 26 | 16.25% |
Intervention procedures | Acceptance of, agreement with, or preference for the strategies and tactics used to deliver the intervention or willingness to continue with intervention (Are the specific intervention strategies used acceptable and preferred?) | 144 | 90% |
Intervention outcome | Satisfaction with, social importance of, or practical significance of the intervention effects (Are the outcomes associated with the intervention meaningful, including any unexpected outcomes?) | 110 | 68.75% |
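The percentages in Table 1 follow directly from the reported counts, with the 160 included studies as the denominator. A quick arithmetic check:

```python
# Reproduce the Table 1 percentages from the reported counts.
# Denominator: the 160 studies that included a social-validity measure.
included = 160
counts = {"goals": 26, "procedures": 144, "outcomes": 110}

percentages = {dim: round(100 * n / included, 2) for dim, n in counts.items()}
# goals -> 16.25, procedures -> 90.0, outcomes -> 68.75
```

Note that the percentages sum to more than 100% because a single study could assess more than one dimension of social validity.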
TABLE 2 Social-validity assessment methods (adapted from Carter & Wheeler, 2019).

Methods | Definition | Total number of studies | Percentage |
Likert-type rating scales | A scale that consists of a series of statements or items related to the goals of an intervention, intervention procedures, or outcomes of an intervention for which respondents are asked to indicate their level of agreement or disagreement with each statement. The scale typically ranges from “Strongly Disagree” to “Strongly Agree,” with several intermediate response options | 129 | 80.63% |
Non-Likert-type questionnaires | A survey or assessment tool that does not use the traditional Likert-type scale format for collecting responses. Questionnaires might include closed-ended response options, including multiple-choice or yes/no questions; visual-analogue scales; or open-ended questions about the intervention | 53 | 33.13% |
Direct observations | In vivo or video-based observations in which observers watch intervention sessions and then provide feedback on the intervention, often using Likert-type rating scales or non-Likert-type questionnaires | 41 | 25.63% |
Intervention preference or choice | Opportunities for people who are directly involved in the intervention (as recipients or interventionists) to provide feedback on which intervention they prefer or will continue to use following the study. However, the respondent does not experience the intervention after indicating their preference or choice | 17 | 10.63% |
Concurrent-chains intervention preference assessments | Opportunities for people who are directly involved in the intervention (as recipients) to choose from available interventions by selecting a discriminative stimulus associated with that intervention and then experiencing their selected intervention following their selection | 15 | 9.38% |
Interviews | A conversation facilitated by an interviewer who asks the respondent a range of questions to collect information about their opinion of, satisfaction with, or preference for the interventions’ goals, procedures, and outcomes | 5 | 3.13% |
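As an illustration of the most common method in Table 2, responses on a Likert-type social-validity scale are typically summarized as per-item and overall means. The items, respondents, and 1–5 coding below are hypothetical, not drawn from any reviewed study:

```python
# Hypothetical summary of a 5-point Likert-type social-validity scale
# (1 = Strongly Disagree ... 5 = Strongly Agree); items and scores are invented.
from statistics import mean

responses = {
    "The intervention procedures were acceptable": [5, 4, 4, 5],
    "The intervention goals were important":       [4, 4, 3, 5],
    "The intervention outcomes were meaningful":   [5, 5, 4, 4],
}

item_means = {item: mean(scores) for item, scores in responses.items()}
overall = mean(item_means.values())  # a single overall acceptability score
```

Per-item means of this kind let researchers report which dimension (goals, procedures, or outcomes) respondents rated most and least favorably, rather than collapsing everything into one number.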
Respondents
Table 3 provides a definition of each group of social-validity assessment respondents included in the current review. Respondents we