Educ. Treat. Child.

https://doi.org/10.1007/s43494-021-00059-x

REVIEW

On the Efficiency and Control of Different Functional Analysis Formats
Joshua Jessel · Gregory P. Hanley ·
Mahshid Ghaemmaghami · Matthew J. Carbone

Accepted: 31 August 2021


© Association for Behavior Analysis International 2021

Abstract  Jessel et al. (Behavior Analysis in Practice, 13(1), 205–216, 2020a) conducted a review of the functional analysis literature between the years 1965 and 2016. The authors found an increasing trend in analyses replicating the components of Iwata et al. (Journal of Applied Behavior Analysis, 27, 197–209, 1982/1994a) and determined that those components have come to represent a standard format. Since the inception of the standard, other functional analysis formats (e.g., brief, latency-based, trial-based) have been developed to improve analysis efficiency and control, with most retaining fundamental components of the standard and some omitting all (e.g., interview-informed, synthesized contingency analysis; IISCA). We conducted a review of functional analyses in this two-part evaluation to determine the levels of efficiency and control afforded by functional analysis variations and formats.

Keywords  Efficiency · Functional analysis · IISCA · Problem behavior · Procedural components

Supplementary Information  The online version contains supplementary material available at https://doi.org/10.1007/s43494-021-00059-x.

J. Jessel (*)
Queens College, 65-30 Kissena Boulevard, Queens, NY 11367, USA
e-mail: [email protected]

G. P. Hanley · M. Ghaemmaghami · M. J. Carbone
Western New England University, 1215 Wilbraham Rd, Springfield, MA 01119, USA

Functional analysis is a procedure often used during an assessment period to (1) identify the environmental variables controlling problem behavior and (2) inform the subsequent design of a function-based treatment (Hanley et al., 2003). Although the term functional analysis need only refer to the application of this general process, applied researchers may adopt standardized, empirically supported practices, creating comprehensive programs of our technology that are easy to disseminate and replicate (Austin & Carr, 2000; Azrin, 1977).

In a review of functional analyses published between 1965 and 2016, Jessel et al. (2020a) attempted to identify the development of a standardized assessment model by tracking the prevalence of the following five components of the Iwata et al. (1982/1994a) procedures: (1) multiple test conditions included in a single analysis, (2) isolated reinforcement contingencies assessed in separate test conditions, (3) procedural uniformity among analyses conducted across participants, (4) a single play (control) condition compared to all test conditions, and (5) minimal response classes targeting only dangerous problem behavior. The authors found a strong trend toward the incorporation of these components among applied researchers, reporting that 84% of analyses published between 2001 and 2010 included four or
all five of those components. These results led the researchers to identify the Iwata et al. format as the standard functional analysis.

Although applied researchers are guided by particular empirical questions, in practice, functional analyses are implemented to achieve some understanding of problem behavior in an efficient and safe manner. The duration of an analysis (i.e., efficiency) and control over problem behavior during this time may be especially important to a practitioner's decision to conduct a functional analysis because lack of time and risk of harm have often been reported to be barriers to its use (Oliver et al., 2015; Roscoe et al., 2015). Although practitioners have multiple functional analysis formats from which to choose (e.g., Hanley et al., 2014; Iwata et al., 1982/1994a; Northup et al., 1991; Sigafoos & Saggers, 1995; Thomason-Sassi et al., 2011), they might be reluctant to use assessments that require extended amounts of time and sustained periods of increased problem behavior. Practitioner concerns, combined with the ethical mandate for rigorous pretreatment assessment (Behavior Analyst Certification Board, 2014), have led some researchers to focus on practical aspects of functional analyses, such as the efficiency of the analysis (see Hanley, 2012, for review) and interpretations of control (e.g., Fisher et al., 2003; Hagopian et al., 1997; Jessel et al., 2020b, 2020c).

Efficiency may be understood as the time required to convincingly demonstrate a functional relation in an analysis (Jessel et al., 2016), such that analyses that require less time to conduct are considered more efficient (Querim et al., 2013; Saini et al., 2018, 2020). Efficiency is not a new consideration when conducting a functional analysis (see Northup et al., 1991, and Vollmer et al., 1995), but although referred to often, it is rarely measured or reported. Despite the lack of focus on measuring the efficiency of functional analyses, researchers have designed multiple methods for achieving an efficient demonstration of control.

Northup et al. (1991) described a model that was termed a brief functional analysis and, except for arranging only one or two sessions per condition, was methodologically identical to that of the standard functional analysis format. The elimination of repeated sessions was intended to reduce the total duration of the functional analysis, creating a more efficient assessment process for an outpatient setting where practitioners have limited time with clients. At the time of Northup et al.'s analysis, the standard procedures required an average of thirty 15-min sessions (450 min), whereas the brief analyses required an average of four 5-min sessions (20 min). In essence, the brief analysis yielded a 96%¹ improvement in the speed with which an analysis was completed. The brief format has been replicated multiple times (e.g., Bellone et al., 2014; Derby et al., 1994; LeGray et al., 2010) and has been applied most commonly in outpatient settings where brevity is essential because of time limitations (Derby et al., 1992).

¹ Querim et al. (2013) calculated efficiency by dividing the total duration of the briefer analysis by the total duration of the longer analysis.

Researchers have also improved the efficiency of functional analyses by reducing the duration of individual sessions while retaining within-subject replication across multiple sessions of the same conditions. For instance, Wallace and Iwata (1999) recalculated the rates of problem behavior for 46 standard functional analyses within the first 5 min, first 10 min, and the entire 15 min of each session. Agreement was observed between the 5- and 15-min sessions in over 90% of the analyses, suggesting session duration could be minimized without affecting the demonstration of functional control. This reduction in the session duration improved efficiency by 67%.

Sigafoos and Saggers (1995) described an alternative format that affected efficiency by embedding a trial-based presentation of conditions into the classroom setting. In their trial-based functional analysis, the researchers separated each trial into two possible segments: test and control. The trial began with a test segment in which the researcher arranged a putative establishing operation. If at any point problem behavior occurred, or 1 min elapsed, the control segment was introduced and included access to the reinforcement matching the preceding establishing operation (i.e., if attention was withheld in the test segment, attention was provided in the control segment). Each test condition (attention, tangible, escape) was repeatedly compared to its own matched control within 1- to 2-min trials. In particular, the researchers conducted 20 trials for each condition, which means analyses could have lasted anywhere between 60 and 120 min, depending on the number of putative reinforcers
being tested. In comparison to the standard format, this reduction in analysis duration created a range of efficiency improvement of 73% to 87%. Efficiency was achieved by reducing the duration of each individual session to the time required to evoke one instance of problem behavior and provide the putative reinforcer for the behavior. The trial-based format has since been validated in several classroom applications (e.g., Austin et al., 2015; Bloom et al., 2011, 2013).

Instead of changing the format of the analysis, Thomason-Sassi et al. (2011) described a change in the measurement of problem behavior from typical counts (converted to session rates) to latency (to the first instance of problem behavior) to improve efficiency and reduce exposure to potentially dangerous problem behavior. The latency-based format shares similarities with the trial-based format in that each session requires only one response. Thomason-Sassi et al. first retrospectively analyzed the latency to the first problem behavior in each session from 10-min sessions of procedures identical to the standard format for 38 analyses of problem behavior of individuals diagnosed with intellectual disabilities. The majority of interpretations (87%) between the latency measures and full-session rates corresponded. Because of the potential confound when sampling latency data from full analyses (i.e., the experience in the full sessions may have affected the latencies in subsequent sessions), Thomason-Sassi et al. conducted analyses with 10 additional participants to determine correspondence between latency-based procedures and subsequent full-session, rate-based procedures. Nine of the 10 latency-based functional analysis results corresponded with the standard functional analyses conducted after the latency analyses. The latency-based functional analysis required, on average, two more sessions than the standard procedures; however, each session from the latency-based analyses required considerably less time. Session time during the latency-based analyses ranged from 5 s to 5 min with an average total analysis time of 45 min across participants; this was 73% more efficient than the standard procedures applied in this same study, which required an average of 166 min across participants.

In a recent attempt at improving efficiency, Hanley et al. (2014) introduced the interview-informed, synthesized contingency analysis (IISCA), which departed procedurally from the previously mentioned efficiency-based functional analysis formats by incorporating core components distinct from those of the standard (Jessel et al., 2016). Hanley et al. (2014) retained a single test condition, even when multiple contingencies were suspected of influencing problem behavior, by synthesizing the reinforcement contingencies that were reported to co-occur based on the qualitative information obtained from an open-ended interview. The same reinforcers provided contingent on problem behavior in the test condition were then freely available in the matched control condition. This created an individualized analysis specific to each of the three children with autism who exhibited severe problem behavior. The average IISCA duration across the participants was 23 min, which is around a 95% improvement in efficiency from the standard format.

Although improving efficiency may be desirable among clinicians because of practical benefits (e.g., reduced resources, greater consumer satisfaction, improved safety), it is but one consideration when selecting a functional analysis approach. Functional analyses must also sufficiently demonstrate functional control to guide the design of an effective treatment. In addition, modifying a functional analysis to improve overall efficiency may necessarily limit data collection, which could negatively affect interpretations of functional control. Control during a functional analysis is demonstrated when the condition of contingent reinforcement repeatedly and reliably produces elevated rates of problem behavior in comparison to that of a condition without the contingent delivery of those identical reinforcers (Thompson & Iwata, 2005). Clinicians often require a sufficient demonstration of control during a functional analysis in order to inform the selection of function-based procedures of a treatment package. To assist in making judgments about functional control, a number of researchers have proposed developing standardized criteria for interpreting functional analysis data (e.g., Fisher et al., 2003; Hagopian et al., 1997; Roane et al., 2013).

For example, Hagopian et al. (1997) created a two-person panel that inspected 64 standard functional analyses and used their expert interpretations to design binary criteria for determining if control had been established. The resulting binary structured criteria involved designating two criterion lines based
on the mean rate of problem behavior during the control condition. The upper criterion line was drawn one standard deviation above the mean, whereas the lower criterion line was drawn one standard deviation below. Hagopian et al. deemed control during a functional analysis to be demonstrated in a test condition if the difference of the upper and lower criterion lines was more than four points, with at least two occurring in the second half of the assessment. The authors found the binary structured criteria increased the accurate interpretation of functional analyses with three predoctoral interns.

Jessel et al. (2020b) attempted to refine the binary structured criteria to allow for a more nuanced interpretation of control. The multilevel structured criteria involved four designated levels (no control, weak, moderate, strong). The level of control was dependent on (1) the amount of overlap between the test condition and control condition and (2) the number of sessions with problem behavior exhibited during the control. Jessel, Metras et al. evaluated 26 different functional analyses using the multilevel structured criteria and found that the majority of analyses were determined to have strong control. However, these outcomes are somewhat limited in that the authors only conducted and evaluated the IISCA. Using this metric to evaluate the relative levels of control demonstrated by other functional analysis arrangements would provide a useful addition to the literature.

We conducted this study evaluating efficiency and control afforded by the different functional analysis formats from the literature published during the past 10 years (2010–2020) in a two-part study. We analyzed and compared functional analyses of the various formats only from the last decade to reduce any disparities that may be a function of format longevity (i.e., older applications of formats may have taken longer to conduct simply because of the absence of other advances in analytic practices).²

² Efficiency has been improving over the years, with early development typically resulting in longer analyses, based on an unpublished data set (available from the first author on request).

Functional analyses were categorized in two ways: based on the number of standard components included (i.e., all five to none) and based on functional analysis formats (brief, trial-based, latency-based, IISCA). We evaluated the standard and variants of the standard functional analysis format in Study 1 and reserved any formats with fewer than two standard components (i.e., IISCA) to be evaluated in Study 2. The distinction was made to decrease comparisons between wholly different procedural arrangements.

Study 1: Standard Functional Analysis and Its Variants

The standard functional analysis is often reported to require extended amounts of time to conduct (Iwata et al., 1982/1994a, 1994b, 1994c). Therefore, researchers expressly designed variants of the standard functional analysis with the purpose of improving elements of practicality. We conducted a review of the efficiency of the functional analysis formats that contained two or more standard components to understand if these formats in fact reduced the necessary time to conduct a functional analysis in comparison to the standard. The functional analyses included for review in Study 1 had to contain at least two standard components because the IISCA format, reviewed in Study 2, could share one component with the standard. In addition, functional analyses with one or fewer standard components shared more formal similarities with the IISCA format. We included a measure of control to determine if any improvements in efficiency came at the expense of interpretations of control, thus potentially counteracting any benefits achieved.

Method

Articles included in the current review met the same criteria as Jessel et al. (2020a). We selected articles between the years 2010 and 2020 (July) and obtained most from the Jessel, Hanley et al. reference list. Articles beyond the years reviewed in Jessel, Hanley et al. were identified using identical search criteria. We included functional analysis applications if the session means were graphically represented. This excluded any summaries, reviews, or analyses whereby control, and thus efficiency, could not be measured. We included only applications with programmatic changes to the consequences for problem behavior. This excluded any functional analyses
assessing the isolated evocative control of antecedent stimuli.

In addition to the Jessel et al. (2020a) criteria, three other requirements had to be met. First, we only reviewed applications evincing a socially mediated function. All functional analyses implicating control by automatic reinforcement were excluded. Extended analysis durations (i.e., poor efficiency) are more likely to be the case in functional analyses implicating automatic functions, and some of the functional analysis formats did not include tests that allow one to make the default inference of control by automatic reinforcement. Second, we excluded all undifferentiated applications, as specified by the articles' authors, only from the analysis of efficiency (i.e., undifferentiated applications continued to be included in the analysis of control). Undifferentiated applications were excluded from the analysis of efficiency because eventually obtaining differentiated results is necessary to evaluate efficiency. Third, we considered each distinct phase an application in functional analysis comparative studies. For example, if the latency-based functional analysis was compared to the standard format in an ABAB design, the durations of the four separate applications were measured.

Analysis Categorization

Standard analysis components  We first categorized each functional analysis as including from zero to five possible components of the standard functional analysis. These components were identical to those identified by Jessel et al. (2020a). We then evaluated each functional analysis application to determine if it included (1) multiple test conditions (Component 1) or the alternative of a single test condition, (2) uniform test conditions (Component 2) or the alternative of unique test conditions, (3) isolated test contingencies (Component 3) or the alternative of synthesized contingencies, (4) an omnibus play (control) condition (Component 4) or the alternative of a test-matched control in which the only difference between the test and control condition was the presence or absence of the assessed contingency, and (5) providing putative reinforcers for only dangerous behavior (Component 5) or the alternative of providing putative reinforcers for both dangerous and nondangerous behavior, the latter of which was sometimes described as precursor behavior. These numeric categorizations were mutually exclusive. We only analyzed applications with two or more components representative of the standard functional analysis in Study 1.

Functional analysis formats  We then determined the more general format of each analysis. Each application was categorized as one of the following: standard analysis, brief analysis, trial-based analysis, latency-based analysis, IISCA, or a default category of other analysis when it could not be easily categorized as one of the five previously named formats.

For an application to be categorized as a standard analysis, it had to include the first four components described above. The application did not require the fifth component to be considered a standard analysis because Jessel et al. (2020a) discovered that many applications, regardless of the format, were unlikely to include the fifth component of measuring and providing consequences for only dangerous problem behavior. In addition, to be categorized as a standard analysis, there had to be a minimum of three sessions per condition. We included the minimum session criterion to distinguish the brief format, which was procedurally identical to the standard format but was scored when only one or two sessions were conducted per condition.

We categorized analyses as latency-based if sessions were terminated after a single instance of problem behavior (Thomason-Sassi et al., 2011). Latency-based formats graphically represented the time from the onset of the session to the first instance of problem behavior. Therefore, the latency-based format was a departure from the more traditional use of rate of overall problem behavior in a programmed session duration, and how long a participant experienced a session in a latency-based format was dependent on when problem behavior occurred.

Applications were categorized as the trial-based format if the applications relied on an aggregation of discrete trials in place of sessions (Bloom et al., 2011; Sigafoos & Saggers, 1995). In other words, each trial ended following either a single response or a brief period without a response, and each test condition had its own corresponding control condition. The control condition was sometimes the reinforcement interval before the test trial was initiated, the
reinforcement interval after the test trial was initiated, or both the reinforcement interval before and after the test trial was initiated.

We categorized an application as an IISCA if it included (1) multiple sessions of a single, interview-informed test condition, (2) a synthesized contingency when the interview suggested more than one possible reinforcement contingency, and (3) a test-matched control condition. In other words, an application was categorized as an IISCA when the first four components of the standard analysis were absent. As with all other functional analysis formats, putative reinforcers could have been provided for either only dangerous problem behavior or both dangerous and nondangerous problem behavior. Furthermore, we included applications for further review if the authors used the term IISCA to describe the procedures or cited Hanley et al. (2014) as informing the design of the functional analysis format. We extracted any applications categorized as an IISCA from the full data set and analyzed them in Study 2.

Any functional analysis application not meeting one or more of the requirements described for each format above was categorized as other. For example, Eluri et al. (2016) began conducting a functional analysis with the four components of the standard format. However, the therapist noted in observations during the toy play control and through informal reports from the participant's mother that the participant would begin to engage in problem behavior when his mands were not honored. Therefore, the researchers included an additional condition (i.e., mand compliance) and the functional analysis represented a combination of components, thereby resulting in the categorization of other.

Analysis of Efficiency

The time required to obtain a demonstration of behavioral function, as specified by the authors, was the measure of efficiency. We multiplied the session durations by the number of sessions conducted to obtain a total number of minutes for each application included in the review. This quantitative measurement of efficiency using the assessment duration was identical to that used by Saini et al. (2018). As mentioned in the exclusion criteria, functional analyses with automatic functions (even if reported to be multiply controlled) and undifferentiated outcomes were not included in the analysis of efficiency. For example, Hausman et al. (2009) conducted a standard functional analysis to assess the problem behavior of a 9-year-old girl diagnosed with intellectual and developmental disabilities. The initial results were undifferentiated, and the researchers conducted a second functional analysis modifying four of the five components of the standard (i.e., the secondary functional analysis included a single test, unique condition, synthesized contingency, and matched control). In the current review, we would only calculate a measure of efficiency for the modified functional analysis in which a functional relation was implicated.

The analysis duration experienced by the participants was not obtainable from the trial-based analyses. Authors did not provide individual trial durations in these studies because of aggregate data presentations, so the programmed analysis duration had to be calculated instead (e.g., trial duration multiplied by number of reported trials). With the trial-based format, we calculated two application durations. A minimum value was calculated by multiplying the total number of trials by how long the test trials would be if the participant responded immediately within 1 s. A maximum value was calculated by multiplying the total trials by how long the test trials would be if the participant responded at the end of the trial (or not at all). With the latency-based format, we extracted individual latencies using a computer program (WebPlotDigitizer®). The dimensions of the x and y axes of each figure were entered into the program and the value of each datum was then automatically calculated.

Analysis of Control

We used the same multilevel structured criteria developed by Jessel et al. (2020b, 2020c) to analyze degree of control. Applications were determined to have strong control if there was no overlap³ between the test and control conditions and no problem behavior during any sessions of the control condition.

³ Overlap during the trial-based format was considered if the aggregate bar of the test condition did not extend to the full 100%. This was because an individual trial in which problem behavior did not occur would have been scored as 0%, inevitably overlapping with any trials from the control condition.

Applications with moderate control had at least one session
of overlap between the test and control conditions or and secondary recorders discussed the difference until
problem behavior occurring in at least one session of a consensus was made. IOA was considered complete
the control condition. Applications with weak con- when agreement was at 100%. However, before dis-
trol met both criteria in that there was some overlap cussing the disagreements, the secondary recorder
between conditions and some problem behavior in the independently completed his review of the data and
control condition. We only evaluated test conditions IOA for all components was 97%.
representing a differentiated outcome (as reported by
the authors) as having strong, moderate, or weak con- Results
trol. Functional analyses implicating multiple func-
tions across separate test conditions were all included We included a total of 169 articles between the years
and the lowest level obtained was recorded. That is, of 2010 to 2020 in this review. From these articles,
we evaluated overall control during the functional there were 768 applications of functional analysis. We
analysis with all differentiated test conditions consid- excluded 69 applications from further review because
ered together. automatic reinforcement was identified as influenc-
Applications with no control had undifferentiated ing problem behavior. Another 43 applications were
analyses with extensive overlap. Applications for excluded because no graph of the functional analy-
which control could not be evaluated due to an insuf- sis data was present. We removed an additional 94
ficient number of data points (i.e., at least two points applications from the analysis of efficiency that were
per condition) were coded in the no-control category. undifferentiated or did not have session durations
We required a minimum of two data points to meet reported.4 The remaining 16 analyses that did not
a categorization of control because within-subject meet the inclusion criteria (e.g., no control condition,
replication cannot be demonstrated without repeated review, group data) were removed, leaving a total
measures (Johnston & Pennypacker, 2009). This often of 546 applications evaluated in the analysis of effi-
occurred when reviewing brief functional analyses because a limited number of sessions is a defining feature of this format. However, levels of control could still have been determined with a limited number of sessions had the authors included a within-session analysis allowing for visual analysis of multiple data points within each session.

Intercoder Agreement

The first author coded all included articles and research assistants independently coded 30% of those articles. We then calculated point-by-point agreement for the components and functional analysis formats from the application categorization, the obtained duration of each functional analysis from the analysis of efficiency, and the identified level from the analysis of control. We defined an agreement as both the primary and secondary coders recording identical values for each application. For example, the researcher scored an agreement if both coders categorized an application as having multiple test conditions (i.e., Component 1). Based on the analysis of efficiency, we defined a disagreement as the duration of the application calculated by one coder differing by more than 1 min. If a disagreement occurred, the primary […]

[…] evaluated in the analysis of efficiency and 640 applications evaluated in the analysis of control. In Study 1, we extracted 189 applications with zero or one standard analysis component and 145 applications identified as IISCAs to be evaluated further in Study 2. This left 451 applications with two to five standard components and 495 applications of the standard format and its variants included in the analyses of Study 1 (numbers differ because not all applications extracted with zero or one standard analysis component were identified as the IISCA format).

The top panel of Fig. 1 represents the average application durations based on the number of standard components used. Overall, more components tended to result in longer analysis durations. The applications with the longest durations had five components (M = 168.35 min; SD = 132.78) and four components (M = 165.95 min; SD = 123.33). This was followed by applications with three (M = 116.5 min; SD = 83.99) and two components (M = 83.16 min; SD = 67.52). The bottom panel of Fig. 1 depicts the results of the multilevel structured criteria of control. Applications

4 The number of applications will vary between analyses of efficiency and control because of these additional exclusion criteria.
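The agreement calculation described in the Intercoder Agreement section reduces to a few lines of code. The sketch below is illustrative only: the record fields, function names, and sample values are hypothetical, and only the two rules stated in the text (identical categorical codes, and independently coded durations within 1 min of one another) are drawn from the article.

```python
# Hypothetical sketch of the point-by-point agreement rules described above:
# categorical codes must match exactly, and independently coded durations
# count as agreeing when they differ by no more than 1 min.

def codes_agree(primary: dict, secondary: dict) -> bool:
    """Return True when two coders' records of one application agree."""
    same_format = primary["format"] == secondary["format"]
    same_components = primary["components"] == secondary["components"]
    durations_close = abs(primary["duration_min"] - secondary["duration_min"]) <= 1.0
    return same_format and same_components and durations_close

def percent_agreement(primary_records, secondary_records) -> float:
    """Point-by-point agreement across all double-coded applications."""
    pairs = list(zip(primary_records, secondary_records))
    agreements = sum(codes_agree(p, s) for p, s in pairs)
    return 100.0 * agreements / len(pairs)

# Two double-coded applications: one full agreement, one duration disagreement.
primary = [
    {"format": "brief", "components": 5, "duration_min": 30.0},
    {"format": "standard", "components": 4, "duration_min": 120.0},
]
secondary = [
    {"format": "brief", "components": 5, "duration_min": 30.5},
    {"format": "standard", "components": 4, "duration_min": 125.0},
]
print(percent_agreement(primary, secondary))  # 50.0
```

In the first pair the durations differ by 0.5 min (an agreement); in the second they differ by 5 min (a disagreement), so agreement is 50%.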
Educ. Treat. Child.

[Fig. 1: Functional Analyses Categorized by Number of Standardized Components (2 through 5). Note. The top panel depicts the duration of applications (x-axis: Analysis Duration, min; n = 19, 55, 185, and 123 for two through five components) and the bottom panel depicts the level of control (x-axis: Percentage of Analyses with Strong, Moderate, Weak, or No Control; n = 23, 70, 205, and 153). Y-axis: Standard Functional Analysis Components (2010–2020). Fewer applications were included in the analysis of efficiency after removing undifferentiated outcomes. Applications with one or fewer components were extracted and analyzed in Study 2 because they were unlikely to be representative of the standard functional analysis format.]

with all five components were almost equally likely to have strong (31%) and moderate (34%) control. The remaining applications were interpreted as having weak (19%) or no control (16%). Many applications with four standardized components had moderate control (37%), followed by applications with strong (27%), weak (26%), and no control (10%). A near majority of applications with three components had weak control (46%). The remaining applications had moderate (30%), strong (17%), and no control (7%). Applications with two components were mostly split between moderate control (30.5%) and no control (30.5%), followed by strong control (26%) and weak control (13%).

Figure 2 (top panel) depicts the average application duration of each functional analysis format. The standard format required the most time to conduct (M = 185.02 min; SD = 126.78), followed by the default category of other formats (M = 102.12 min; SD = 78.23). The trial-based format was the least efficient of the efficiency-based formats, requiring the most time whether the minimum (M = 92.35 min; SD = 82) or maximum (M = 157.71 min; SD = 118.41) duration was calculated. Nevertheless, the minimum and maximum durations were 49% and 16% more efficient than the standard analyses, respectively. The latency-based format reduced the mean duration of conducting the assessment by 64% (M = 68.32 min; SD = 41.49). The brief format increased efficiency by 73%, with a mean analysis duration of 51.25 min (SD = 22.86).

The bottom panel of Fig. 2 depicts the results of the analysis of control categorized by functional analysis format. The standard had many applications with moderate (35%), strong (31%), and weak (24%) control; relatively few applications had no

[Fig. 2: Representation of the Standard Functional Analysis and its Variants. Note. The top panel depicts the duration of the applications (x-axis: Analysis Duration, min; Standard n = 236, Other n = 71, Trial-Based [Min/Max] n = 49, Latency-Based n = 31, Brief n = 32) and the bottom panel depicts the level of control (x-axis: Percentage of Analyses with Strong, Moderate, Weak, or No Control; Standard n = 286, Other n = 92, Trial-Based n = 52, Latency-Based n = 32, Brief n = 33). Y-axis: Functional Analysis Formats (2010–2020).]

control (10%). The default category of other analyses had mostly moderate control (46%), followed by applications with strong (26%), weak (15%), and no (13%) control. The majority of trial-based applications had weak control (56%), and many had moderate control (40%); relatively few trial-based applications had strong control (4%). The latency-based format had mostly weak (41%) analyses, followed by moderate (31%) and strong (25%) analyses; only relatively few of the latency-based applications had no control (3%). Many of the brief analyses did not have control (70%). The remaining few had strong (12%), moderate (12%), and weak (6%) control.

Discussion

We found that including more components of the standard functional analysis made the process longer and did not necessarily improve control. When considering specific functional analysis formats, the standard functional analysis served as a benchmark comparison for all the efficiency-based iterative formats. In other words, the most pragmatic format will require less than the 185-min mean duration of the standard format and, at the least, maintain the same moderate level of control. Functional analysis formats that are more efficient but have less control than the standard may allow the clinician to advance to treatment more quickly; however, the efficacy of treatments informed by efficiency-based functional analyses may be of concern. On the other hand, functional analysis formats that are less efficient but have more control than the standard are likely to be highly specialized practices that would be difficult to implement in clinical settings.

The brief functional analysis created a far more efficient format; however, it came at the expense of experimental control. Similar findings have been reported in the past regarding the difficulty in interpreting the results of the brief format (Kahng & Iwata, 1999; Vollmer et al., 1995). For example, Kahng and Iwata reanalyzed the correspondence between the first session of each condition from a standard

functional analysis and compared the interpretations to that of the full data set in the multielement design. The authors found that, based on the single point, there was a 23% chance of an evaluator not being able to identify a function when one was implicated and a 60% chance of incorrectly identifying a function when none was implicated. This lack of correspondence is important to consider because, other than the difference in the number of sessions conducted, the brief and standard formats are methodologically identical and are designed to produce the same outcomes. Due to these limitations with control, the brief format may not be the most beneficial format to recommend. However, using brief session durations and reducing analytic clutter, as first posited in the original publication of the brief format (Northup et al., 1991), remain important considerations for clinicians. Furthermore, it may be possible to maintain a similarly brief format while avoiding procedural components that contribute to the lack of control (Jessel et al., 2020c).

The trial-based format moderately improved efficiency but also had a negative impact on control, with most analyses identified as having weak control. The traditional aggregate display of the results of a trial-based functional analysis makes for a somewhat difficult calculation of efficiency and visual analysis of control. The degraded outcomes may have been affected by the more stringent evaluation in that a single instance of problem behavior that occurred during a control trial would create overlap with a test trial. It is interesting to note that the trial-based functional analysis was not originally developed to be an efficient alternative to the standard. Instead, Sigafoos and Saggers (1995) developed the trial-based format to be seamlessly incorporated into the classroom setting. The trials were conducted across a 5-day span and implemented naturally with the flow of the classroom routine. This may suggest that teachers are less concerned with efficiency and more concerned with an assessment disrupting the everyday scheduled events. However, having a practical and efficient functional analysis format would presumably be preferred.

Likewise, the latency-based format improved efficiency but reduced interpretations of control from the moderate level achieved using the standard format to weak control. Although the latency-based format lends itself to use with discrete responses (e.g., dropping, eloping, disrobing) and dangerous behavior where multiple exposures to the response could cause irreparable damage (Thomason-Sassi et al., 2011), its utility as a functional analysis format is slightly called into question considering the degradation in control. In fact, all three of the efficiency-based formats (brief, trial-based, latency-based) had an inverse effect on control in order to achieve efficiency. Based on these outcomes alone, it is difficult to recommend the use of these efficiency-based formats, considering that they may not be as informative of functional relations in comparison to the standard format.

Study 2: The Interview-Informed, Synthesized Contingency Analysis (IISCA)

Although functional analysis formats are conducted with the general purpose of aiding in the selection of effective treatment strategies for reducing problem behavior, clinicians and applied researchers may choose to include or exclude certain standard components to meet more specific goals. For example, if the goal is to identify the single effect of combined variables, the clinician may decide to conduct a functional analysis with a test condition including a synthesized contingency. Although these differences pinpoint potentially divergent preparations to support different goals, the practical utility and necessity of having the results of functional analyses correspond when entirely distinct core components are used has yet to be fully understood. That is, if the results are not comparable, neither is the process. This is especially important considering that functional analysis formats with isolated or synthesized reinforcement contingencies are purposefully evaluating disparate effects (i.e., main or interactive effects) and, therefore, the outcomes are expected to vary. We conducted Study 2 to determine the efficiency and control afforded by the IISCA based on all articles published from its introduction in 2014 through July 2020.

Method

We excluded any applications with two or more standard components because the procedures of the IISCA are reported to share only one component with the standard (Jessel et al., 2020a). That is, like the standard functional analysis, some IISCAs used a closed-contingency class whereby the researchers only reinforced dangerous problem behavior. This

does not ensure that all functional analyses sharing fewer than two components with the standard will be identified as an IISCA, for two reasons. First, the like component can vary and it is not necessarily the case that the one standard component shared will be a closed-contingency class. Although this suggests that some functional analyses with fewer than two standard components could include isolated test contingencies, this only occurred in five applications (e.g., Santiago et al., 2016). Second, the authors may identify the idiosyncratic procedures as a different format from that of the IISCA. For example, Leon et al. (2013) designed a functional analysis to evaluate the occurrence of dangerous problem behavior when ritualistic behavior was blocked or prevented. This resulted in the functional analysis sharing only a single component with the standard (i.e., closed-contingency class); however, the authors identified the format as a “blocking assessment.” Therefore, we categorized this format as other and not an IISCA.

We included all applications designated as an IISCA in the analysis of control. However, much like Study 1, any undifferentiated analyses or applications where an analysis duration could not be calculated (i.e., session duration not reported) were excluded from the analysis of efficiency.

Analyses of Efficiency and Control

Analyses of efficiency and control were identical to the procedures described in Study 1. We calculated the efficiency of each application of the IISCA by multiplying the session duration by the number of sessions. Analytic control for each application was determined using the multilevel structured criteria. The applications could be identified as having strong (no overlap and no problem behavior in the control condition), moderate (some overlap or some problem behavior in the control condition), weak (some overlap and some problem behavior in the control condition), or no control (extensive overlap).

Intercoder Agreement

The first and fourth authors independently coded all articles. An agreement was first calculated with a binary measurement of the coders both agreeing that the application was designated an IISCA or not (i.e., some other functional analysis format). We calculated intercoder agreement for the analyses of efficiency and control as described in Study 1. Intercoder agreement for all measures was 100%.

Results and Discussion

The results of the analysis of efficiency for applications including fewer than two components of the standard functional analysis are represented in Fig. 3 (top panel). Applications with one (M = 49.95 min; SD = 46.31) or zero components (M = 36.39 min; SD = 22.43) both tended to be relatively brief. The results of the analysis of control are presented in the bottom panel of Fig. 3. The results of the applications with one component were split between moderate (39%) and strong (38%) control. The remaining applications had weak (11.5%) or no control (11.5%). The functional analyses with no standardized components had the most applications with strong control (70%), followed by moderate (23%), weak (4%), and no control (3%).

Figure 4 depicts the results of the analyses of efficiency (top panel) and control (bottom panel) for the IISCA format. Eighteen of the 145 applications were removed from the analysis of efficiency because session duration was not reported or the results were undifferentiated. The IISCA required a mean of 30.95 min to conduct and maintained a mode of strong control. In particular, the majority of the IISCA applications had strong control (67%), followed by moderate control (26%). The remaining few had weak control (4%) and no control (3%). Results reported by individual applications can be found in the Supplemental Material.

The efficiency of the IISCA format can be attributed, either directly or indirectly, to the distinct components that were included. For example, the number of total sessions required to identify a functional relation is minimized with the inclusion of a single test condition. Reducing the number of test conditions in a single analysis is also likely to improve discrimination (Iwata et al., 1994b). Other distinct components may have more indirect impact on efficiency, such as synthesizing all putative establishing operations and reinforcers (Slaton & Hanley, 2018). The additive effects of combined establishing operations and reinforcers may more
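The two calculations described in the Analyses of Efficiency and Control subsection reduce to a few lines of code. The sketch below is hypothetical in its names and inputs; only the duration formula (session duration multiplied by number of sessions) and the four-level decision rule come from the text, and because the structured criteria are not reproduced in full here, how "some" versus "extensive" overlap is quantified is passed in as an already-judged category.

```python
# Hypothetical sketch of the efficiency metric and the multilevel
# structured criteria of control described above.

def analysis_duration(session_duration_min: float, n_sessions: int) -> float:
    """Efficiency metric: total session time devoted to the analysis."""
    return session_duration_min * n_sessions

def control_level(overlap: str, problem_behavior_in_control: bool) -> str:
    """Classify analytic control from the degree of overlap between test and
    control data paths ('none', 'some', or 'extensive') and whether problem
    behavior occurred in the control condition."""
    if overlap == "extensive":
        return "no control"     # extensive overlap
    if overlap == "some" and problem_behavior_in_control:
        return "weak"           # some overlap and some control-condition responding
    if overlap == "some" or problem_behavior_in_control:
        return "moderate"       # some overlap or some control-condition responding
    return "strong"             # no overlap and no control-condition responding

print(analysis_duration(5, 10))      # 50 (e.g., ten 5-min sessions)
print(control_level("none", False))  # strong
print(control_level("some", True))   # weak
```

Under this sketch, a differentiated analysis of ten 5-min sessions would contribute 50 min to the efficiency analysis, and its level of control would fall out of the two overlap judgments.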

[Fig. 3: Functional Analyses Categorized by Number of Standardized Components (0 and 1). Top panel: Analysis Duration (min; one component n = 76, zero components n = 88). Bottom panel: Percentage of Analyses with Strong, Moderate, Weak, or No Control (one component n = 96, zero components n = 93). Y-axis: Standard Functional Analysis Components (2010–2020).]

[Fig. 4: Results of the Analyses of Efficiency and Control for the IISCA Format (2010–2020). Top panel: Efficiency (Analysis Duration, min; n = 127). Bottom panel: Control (Percentage of Analyses with Strong, Moderate, Weak, or No Control; n = 145).]

readily evoke problem behavior and maintain elevated response rates, respectively (Ghaemmaghami et al., 2016). Likewise, the design of a unique contingency based on caregiver reports from an open-ended interview could improve differentiated responding during the IISCA due to the inclusion

of historically relevant discriminative stimuli. Thus, reducing analytic clutter can have multiple practical benefits, and the IISCA is designed to combine those components supporting an efficient display of functional control to capitalize on immediate differentiation instead of reserving those procedures for secondary or tertiary modifications.

General Discussion

The goal of the present analyses was to evaluate the efficiency and control demonstrated by standard and synthesized functional analysis preparations. The results of Study 1 suggest that the standard functional analysis maintained the highest levels of interpretive control when compared to other efficiency-based formats with mutual core components; however, it also appeared to be the least efficient choice, consistently requiring more time to conduct. These outcomes suggest a clear trade-off, whereby the use of brief, trial-based, or latency-based functional analysis formats will likely be faster than standard analyses but may not provide the same confidence in demonstrating functional control. It is interesting that each efficiency-based format accounted for less than 10% of the applications. It is possible that the less frequent representation of these formats reflects recognition of these limitations in interpretations of control. Then again, researchers may have more time and resources to commit to conducting more extended analyses than do practitioners (e.g., clinicians, teachers). Therefore, it is difficult to draw firm conclusions about why efficiency-based variants of the standard functional analysis are relatively sparse in the literature.

Considering the contrasting procedures of the IISCA and standard functional analysis, we conducted the evaluations of efficiency and control separately. The results of Study 2 showed that the IISCA required only a brief amount of time to conduct but maintained strong levels of interpretative control. In fact, the IISCA was completed within 15 min on multiple occasions, and there was no indication that this brevity affected level of control. It seems that efficiency may not necessarily have to come at the expense of interpretations of control, and clinicians may find the IISCA to be a viable, practical option for informing treatment procedures.

Although analysis duration allowed for an easily quantifiable definition of efficiency, the scope of the measure is somewhat limited. That is, due to the usual availability of session duration and number of sessions conducted, we measured efficiency in the current analysis as the amount of session time dedicated to a demonstration of function; however, we could understand the relative efficiency of different functional analysis formats better if more information (e.g., number of clinical visits and dates) were included in published functional analysis studies. It is perhaps for this reason of convenience that previous measures of efficiency have been limited to the analysis duration (Querim et al., 2013; Saini et al., 2018).

It is important to note that any advances in an applied science will naturally be guided towards improving the ease with which the basic principles can be translated into usable, acceptable, and preferable technologies. With respect to functional analyses, procedural modifications should only be accepted by clinicians if the new practical procedures (1) consistently identify functional relations and (2) inform effective, function-based treatment. The functional analysis is only as good as the treatment it informs, which has been commonly identified as treatment validity (Hayes et al., 1987; Kratochwill & Shapiro, 2000). That is to say, the functional analysis is only a necessary clinical tool if it informs treatments that are more effective than those that would have been selected had the functional analysis not been used.

Comparisons of treatment validity between the standard format and IISCA-based interventions show that IISCA-based interventions result in similar (Holehan et al., 2020) or more pronounced (Slaton et al., 2017) reductions in problem behavior. For example, Slaton et al. conducted both the IISCA and standard functional analysis with nine participants with autism who exhibited problem behavior. The IISCA produced strong levels of interpretative control with all participants and only required a mean of 27.5 min to conduct. On the other hand, only four of the standard functional analyses were initially differentiated, and those four analyses required a mean of 89 min to conduct to obtain a mix of control ranging from weak to strong. The authors then conducted two “function-based” treatments (one informed by the results of the IISCA and one informed by the results of the standard) with four participants to validate the outcomes. The treatment informed by the results of

the IISCA was effective in all cases and more effective than the treatment informed by the results of the standard functional analysis in two of those four cases.

It is important to acknowledge that interpretations of relative treatment validity between functional analysis formats are based on limited research, which has only compared variations of functional communication training (FCT). In addition, most studies have excluded broader measures of behavioral improvement, including social validity, extensions to relevant individuals and contexts, and long-term follow-up (cf. Ghaemmaghami et al., 2021). Researchers may want to direct greater focus to the relation between functional analysis and treatment validity, because an evaluation of a functional analysis format without associated treatment outcomes is somewhat incomplete. Heyvaert et al.’s (2014) meta-analysis on characteristics of behavioral interventions for problem behavior may be instructive in these efforts. The authors found that behavioral treatments from the 358 applications evaluated were effective in reducing problem behavior regardless of the topography of problem behavior targeted. It should be noted, however, that interventions preceded by a functional analysis significantly improved treatment outcome. A similar approach for evaluating the treatment validity of the functional analysis can be adapted further by disaggregating data and analyzing treatment outcomes across functional analysis components, formats, and level of control. That is to say, data falling under the umbrella term of “functional analysis” could be segmented into subcategories of various moderators and evaluated separately. Such an analysis could help clinicians by (1) identifying which core components are associated with the best treatment outcomes, (2) creating a hierarchy of treatment validity among functional analysis formats, and (3) determining if strong levels of control are indicative of better treatment outcomes.

Future researchers may want to create a measure of efficiency incorporating economic benefits, because the two are likely to be related. Analyses requiring fewer people, fewer dedicated hours, accessible materials, or less costly support to manage problem behavior could be considered more efficient. Hanley et al. (2014) as well as Santiago et al. (2016) included the number of visits, calendar days, and personnel costs required during the entire assessment and treatment process implemented for five children with autism and their families, along with the usual metrics of number and durations of sessions. Future researchers should report similar and additional measures so that an improved understanding of the relative efficiency of various functional assessment and treatment procedures can be developed.

Data Availability Associated data supporting our findings are in a data repository available upon request from the first author.

Declarations

Ethical Approval This article does not contain any studies with human participants performed by any of the authors.

Informed Consent For this type of study formal consent is not required.

Conflict of Interest Joshua Jessel declares a part-time consulting position at FTF Behavioral Consulting. Gregory P. Hanley declares that he is the owner and founder of FTF Behavioral Consulting. Mahshid Ghaemmaghami declares that she is a lead consultant and clinical director at FTF Behavioral Consulting. Matthew J. Carbone declares a part-time consulting position at FTF Behavioral Consulting.

References

Austin, J., & Carr, J. E. (2000). Handbook of applied behavior analysis. Context Press.

Austin, J. L., Groves, E. A., Reynish, L. C., & Francis, L. L. (2015). Validating trial-based functional analyses in mainstream primary school classrooms. Journal of Applied Behavior Analysis, 48(2), 274–288. https://doi.org/10.1002/jaba.208

Azrin, N. H. (1977). A strategy for applied research: Learning based but outcome oriented. American Psychologist, 32(2), 140–149. https://doi.org/10.1037/0003-066X.32.2.140

Behavior Analyst Certification Board. (2014). Professional and ethical compliance code for behavior analysts.

Bellone, K. M., Dufrene, B. A., Tingstrom, D. H., Olmi, D. J., & Barry, C. (2014). Relative efficacy of behavioral interventions in preschool children attending Head Start. Journal of Behavioral Education, 23(3), 378–400. https://doi.org/10.1007/s10864-014-9196-6

Bloom, S. E., Iwata, B. A., Fritz, J. N., Roscoe, E. M., & Carreau, A. B. (2011). Classroom application of a trial-based functional analysis. Journal of Applied Behavior Analysis, 44(1), 19–31. https://doi.org/10.1901/jaba.2011.44-19

Bloom, S. E., Lambert, J. M., Dayton, E., & Samaha, A. L. (2013). Teacher-conducted trial-based functional analyses as the basis for intervention. Journal of Applied Behavior Analysis, 46(1), 208–218. https://doi.org/10.1002/jaba.21

Derby, K. M., Wacker, D. P., Sasso, G., Steege, M., Northup, J., Cigrand, K., & Asmus, J. (1992). Brief functional assessment techniques to evaluate aberrant behavior in an outpatient setting: A summary of 79 cases. Journal of Applied Behavior Analysis, 25(3), 713–721. https://doi.org/10.1901/jaba.1992.25-713

Derby, K. M., Wacker, D. P., Peck, S., Sasso, G., DeRaad, A., Berg, W., Asmus, J., & Ulrich, S. (1994). Functional analysis of separate topographies of aberrant behavior. Journal of Applied Behavior Analysis, 27(2), 267–278. https://doi.org/10.1901/jaba.1994.27-267

Eluri, Z., Andrade, I., Trevino, N., & Mahmoud, E. (2016). Assessment and treatment of problem behavior maintained by mand compliance. Journal of Applied Behavior Analysis, 49(2), 383–387. https://doi.org/10.1002/jaba.296

Fisher, W. W., Kelley, M. E., & Lomas, J. E. (2003). Visual aids and structured criteria for improving visual inspection and interpretation of single-case designs. Journal of Applied Behavior Analysis, 36(3), 387–406. https://doi.org/10.1901/jaba.2003.36-387

Ghaemmaghami, M., Hanley, G. P., Jin, S. C., & Vanselow, N. R. (2016). Affirming control by multiple reinforcers via progressive treatment analysis. Behavioral Interventions, 31(1), 70–86. https://doi.org/10.1002/bin.1425

Ghaemmaghami, M., Hanley, G. P., & Jessel, J. (2021). Functional communication training: From efficacy to effectiveness. Journal of Applied Behavior Analysis, 54(1), 122–143. https://doi.org/10.1002/jaba.762

Hagopian, L. P., Fisher, W. W., Thompson, R. H., Owen-DeSchryver, J., Iwata, B. A., & Wacker, D. P. (1997). Toward the development of structured criteria for interpretation of functional analysis data. Journal of Applied Behavior Analysis, 30(2), 313–326. https://doi.org/10.1901/jaba.1997.30-313

Hanley, G. P. (2012). Functional assessment of problem behavior: Dispelling myths, overcoming implementation obstacles, and developing new lore. Behavior Analysis in Practice, 5(1), 54–72. https://doi.org/10.1007/BF03391818

Hanley, G. P., Iwata, B. A., & McCord, B. E. (2003). Functional analysis of problem behavior: A review. Journal of Applied Behavior Analysis, 36(2), 147–185. https://doi.org/10.1901/jaba.2003.36-147

Hanley, G. P., Jin, C. S., Vanselow, N. R., & Hanratty, L. A. (2014). Producing meaningful improvements in problem behavior of children with autism via synthesized analyses and treatments. Journal of Applied Behavior Analysis, 47(1), 16–36. https://doi.org/10.1002/jaba.106

Hausman, N., Kahng, S. W., Farrell, E., & Mongeon, C. (2009). Idiosyncratic functions: Severe problem behavior maintained by access to ritualistic behaviors. Education & Treatment of Children, 32(1), 77–87.

Hayes, S. C., Nelson, R. O., & Jarrett, R. B. (1987). The treatment utility of assessment: A functional approach to evaluating assessment quality. The American Psychologist, 42(11), 963–974. https://doi.org/10.1037/0003-066X.42.11.963

Heyvaert, M., Saenen, L., Campbell, J. M., Maes, B., & Onghena, P. (2014). Efficacy of behavioral interventions for reducing problem behavior in persons with autism: An updated quantitative synthesis of single-subject research. Research in Developmental Disabilities, 35(10), 2463–2476. https://doi.org/10.1016/j.ridd.2014.06.017

Holehan, K. M., Dozier, C. L., Diaz de Villegas, S. C., Jess, R. L., Goddard, K. S., & Foley, E. A. (2020). A comparison of isolated and synthesized contingencies in functional analyses. Journal of Applied Behavior Analysis, 53(3), 1559–1578. https://doi.org/10.1002/jaba.700

Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1994a). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27, 197–209. (Reprinted from Analysis & Intervention in Developmental Disabilities, 2, 3–20, 1982.) https://doi.org/10.1901/jaba.1994.27-197

Iwata, B. A., Duncan, B. A., Zarcone, J. R., Lerman, D. C., & Shore, B. A. (1994b). A sequential, test-control methodology for conducting functional analyses of self-injurious behavior. Behavior Modification, 18(3), 289–306. https://doi.org/10.1177/01454455940183003

Iwata, B. A., Pace, G. M., Dorsey, M. F., Zarcone, J. R., Vollmer, T. R., Smith, R. G., Rodgers, T. A., Lerman, D. C., Shore, B. A., Mazaleski, J. L., Goh, H.-L., Cowdery, G. E., Kalsher, M. J., McCosh, K. C., & Willis, K. D. (1994c). The functions of self-injurious behavior: An experimental-epidemiological analysis. Journal of Applied Behavior Analysis, 27(2), 215–240. https://doi.org/10.1901/jaba.1994.27-215

Jessel, J., Hanley, G. P., & Ghaemmaghami, M. (2016). Interview-informed synthesized contingency analyses: Thirty replications and reanalysis. Journal of Applied Behavior Analysis, 49(4), 576–595. https://doi.org/10.1002/jaba.316

Jessel, J., Hanley, G. P., & Ghaemmaghami, M. (2020a). On the standardization of the functional analysis. Behavior Analysis & Practice, 13(1), 205–216. https://doi.org/10.1007/s40617-019-00366-1

Jessel, J., Metras, R., Hanley, G. P., Jessel, C., & Ingvarsson, E. T. (2020b). Does analysis brevity result in loss of control? A consecutive case series of 26 single-session interview-informed synthesized contingency analyses. Behavioral Interventions, 35(1), 145–155. https://doi.org/10.1002/bin.1695

Jessel, J., Metras, R., Hanley, G. P., Jessel, C., & Ingvarsson, E. T. (2020c). Evaluating the boundaries of analytic efficiency and control: A consecutive controlled case series of 26 functional analyses. Journal of Applied Behavior Analysis, 53(1), 25–43. https://doi.org/10.1002/jaba.544

Johnston, J. M., & Pennypacker, H. S. (2009). Strategies and tactics of behavioral research (3rd ed.). Routledge.

Kahng, S. W., & Iwata, B. A. (1999). Correspondence between outcomes of brief and extended functional analyses. Journal of Applied Behavior Analysis, 32(2), 149–159. https://doi.org/10.1901/jaba.1999.32-149

Kratochwill, T. R., & Shapiro, E. S. (2000). Conceptual foundations of behavioral assessment in schools. In E. S. Shapiro & T. R. Kratochwill (Eds.), Behavioral assessment in schools (pp. 3–15). Guilford Press.

LeGray, M. W., Dufrene, B. A., Sterling-Turner, H., Olmi, D. J., & Bellone, K. (2010). A comparison of function-based differential reinforcement interventions for children engaging in disruptive classroom behavior. Journal of Behavioral Education, 19(3), 185–204. https://doi.org/10.1007/s10864-010-9109-2
Leon, Y., Lazarchick, W. N., Rooker, G. W., & DeLeon, I. G. (2013). Assessment of problem behavior evoked by disruption of ritualistic toy arrangements in a child with autism. Journal of Applied Behavior Analysis, 46(2), 507–511. https://doi.org/10.1002/jaba.41
Northup, J., Wacker, D., Sasso, G., Steege, M., Cigrand, K., Cook, J., & DeRaad, A. (1991). A brief functional analysis of aggressive and alternative behavior in an outclinic setting. Journal of Applied Behavior Analysis, 24(3), 509–522. https://doi.org/10.1901/jaba.1991.24-509
Oliver, A. C., Pratt, L. A., & Normand, M. P. (2015). A survey of functional behavior assessment methods used by behavior analysts in practice. Journal of Applied Behavior Analysis, 48(4), 817–829. https://doi.org/10.1002/jaba.256
Querim, A. C., Iwata, B. A., Roscoe, E. M., Schlichenmeyer, K. J., Ortega, J. V., & Hurl, K. E. (2013). Functional analysis screening for problem behavior maintained by automatic reinforcement. Journal of Applied Behavior Analysis, 46(1), 47–60. https://doi.org/10.1002/jaba.26
Roane, H. S., Fisher, W. W., Kelley, M. E., Mevers, J. L., & Bouxsein, K. J. (2013). Using modified visual-inspection criteria to interpret functional analysis outcomes. Journal of Applied Behavior Analysis, 46(1), 130–146. https://doi.org/10.1002/jaba.13
Roscoe, E. M., Phillips, K. M., Kelly, M. A., Farber, R., & Dube, W. V. (2015). A statewide survey assessing practitioners' use and perceived utility of functional assessment. Journal of Applied Behavior Analysis, 48(4), 830–844. https://doi.org/10.1002/jaba.259
Saini, V., Fisher, W. W., & Retzlaff, B. J. (2018). Predictive validity and efficiency of ongoing visual-inspection criteria for interpreting functional analyses. Journal of Applied Behavior Analysis, 51(2), 303–320. https://doi.org/10.1002/jaba.450
Saini, V., Fisher, W. W., Retzlaff, B. J., & Keevy, M. (2020). Efficiency in functional analysis of problem behavior: A quantitative and qualitative review. Journal of Applied Behavior Analysis, 53(1), 44–66. https://doi.org/10.1002/jaba.583
Santiago, J. L., Hanley, G. P., Moore, K., & Jin, C. S. (2016). The generality of interview-informed functional analyses: Systematic replications in school and home. Journal of Autism & Developmental Disorders, 46(3), 797–811. https://doi.org/10.1007/s10803-015-2617-0
Sigafoos, J., & Saggers, E. (1995). A discrete-trial approach to the functional analysis of aggressive behaviour in two boys with autism. Australia & New Zealand Journal of Developmental Disabilities, 20(4), 287–297. https://doi.org/10.1080/07263869500035621
Slaton, J. D., & Hanley, G. P. (2018). Nature and scope of synthesis in functional analysis and treatment of problem behavior. Journal of Applied Behavior Analysis, 51(4), 943–973. https://doi.org/10.1002/jaba.498
Slaton, J. D., Hanley, G. P., & Raftery, K. J. (2017). Interview-informed functional analyses: A comparison of synthesized and isolated components. Journal of Applied Behavior Analysis, 50(2), 252–277. https://doi.org/10.1002/jaba.384
Thomason-Sassi, J. L., Iwata, B. A., Neidert, P. L., & Roscoe, E. M. (2011). Response latency as an index of response strength during functional analyses of problem behavior. Journal of Applied Behavior Analysis, 44(1), 51–67. https://doi.org/10.1901/jaba.2011.44-51
Thompson, R. H., & Iwata, B. A. (2005). A review of reinforcement control procedures. Journal of Applied Behavior Analysis, 38(2), 257–278. https://doi.org/10.1901/jaba.2005.176-03
Vollmer, T. R., Marcus, B. A., Ringdahl, J. E., & Roane, H. S. (1995). Progressing from brief assessments to extended experimental analyses in the evaluation of aberrant behavior. Journal of Applied Behavior Analysis, 28(4), 561–576. https://doi.org/10.1901/jaba.1995.28-561
Wallace, M. D., & Iwata, B. A. (1999). Effects of session duration on functional analysis outcomes. Journal of Applied Behavior Analysis, 32(2), 175–183. https://doi.org/10.1901/jaba.1999.32-175
