ABA Visual Packet v6.1 (RBT)
Cooper, John O., Timothy Heron, William Heward. Applied Behavior Analysis, 2/e. Pearson
Learning Solutions, 07/2012. VitalBook file.
This file was designed for the purpose of being a study aid for the Registered Behavior
Technician (RBT) exam (based on the 2nd edition task list) and should not be used as a sole
resource for studying.
This file and its content are property of William Slusser MS, BCBA, COBA. This file and its
content were initially distributed free of charge, but please ask for permission before further
reproduction or distribution ([email protected]). THANKS!
7. Avoid overthinking
Questions are written strategically, literally, and objectively
During conditioning = The food (US) and the tone (NS) are presented at the same time, or nearly the same time, and
salivation occurs (UR).
US + NS -> UR (salivation)
After conditioning = Once the food (US) and the tone (NS) are paired/conditioned, remove the food (US) and present
only the tone (previously a NS) and salivation occurs. The tone (now a CS) now elicits the same behavior as the food (US),
thus the behavior when the tone alone is presented is a CR (it has been taught and is under the control of the CS).
CS -> CR (salivation)
You see a red traffic light (A) -> You stop your car (B) -> You don't get in an accident (C) -> You continue to stop at red
traffic lights.
Receiving reinforcement (or punishment) is contingent on performing a specific behavior when the antecedent stimulus
is present.
2 Term Contingency = Looks at an antecedent stimulus and the behavior which follows it.
Operant example: Antecedent (Food is presented) -> Behavior (Eat the food)
ALSO, respondent example: Unconditioned Stimulus (US: Food is presented) -> Unconditioned Response (UR: You salivate)
3 Term Contingency = Looks at an antecedent stimulus, the behavior which follows the antecedent, and the
consequence which follows the behavior.
Antecedent (Food is presented) -> Behavior (Eat the food) -> Consequence (No longer deprived of food)
4 Term Contingency = Looks at the motivating operation, the antecedent stimulus, the behavior which follows the
stimulus, and the consequence which follows the behavior.
Q: What is an MO?
A: An internal or external event which makes a specific stimulus or behavior more/less valuable based on the present situation.
It DOES NOT MEAN reinforcement is actually available.
Scenario: You are driving home from work and smell something delicious. Since it is nearly dinner time, you are hungry. You stop and get food.
• MO: Food deprivation / being hungry (Adds value to getting food)
• Antecedent stimulus: Smelling the food (Signals you are able to get food)
Scenario: You have a fear of public speaking but you have to present a quarterly report to your co-workers and supervisors today. Just as you're walking into the conference room, you see someone else setting up their presentation. Your supervisor asks you if you want to present today or should they let the other person present instead.
• MO: Fear of public speaking (Adds value to avoiding the situation)
• Antecedent stimulus: Given the option to delay your presentation (Signals you are able to avoid the aversive situation)
Scenario: You finished your meal at a restaurant and the waiter asks you if you want dessert. You say "no" because you're full. The waiter brings you the bill and you leave.
• MO: You already ate and are full (Lessens the value of food)
• Antecedent stimulus: Being given the bill (Signals you are able to leave and do something else)
Scenario: You are driving on the highway and notice a gas station with really low gas prices. You look at your gas gauge and see you don't need gas so you continue driving.
• MO: You don't need gas (Lessens the value of getting gas)
• Antecedent stimulus: Seeing low gas prices (Signals you could have spent less on gas)
MOs, SDs, and S-Deltas
Motivating Operations (MO) = are events (internal or external) which make specific stimuli or behaviors more/less valuable based on the
present need.
• Establishing Operations (EO) = a type of motivating operation which makes specific stimuli or behaviors more valuable
based on the present need.
Examples: Deprivation of food makes food more valuable. Seeing a house on fire makes yelling "Fire!" more valuable. Being low on
fuel in your car makes going to the gas station more valuable.
• Abolishing Operations (AO) = a type of motivating operation which makes specific stimuli or behaviors less valuable based
on the present need.
Examples: Food is less valuable after eating. Going to a car dealership is less likely after just buying a new car.
Discriminative Stimuli (SD) = are stimuli (changes in the environment) which signal the availability of reinforcement.
Example: Seeing a gas station sign when you need fuel for your car.
S-Deltas (SΔ) = are stimuli (changes in the environment) which signal the unavailability of reinforcement.
Example: Seeing a “Closed” sign on the door of your favorite restaurant when you are hungry.
Motivating Operation (MO)
• Current Moment effects:
o Establishing Operation (EO): Value-Altering = INCREASES the value of the reinforcer; Behavior-Altering = INCREASES the rate of the Bx
o Abolishing Operation (AO): Value-Altering = DECREASES the value of the reinforcer; Behavior-Altering = DECREASES the rate of the Bx
• Future effects: Function-Altering Effects
Examples:
Value-Altering (EO): When you are hungry, food has high value in the moment
Value-Altering (AO): After you eat, food has low value in the moment
Behavior-Altering (EO): When you are hungry, you take the long way home to get the food
Behavior-Altering (AO): After you eat, you go directly home and do not stop to get food
Function-Altering: You work out at the gym too hard and experience muscle pain, so you engage in behaviors in the current moment to
decrease that pain (take medication). Your taking medication is the behavior-altering effect. In the future, you will reduce the behavior that
led you to the pain in the first place (working out too hard). Your future reduction of working out is a function-altering effect.
Independent Variables: The factor in an experiment which is manipulated while all other factors are held constant. Multiple independent
variables can be manipulated but should be manipulated only one at a time.
Dependent Variables: The factors in an experiment which are measured. Multiple dependent variables can be measured at the same time.
Examples: Rate, Frequency, and Duration of Behavior(s), Rate of Skill Acquisition, Percent of Correct Responding
Functional Relation: When an experiment yields measurements which indicate that a change in the independent variable reliably produces a
specific change in the dependent variable AND the specific change in the dependent variable was not likely a result of other factors
(confounding variables).
Functions of Behavior:
• Attention
• Access to a tangible item
• Access to an activity
• Access to sensory stimulation
• Access to automatic reinforcement
• Access to control of a situation
• Escape or avoidance of social situations
• Escape or avoidance of sensory stimulation
Discriminated Avoidance = A behavior emitted in the presence of a signal which delays or prevents the onset of an aversive
stimulus or condition, which results in reinforcement (avoidance being the reinforcer).
Example: I look at the clock and notice I am going to be late for work. I call my boss and tell him
I’m running late. He said he appreciates the call and I won’t be in trouble. From then on, every
time I’m running late, I call him and avoid getting in trouble.
Free-Operant Avoidance = A behavior emitted at any time prior to the onset of an aversive stimulus or condition, which delays or
prevents the onset of the aversive stimulus or condition and results in reinforcement (avoidance being the reinforcer).
Example: I go to the gas station to get gas for my lawn mower, to avoid running out of gas while
mowing. I didn’t check the mower and don’t actually know if I need gas for it at this very moment
but I know I will need gas at some point. Better to be safe than sorry.
(Answer each question Yes or No)
3. Will the behavior increase the person's access to environments in which other important behaviors can be learned or used?
4. Will the behavior create situations where others will interact with the person in a more appropriate and supportive manner?
7. Is there a desirable or adaptive behavior that will replace the behavior being targeted for reduction or elimination?
8. Does the behavior represent the actual problem/achievement goal (vs. being indirectly related)?
9. Are we targeting the actual behavior of interest and not just the person's verbal behavior(s)?
10. Will the target behavior(s) selected produce the desired results or state (if the goal is not a specific behavior)?
Totals ->
Source: Cooper, J. O., Heron, T. E., Heward, W. L. (07/2012). Applied Behavior Analysis, 2/e [VitalSource Bookshelf version]. Retrieved from
https://bookshelf.vitalsource.com/books/9781256844884
Shaping within response topography class vs Shaping across response topography class
Shaping within response topography class
• Key points: Topography stays the same; a dimension of the behavior is shaped (increase or decrease in duration, rate, magnitude, etc.)
• Example: At first, Jim throws the ball two feet. After a week of practice, Jim now throws the ball five feet. After a month of practice, Jim is able to throw the ball twelve feet.
Shaping across response topography class
• Key points: Topography changes; the behaviors are in the same response class
• Example: At first, Jim throws the ball underhand, but his behavior is shaped to throwing the ball overhand.
[Figure: several different pictures of dogs each evoke the vocal response "Dog"; a picture that is not a dog does not evoke the vocal response "Dog".]
Conclusion: A variety of stimuli within the same class, but not a variety of stimuli which are outside the
class, control the response behavior.
The concept of “dog” has been taught because the student’s response behavior (saying “dog”) is evoked
by multiple stimuli of the same class (generalization) and is not evoked by multiple stimuli of the
different class (discrimination).
Definition Source: Cooper, J. O., Heron, T. E., Heward, W. L. (07/2012). Applied Behavior Analysis, 2/e. Retrieved from
https://bookshelf.vitalsource.com/books/9781256844884
Simple Discrimination:
3-term contingency = Discriminative Stimulus (SD) -> Behavior / Response -> Consequence
Client is given a field of three shapes, containing a red square, a blue triangle, and a green circle.
During the first trial, you present a sample (controlling stimulus), a blue triangle. The client selects the identical blue triangle
from the field (taught behavior). The client receives reinforcement. During the second trial, you then present another sample
(novel stimulus), a red square.
The client does not select the blue triangle, but instead selects the red square from the field. The client receives
reinforcement.
Conditional Discrimination:
4-term contingency = Conditional Stimuli -> Antecedent Stimuli -> Behavior / Response -> Consequence
You have a green light and a red light. You teach the client to select the identical shape to the sample when the green light is on
and to select anything BUT the identical shape to the sample when the red light is on. You present the client with a field of
three shapes, containing a red square, a blue triangle, and a green circle. During the first trial, you turn on the green light and
present the sample, a blue triangle. The client selects the identical blue triangle from the field. The client receives
reinforcement. During the second trial, you turn
on the red light and present the sample, a red square. The client again selects the blue triangle from the field. The client
receives reinforcement.
Data path trend can be determined by calculating the split-middle line / trend line of the data (a flat line indicates no trend). Note: "Gradual" and "steep" can also be used to describe downward trends as well as upward trends.
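The split-middle estimate can also be computed programmatically. Below is a minimal Python sketch (not part of the original packet; the data values are hypothetical) of a simplified split-middle calculation: split the data path in half, find the mid-date and mid-rate of each half, and draw the line through those two points.

# Simplified split-middle trend estimate (hypothetical data, illustrative only)
def split_middle_trend(data):
    """Return (slope, intercept) of a simplified split-middle line through the data path."""
    n = len(data)
    half = n // 2
    first, second = data[:half], data[n - half:]        # ignore the middle point if n is odd

    def mid(segment, offset):
        mid_x = offset + len(segment) // 2               # mid-date (session index) of the half
        mid_y = sorted(segment)[len(segment) // 2]       # mid-rate (median-ish value) of the half
        return mid_x, mid_y

    x1, y1 = mid(first, 0)
    x2, y2 = mid(second, n - half)
    slope = (y2 - y1) / (x2 - x1)                        # change in level per session
    intercept = y1 - slope * x1
    return slope, intercept

# Example: yelling episodes per session across 8 sessions
sessions = [9, 8, 8, 7, 6, 6, 5, 4]
slope, intercept = split_middle_trend(sessions)
print("decreasing" if slope < 0 else "increasing" if slope > 0 else "no trend", round(slope, 2))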
Behavior Contrast
Behavior Contrast = When changes in behavior outside the training environment are the opposite of the changes produced in the training environment.
IMPORTANT NOTE:
Behavior contrast is considered to be a side effect of punishment.
Behavior contrasts typically occur across settings but can also occur across individuals.
Example 1: Accurate: NO (Data points fall far from the true value); Reliable: YES (Data points fall consistently near each other); Valid: YES (Measurement measures what it is intended to measure)
Example 2: Accurate: YES (Data points fall near the true value); Reliable: YES (Data points fall consistently near each other); Valid: YES (Measurement measures what it is intended to measure)
Example 3: Accurate: NO (Data points fall far from the true value); Reliable: NO (Data points do not fall consistently near each other); Valid: YES (Measurement measures what it is intended to measure)
Example 4: Accurate: POSSIBLE (True value cannot be determined with this measurement alone); Reliable: YES (Data points fall consistently near each other); Valid: NO (Measurement doesn't measure what was intended to be measured)
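A minimal Python sketch (hypothetical numbers, not the packet's figures) of the distinction between accuracy and reliability: accuracy compares observations to the true value, while reliability only compares observations to each other.

# Accuracy vs. reliability, illustrative only
from statistics import mean, pstdev

def summarize(observed, true_value):
    accuracy_error = abs(mean(observed) - true_value)    # how far, on average, from the true value
    reliability_spread = pstdev(observed)                # how much the repeated measures disagree
    return accuracy_error, reliability_spread

true_count = 20                     # the behavior actually occurred 20 times
panel_a = [12, 13, 12, 12]          # reliable but NOT accurate: observers agree, but miss the true value
panel_b = [20, 19, 20, 21]          # reliable AND accurate: observers agree and are close to the true value

for name, obs in [("reliable, not accurate", panel_a), ("reliable and accurate", panel_b)]:
    err, spread = summarize(obs, true_count)
    print(f"{name}: mean error from true value = {err:.1f}, spread = {spread:.1f}")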
Details of Reinforcers
Reinforcers = A stimulus that, when presented as a consequence of a specific behavior, increases responding in the future.
Primary reinforcers = Stimuli (Tangible objects, activities, or privileges) which are reinforcing to the
person from birth and do not need to be conditioned, paired, or taught.
Secondary reinforcers = Stimuli (Tangible objects, activities, or privileges) which are reinforcing to the
person but first need to be conditioned, paired, or taught.
Generalized conditioned reinforcers = Stimuli which are used to purchase backup reinforcers. The
stimuli can also have reinforcing value.
Backup reinforcers = Stimuli (primary or secondary reinforcers) which are “purchased” in exchange for
generalized conditioned reinforcers in a token economy.
Unconditioned negative reinforcers = Stimuli which, when removed, serve as reinforcement. Like
primary reinforcers, the removal of the stimulus is reinforcing to the person from birth and does not need
to be conditioned, paired, or taught.
Examples: Shock, Extremely High/Low Temperatures, Loud Noises, Pressure Against the Body, Intense
Light, etc.
Conditioned negative reinforcers = Stimuli which, when removed, serve as reinforcement. Like secondary
reinforcers, the removal of the stimulus is reinforcing to the person but first needs to be conditioned,
paired, or taught.
Identifying Effective Reinforcers
Negative Reinforcement = the removal of a stimulus which increases the future frequency, rate, and/or duration of the behavior.
Example: Bill was bitten by a mosquito and now his arm itches. He applies cream to alleviate the itching. The application of the cream
successfully alleviates the itching; the next time he is bitten by a mosquito he applies the cream immediately.
MO: Bill got bitten by a mosquito… (Adds value to the anti-itch cream)
Antecedent: …now his arm itches…
Behavior: …applies cream to alleviate the itching…
Consequence: …cream successfully alleviates the itching…
Bx Function: Escape the itching
Result: Applies the cream immediately next time
Adding the cream to remove the itch = Negative Reinforcement
Negative Reinforcement
Escape vs Avoidance
Escape When an emitted behavior provides alleviation (an “escape”) from a current aversive stimulus or
condition, which results in an increase in the frequency of the behavior in the future.
Example: You are stuck in traffic on the highway. You take the second exit you see and make it to work
on time. The next time you are stuck in traffic, you take the first exit you see.
Avoidance When an emitted behavior delays or eliminates (to “avoid”) a future anticipated aversive stimulus or
condition, which results in an increase in the frequency of the behavior in the future.
Example: You watch the news before heading to work. The news reports heavy traffic on your typical
highway route. You take the city streets instead. The next day, you take the city streets as well.
Antecedent (A): Aversive stimulus is anticipated (Arriving home with math homework presented)
Behavior (B): Avoidance
Consequence (C): Aversive stimulus is avoided (Math homework is delayed) = Reinforcement
Automatic Reinforcement and Punishment
EXAMPLE: The client is given his favorite toy every time he cleans his room independently.
With this contingency, he now cleans his room more often than before. (Positive Reinforcement)
EXAMPLE: The client's favorite toy is removed whenever he engages in hitting behaviors. With
this contingency, he now is hitting people less often. (Negative Punishment)
EXAMPLE: The client engages in hitting behaviors to get the person's attention. The person
ignores him and he doesn't receive the attention he desires. He now is hitting people less often
because it is no longer providing reinforcement. (Extinction)
NOTE: Behavior is still allowed to occur but maintaining reinforcement is no longer present.
You CANNOT put a behavior which has not been previously reinforced on extinction.
Schedules of Reinforcement
[Figure: schedules arranged along a continuum from reinforcement given most often to reinforcement given least often / not given.]
Basic Schedules: FR, FI, VR, VI
Variations of Basic Schedules: Progressive schedules; Differential Reinforcement of Rates of Responding (DRH, DRL)
Fixed Ratio (FR) and Fixed Interval (FI): both have a *post-reinforcement pause*.
Variable Ratio (VR) and Variable Interval (VI): the number changes or is an average (variable).
Fixed Ratio = Client must emit a set number of correct responses or occurrences of the correct behavior, in order to
receive reinforcement (FR5 = every fifth correct response is reinforced)
• Has post-reinforcement pause
o The larger the ratio, the longer the post-reinforcement pause
o The smaller the ratio, the shorter the post-reinforcement pause
• Produces high rates of responding because the more quickly the client reaches the criteria to earn reinforcement,
the sooner the client receives the reinforcement
Fixed Interval = The client receives reinforcement for the first correct response or first occurrence of the correct
behavior, after a set amount of time has passed. (FI5 = the first correct response after 5 minutes is reinforced)
• Produces slow to moderate rates of responding
Variable Ratio = Client must emit an unspecified number of correct responses or occurrences of the correct behavior, in
order to receive reinforcement
• Number indicates the average number of responses or occurrences of the correct behavior needed in order to receive
reinforcement (VR5 = on average, every 5th correct response is reinforced)
• The strongest basic schedule of intermittent reinforcement
• Produces high rates of responding because the more quickly the client reaches the criteria to earn reinforcement,
the sooner the client receives the reinforcement
Variable Interval = The client receives reinforcement for the first correct response or first occurrence of the correct
behavior, after an unspecified amount of time has passed.
• Number indicates the average amount of time which must pass in order to receive reinforcement (VI5 = after an
average of 5 minutes – could be seconds or hours as well – the next correct response is reinforced)
• Produces slow to moderate rates of responding
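The decision rules for the four basic schedules can be summarized in a short sketch. The Python below is illustrative only (the counters and variable names are hypothetical, not from the packet) and mirrors the FR5/VR5/FI5/VI5 examples above.

# When each basic intermittent schedule delivers reinforcement (illustrative sketch)
import random

def fr_earned(responses_since_last_sr, ratio=5):
    # FR5: reinforce every 5th correct response
    return responses_since_last_sr >= ratio

def vr_earned(responses_since_last_sr, current_requirement):
    # VR5: the requirement is redrawn around an average of 5 after each reinforcer
    return responses_since_last_sr >= current_requirement

def fi_earned(minutes_since_last_sr, responded_correctly, interval=5):
    # FI5: reinforce the FIRST correct response after 5 minutes have passed
    return responded_correctly and minutes_since_last_sr >= interval

def vi_earned(minutes_since_last_sr, responded_correctly, current_interval):
    # VI5: same as FI, but the required interval is redrawn around an average of 5 minutes
    return responded_correctly and minutes_since_last_sr >= current_interval

print(fr_earned(5))                        # True: 5th response on FR5
print(fi_earned(3, True))                  # False: only 3 minutes have passed on FI5
print(vr_earned(4, random.randint(2, 8)))  # depends on the requirement drawn (average of ~5)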
Schedules of Reinforcement:
• Continuous Reinforcement (CRF)
• Intermittent Schedules of Reinforcement (FR, VR, FI, VI)
• Extinction (EXT)
• Compound Schedules of Reinforcement
• Differential Reinforcement of Various Rates of Responding (DRH, DRL, DRD)
Densening a schedule = gradually decreasing the number of responses required in a ratio schedule or gradually decreasing the duration
of the time in an interval schedule (i.e., making reinforcement easier to earn).
Reason to have a denser schedule: Prevent ratio strain issues
Ratio strain = The breakdown in responding that results from making too big of a schedule change when moving from a dense schedule to a thin schedule too quickly.
EXAMPLE: Client is successfully earning reinforcement on a FR5 schedule, so you attempt to thin the schedule by moving to a
FR10 schedule. The client now refuses to work for reinforcement.
The degree to which a thinning schedule change can be made, while avoiding ratio strain, WILL VARY from client to client.
REMEMBER!!!
THINNING an intermittent schedule occurs when the number is INCREASED!
Differential Reinforcement
Differential Reinforcement of Other Behavior(s) (DRO)
• Basic definition: Student can engage in ANY OTHER behavior (appropriate or inappropriate) besides the behavior being targeted to earn reinforcement
• Example: Target behavior: Kicking peers; Other behaviors: Hitting peers, crying loudly, asking to play, doing homework
• Subtypes: 4 (FI-DRO, VI-DRO, FM-DRO, and VM-DRO)
Differential Reinforcement of Alternative Behavior(s) (DRA)
• Basic definition: Student can engage in a SPECIFIC behavior besides the behavior being targeted to earn reinforcement
• Example: Target behavior: Kicking peers; Alternative behavior: Asking to play
• Subtypes: N/A
Differential Reinforcement of Incompatible Behavior(s) (DRI)
• Basic definition: Student can engage in a SPECIFIC behavior, which is incompatible with the behavior being targeted, to earn reinforcement
• Example: Target behavior: Kicking peers; Incompatible behavior: Kicking a ball (can't kick a ball and peers at the same time)
• Subtypes: N/A
Differential Reinforcement of Low Rates of Responding (DRL)
• Basic definition: Student engages in a SPECIFIC behavior at a rate lower than baseline DUE TO INCREASED INTERRESPONSE TIME (IRT), to earn reinforcement
• Example: Decrease the number of bites per minute by increasing the time between bites (Does not eliminate the behavior)
• Subtypes: 3 (Full-Session, Interval, Spaced-Responding)
Differential Reinforcement of High Rates of Responding (DRH)
• Basic definition: Student must engage in a SPECIFIC behavior at a rate higher than baseline to earn reinforcement
• Example: Increase the number of times the student raises their hand during math class
• Subtypes: 2 (Full-Session and Interval)
Differential Reinforcement of Diminishing Rates of Responding (DRD)
• Basic definition: Student must engage in a SPECIFIC behavior at a rate lower than baseline to earn reinforcement (Does not eliminate the behavior)
• Example: Decrease the number of times the student yells out in math class
• Subtypes: 2 (Full-Session and Interval)
IMPORTANT NOTES:
• Replacement behavior(s) should serve the same function as the target behavior(s) they are replacing.
• DRI and DRA could be considered more specific forms of DRO
• DRD and DRL do not eliminate the target behavior. They simply make the behavior occur less frequently.
o DRO, DRA, and DRI do eliminate the target behavior
• DRL lowers the rate of the behavior by increasing the time between responses (increases the inter-response time; IRT)
• DRH and DRD are direct opposites; DRH increases rates of behavior while DRD decreases rates of behavior.
• DNRI and DNRA are similar to DRI and DRA but specifically use negative (the "N" in DNRI & DNRA) reinforcement (escape from a task as an SR−, for
example).
• Reinforcement in DRO is based on the absence of the target behavior AND an interval.
o DRO has several sub-types; which include FI-DRO (Fixed-Interval DRO), VI-DRO (Variable-Interval), FM-DRO (Fixed-Momentary DRO), and VM-
DRO (Variable-Momentary DRO).
o FI, VI, FM, and VM indicate specifically when reinforcement should be given based on an interval
Differential Reinforcement of Low Rates (DRL) = Student engages in a SPECIFIC behavior at a rate lower than baseline DUE TO
INCREASED INTERRESPONSE TIME (IRT), to receive reinforcement
You would use DRL when you want to, or are able to, increase the amount of time between the behaviors.
EXAMPLE: Your client is eating too quickly and often chokes. You want to increase the amount of time between bites. This decreases the
rate of bites per minute. You could use a DRD procedure (setting the criteria to receive reinforcement lower than baseline – eat 5 bites per
minute instead of 10 bites per minute) but the client could still eat those 5 bites during the first 5 seconds of the minute (still too quickly)
and receive reinforcement.
Differential Reinforcement of Diminishing Rates (DRD) = Student must engage in a SPECIFIC behavior at a rate lower than baseline to earn
reinforcement
EXAMPLE: During the baseline condition, your client yells out an average of 8 times in an hour in the classroom setting. You set the criteria
to receive reinforcement at 4 or less yelling behaviors in an hour. Your client yells out only 3 times and receives reinforcement.
You would use DRD with a variety of behaviors. ETHICS WARNING: If you are using DRD with seriously aggressive or dangerous
behaviors, other procedures such as modeling, DRA, DRI, DRO and/or extinction should be used in conjunction with DRD.
DRL Subtypes:
• Full-Session DRL*: Reinforcement is given at the end of the session if the amount/rate of the target behavior during the entire session is equal to or less than the preselected criterion. The preselected criterion is based on the rate observed during baseline.
• Interval DRL*: Reinforcement is given at the end of the interval if the amount/rate of the target behavior during the interval is equal to or less than the preselected criterion. The preselected criterion is based on the rate observed during baseline.
• Spaced-Responding DRL: Reinforcement is given after the target behavior occurs, if the target behavior occurs after a preselected amount of time. The preselected criterion is based on the IRT observed during baseline.
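A minimal Python sketch (hypothetical response times and criteria, not from the packet) of how each DRL subtype decides whether reinforcement has been earned.

# DRL subtype decision rules, illustrative only; times are seconds from session start
response_times = [5, 20, 70, 95, 160]   # e.g., seconds at which the client took a bite

def full_session_drl(times, session_len, max_count):
    """Reinforce at session end if total responses <= the preselected criterion."""
    return len([t for t in times if t <= session_len]) <= max_count

def interval_drl(times, interval_len, session_len, max_per_interval):
    """Reinforce at the end of each interval containing <= the criterion number of responses."""
    earned = []
    for start in range(0, session_len, interval_len):
        count = len([t for t in times if start <= t < start + interval_len])
        earned.append(count <= max_per_interval)
    return earned

def spaced_responding_drl(times, min_irt):
    """Reinforce a response only if it follows the previous response by >= the IRT criterion."""
    return [later - earlier >= min_irt for earlier, later in zip(times, times[1:])]

print(full_session_drl(response_times, session_len=180, max_count=6))               # True
print(interval_drl(response_times, interval_len=60, session_len=180, max_per_interval=2))
print(spaced_responding_drl(response_times, min_irt=30))                            # per-response decisions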
DRO Subtypes
Differential Reinforcement of Other Behavior(s) (DRO) = Client can engage in ANY OTHER
behavior (appropriate or inappropriate) besides the behavior being targeted and earn
reinforcement.
Fixed subtypes (two): FI-DRO and FM-DRO.
Under the Differential Negative Reinforcement of Other Behavior(s) umbrella:
Asking for a break from the difficult task is an alternative behavior to throwing and ripping the work
materials AND allows escape from the present aversive stimulus (the difficult task).
Asking to eat in the classroom is an alternative behavior to yelling (both cannot be emitted at the same
time) AND allows avoidance from an anticipated aversive stimulus (noise in lunch room).
Measurements
Continuous Measurement Procedures (HINT: No breaks in data collection) ***CAN ALSO BE DISCONTINUOUS DEPENDING ON USE***
• Repeatability
• Temporal Locus (Point in time; Locus = "location" in time)
• Temporal Extent (Amount of time; Extent = "amount" of time)
• Duration-per-occurrence (Amount of time of EACH Bx occurrence)
• Celeration (Count per time over time)
Discontinuous Measurement Procedures (HINT: Breaks in data collection)
• Partial Interval Recording (Used to lower rates of Bx; Overestimates Bx count)
• Whole Interval Recording (Used to raise rates of Bx; Underestimates Bx count)
• Momentary Time Sampling (Overestimates OR underestimates Bx count)
Whole Interval Recording (WIR)
• Basic definition: One occurrence of the behavior is recorded only when the behavior occurred for the entire (WHOLE) interval.
• Behavior A (High rate): Rate: Data Collected: 2 of 13 intervals (15%); Actual: 12 of 13 intervals (92%). Duration: Data Collected: 10 of 65 minutes (15%); Actual: 45 of 65 minutes (69%).
• Behavior B (Low rate): Rate: Data Collected: 2 of 13 intervals (15%); Actual: 6 of 13 intervals (46%). Duration: Data Collected: 10 of 65 minutes (15%); Actual: 26 of 65 minutes (40%).
Partial Interval Recording (PIR)
• Basic definition: One occurrence of the behavior is recorded if the behavior occurred at any time during the interval.
• Behavior A (High rate): Rate: Data Collected: 11 of 13 intervals (85%); Actual: 12 of 13 intervals (92%). Duration: Data Collected: 55 of 65 minutes (85%); Actual: 45 of 65 minutes (69%).
• Behavior B (Low rate): Rate: Data Collected: 10 of 13 intervals (77%); Actual: 6 of 13 intervals (46%). Duration: Data Collected: 50 of 65 minutes (77%); Actual: 26 of 65 minutes (40%).
Momentary Time Sampling (MTS)
• Basic definition: One occurrence of the behavior is recorded only when the behavior is occurring at the END of the interval.
• Behavior A (High rate): Rate: Data Collected: 6 of 13 intervals (46%); Actual: 12 of 13 intervals (92%). Duration: Data Collected: 30 of 65 minutes (46%); Actual: 45 of 65 minutes (69%).
• Behavior B (Low rate): Rate: Data Collected: 5 of 13 intervals (38%); Actual: 6 of 13 intervals (46%). Duration: Data Collected: 25 of 65 minutes (38%); Actual: 26 of 65 minutes (40%).
Planned Activity Check (PLACHECK)
• Basic definition: Same procedure as Momentary Time Sampling (MTS), but used to collect data on small groups of students instead of individuals.
• Examples: Is everyone working on their math? Are Jim, Tim, Jill, Bill, Kara, and Sara reading quietly?
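The over/under-estimation pattern above can be reproduced with a short simulation. The Python sketch below uses a hypothetical second-by-second behavior stream (not the packet's 13-interval data) and scores it with WIR, PIR, and MTS rules.

# Comparing WIR, PIR, and MTS on the same (hypothetical) behavior stream
def score_intervals(stream, interval_len):
    intervals = [stream[i:i + interval_len] for i in range(0, len(stream), interval_len)]
    whole = sum(all(chunk) for chunk in intervals)       # WIR: on for the ENTIRE interval
    partial = sum(any(chunk) for chunk in intervals)     # PIR: on at ANY point in the interval
    momentary = sum(chunk[-1] for chunk in intervals)    # MTS: on at the END of the interval
    return len(intervals), whole, partial, momentary

# 60-second observation; behavior occurs during seconds 5-12, 25-27, and 33-48
# (true duration = 27 of 60 seconds, i.e. 45%)
stream = [(5 <= s < 13) or (25 <= s < 28) or (33 <= s < 49) for s in range(60)]
n, wir, pir, mts = score_intervals(stream, interval_len=10)

print(f"True occurrence: {100 * sum(stream) / len(stream):.0f}% of the observation")
print(f"WIR: {wir}/{n}, PIR: {pir}/{n}, MTS: {mts}/{n}")
# Here WIR underestimates (0/6), PIR overestimates (5/6), and MTS lands in between (2/6)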
Sequence Effect = When the client's behavior is influenced in one condition due to the client's
experience with a prior condition.
NOT TO BE CONFUSED WITH: The ripple effect and spillover effect, which are related to or synonymous
terms for generalization across subjects.
Multiple Treatment Interference = When the effects of one treatment on the client's behavior affect
another treatment within the same experiment / design.
• Multiple treatment interference must always be suspected in the alternating treatments design.
• By limiting an alternating treatments design to one treatment condition (preferably the most
effective treatment), we can evaluate the effects of the treatment in isolation.
Spontaneous Recovery = A phenomenon commonly associated with the extinction process is the reappearance of the behavior after it has
diminished to its prereinforcement level or stopped entirely (Cooper, 07/2012, p. 462)
Resistance to Extinction = Behavior analysts refer to continued responding during the extinction procedure as resistance to extinction.
Behavior that continues to occur during extinction is said to have greater resistance to extinction than behavior that diminishes quickly.
Resistance to extinction is a relative concept. (Cooper, 07/2012, p. 463)
Source: Cooper, J. O., Heron, T. E., Heward, W. L. (07/2012). Applied Behavior Analysis, 2/e. Retrieved from
https://bookshelf.vitalsource.com/books/9781256844884
Irreversibility
Cooper says “A situation that occurs when the level of responding observed in a previous phase cannot be reproduced even though the
experimental conditions are the same as they were during the earlier phase” (Cooper, 07/2012, p. 698).
It is a common misconception that "irreversibility" refers to the actual behaviors that, once learned, cannot be 'unlearned'. THIS IS
INCORRECT!! Irreversibility is related to data; the previous level of responding cannot be reproduced even when the earlier conditions are reinstated.
Indirect Assessment = You are collecting data after the fact / You do not “directly” see the behavior occurring
Direct Assessment = There is a “direct” permanent product or you “directly” see the behavior occurring
Indirect Assessments vs. Direct Assessments
Although both types of assessment can provide Behavior Analysts with valuable information, the subjective nature of indirect assessments
makes them less favorable than direct assessments.
• Data is collected only when behaviors of interest are observed.
• Data is collected on occurrences of the behaviors and environmental events in the natural routine during a set period of time.
Question 1: Is there a verbal SD?
• No -> Question 2: Is the antecedent variable an MO or a Nonverbal SD?
o MO -> Mand (Neither Duplic nor Codic)
o Nonverbal SD -> Tact (Neither Duplic nor Codic)
• Yes -> Question 2: Is there point-to-point correspondence?
o No -> Intraverbal (Neither Duplic nor Codic; can have formal similarity)
o Yes -> Question 3: Is there formal similarity?
- Yes (Duplic) -> Echoic, Imitating Signs
- No (Codic) -> Question 4: Written word to spoken word?
- Yes -> Textual (Codic): written word to spoken word
- No -> Transcription (Codic): spoken word to written word
Duplic: Has P-T-P correspondence and FS. Example: hears "Bird" -> says "Bird".
Codic: Has P-T-P correspondence but not FS. Example: sees the written word Bird -> says "Bird".
Based on information found at http://peakinterventions.com/what-are-the-operants/ Created by: William Slusser MS, BCBA, COBA
Point-to-Point Correspondence =
SD and response have the same characteristics/parts AND have two or more components
Formal Similarity =
SD and response have the same delivery method / sense mode
Examples: A-PP-LE and A-PP-LE (3 components), C-A-R and C-A-R (3 components), and CH-E-CK and CH-E-CK (3 components) have point-to-point correspondence; A and A (only 1 component) and DOT vs. DOG (components do not match) do not.
NOTE: Chaining DOES NOT occur if completing the prior action or behavior DOES NOT provide an SD for the next action or behavior.
Stimulus Prompt
Stimulus Prompt: Makes the antecedent stimulus more salient (stand out) which evokes a correct response
Forms: Position, Movement, Redundancy, Within-Stimulus, Extra-Stimulus
Position: Change the position to place the correct response closer to the client to make it more salient.
SD: "Touch Dog"; Prompt: The dog card is placed closer to the client; R: Client touches "DOG"
Movement*: Pointing to, tapping, touching, looking at, or moving the correct response to make it more salient.
SD: "Touch Dog"; Prompt: You tap the dog card to make it stand out; R: Client touches "DOG"
Redundancy: Have one or more stimulus or response dimensions (color, size, shape) paired with the correct response.
SD: "Two plus two equals"; Prompt: You pair the color green with the correct response; R: Client touches "4"
Within-Stimulus: Change the position, size, shape, color, or intensity of the correct response to make it more salient.
SD: "Touch Dog"; Prompt: You make the dog card bigger than the others; R: Client touches "DOG"
Extra-Stimulus: Add something (extra) to the correct response to make it more salient.
SD: "Touch Dog"; Prompt: You add stars to the dog card; R: Client touches "DOG"
IMPORTANT NOTES:
- A prompt is only a prompt if it evokes a correct response.
- A prompt can occur either before or during a behavior or response. A prompt cannot occur after a behavior or response.
- The movement prompt MUST make the antecedent stimulus more salient to be considered a stimulus prompt!! If the
movement prompt is acting on the desired behavior, it is considered a response prompt!!
Stimulus Prompt Removal
Two methods: Fading and Shaping
[Figure: examples of stimulus prompt fading. A Fading Position panel and a Fading Movement panel show the prompt being gradually faded from MOST to LEAST salient across trials; a third panel shows a paired redundancy cue (RED) being gradually removed.]
Response Prompt
Response Prompt: Acts on the behavior or response itself which evokes a correct response
Forms: Verbal (which includes vocal and non-vocal), Modeling, Gestural, and Physical Prompting
Verbal (Vocal): Oral.
SD: "Touch Dog"; Prompt: You say "The one on the left"; R: Client touches "DOG"
Verbal (Non-vocal): Written, Signing, Picture Instructions.
SD: "Touch Dog"; Prompt: You give the client a note saying "on the left"; R: Client touches "DOG"
Modeling: Demonstrate the desired behavior or response.
SD: "Touch Dog"; Prompt: You touch the dog card; R: Client touches "DOG"
IMPORTANT NOTES:
- A prompt is only a prompt if it evokes a correct response.
- A prompt can occur either before or during a behavior or response. A prompt cannot occur after a behavior or response.
- The gestural prompt MUST act on the desired behavior!! If the gestural prompt makes the antecedent stimulus more salient,
it is considered a stimulus prompt!!
Response Prompt Removal
Four methods: Most-to-Least, Least-to-Most, Graduated Guidance, and Delayed Prompting
Planned Ignoring = A non-exclusionary time-out procedure in which the behavior is still allowed to occur
but the social reinforcers such as attention, physical contact, or verbal interaction are briefly
removed.
EXAMPLE: The client often engages in yelling behaviors to gain attention from others. During your
session, anytime the client engages in the yelling behaviors, you ignore him and do not provide the desired
attention.
Prompt Dependence
Prompt dependence: When the subject will not perform a behavior, skill, or activity until they are provided with a prompt.
Occurs when prompts are not systematically faded and reinforcement schedules are not thinned as the behavior, skill, or activity is
acquired.
Prompt dependency should not be confused with prompts which are required to initially teach a behavior, skill, or activity.
The term "prompt dependent" suggests that the individual somehow requires prompts to be able to initiate or continue specific activities
or steps of a task. However, this formulation has two fatal flaws: first, it places the explanation for this "dependence" within the individual
(they are dependent) rather than in the environment (consequences maintaining the behavior). Second, the behavior of concern is not
correctly described as "dependence"; that is not a behavior, rather it is a characterization of how we "feel" about some other behavior. It
is to the actual behavior that the person engages in that we must turn our attention and our analysis. The actions (behaviors) we must concern
ourselves with are the acts of stopping or ceasing the actions that are desired. This should not be confused with the absence of behavior,
as the topography of the response can be specified and may take a number of forms that can be operationally defined.
Source: http://www.behaviorpedia.com/conceptual-issues/prompt-dependence/
Direct Instruction (DI)
Major characteristics/components:
• Involves a carefully designed curriculum based on task analyses.
• Used in teaching small groups.
• Utilizes continuous interactions between the group and the teacher.
• Utilizes fast-paced teaching, scripts, and choral responding of the group.
• When preventing and correcting errors, direct instruction involves graphing errors, prompting the group to use multi-step strategies, and supplying correct answers for discrimination tasks.
• Student learning is the responsibility of the teacher's design and method of delivering instruction.
Discrete Trial Training (DTT)
Major characteristics/components:
• Trials have: SD, prompts, responses (client's behavior), reinforcers to reinforce the client's behavior, and inter-trial intervals (momentary breaks between trials).
• For an incorrect response: The teacher says "No" and ends that trial. On the next trial, the teacher presents the SD and then a prompt.
• Collected data are recorded as percentage correct.
• Teaches discriminated operants.
• DTT is very controlled and restricted.
• "Discrete" meaning trials have well-defined beginnings and endings.
• DTT has four methods of introducing new targets: Mass trial, Block trial, Expanded trial, and Random Rotation.
• Mass trial involves a single SD for the trial (with prompts as needed). Client must reach at least 80% correct responding with a neutral distracter before introducing a new target.
EXAMPLE: Teacher says "Touch ball". Student touches the crayon. Teacher says "No" and ends the trial. The next trial the teacher says "Touch ball" again and points to the ball. The student touches the ball. Over several sessions, the client is responding correctly on 95% of the trials. The teacher then introduces the SD "Touch car".
• Block trial involves presenting a single SD for a specific "block" or group of trials, then another SD, different from the first, is presented for another specific "block" or group of trials.
EXAMPLE: Teacher says "Touch ball". Student touches the ball. The next trial the teacher says "Touch ball" again. Student again touches the ball. For the third trial, the teacher says "Touch car". Student touches the car. For the last trial, the teacher says "Touch car" again. Student again touches the car.
• Expanded trial involves presenting distracter trials between the acquisition trials. Distracter trials should contain mastered targets.
EXAMPLE: Teacher says "Touch ball". Student touches the ball (target in acquisition). The next trial the teacher says "Touch car". Student touches the car (previously mastered target). For the third trial, the teacher says "Touch crayon". Student touches the crayon (previously mastered target). For the last trial, the teacher says "Touch ball" again. Student again touches the ball (target in acquisition).
• Random rotation is similar to expanded trial but distracter trials and acquisition trials are presented at random.
Incidental Teaching
Major characteristics/components:
• Also known as Naturalistic Teaching and In-situ Training.
• Occurs in the natural environment, at any time.
• Involves training loosely by using client-selected reinforcers.
• Involves indiscriminable contingencies. The client is unsure which trials will yield reinforcement and which trials won't.
• Helps facilitate natural generalization.
• Involves using motivating operations (MOs) to teach verbal operants by requiring requesting for items in the natural environment, under naturally occurring contingencies.
Precision Teaching (PT)
Developed by Ogden Lindsley
Major characteristics/components:
• Individualized.
• Utilizes the standard celeration chart; noticeably different from a line graph by its semi-logarithmic scaling of the y-axis.
• Performance of the student is the teacher's responsibility ("The learner knows best").
• Observable and measurable behaviors only. No private events.
• Measures performance by rate and frequency, not percentage correct (hence the use of the standard celeration chart).
• Fluency is measured by collecting data on response accuracy and speed of responding.
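As a rough illustration of "count per time, over time", the Python sketch below (hypothetical rates, not from the packet) computes week-to-week celeration as the multiplier between successive rates, which is what the semi-logarithmic y-axis of the standard celeration chart displays as a straight line when celeration is constant.

# Celeration as a weekly multiplier of rate (illustrative values)
weekly_rates = [4, 6, 9, 13]   # correct responses per minute, one timing per week

celerations = [later / earlier for earlier, later in zip(weekly_rates, weekly_rates[1:])]
for week, c in enumerate(celerations, start=2):
    print(f"Week {week}: x{c:.2f} per week")      # e.g., x1.50 means the rate multiplied by 1.5

# Geometric average celeration across the whole series
overall = (weekly_rates[-1] / weekly_rates[0]) ** (1 / (len(weekly_rates) - 1))
print(f"Average celeration: x{overall:.2f} per week")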
Personalized System of Instruction (PSI)
Developed by Fred Keller
Major characteristics/components:
• Also known as the Keller Plan.
• Created for college classrooms but is also used in high schools.
• Personalized and self-paced.
• Unit mastery criteria typically set at 90%.
• Material is divided into self-contained sections.
• Student cannot continue to the next section without meeting the mastery criteria of the previous section.
• Utilizes written materials alone (no formal lectures to teach acquisition of the material).
• Students in more advanced sections of the material grade and provide feedback to students who are in the previous sections of the material.
• Lectures are used as reinforcers for students who reach a pre-determined mastery criterion of the written materials.
o Lectures do not teach acquisition of the material but are enjoyable and entertaining, and are thus considered reinforcers.
Logic: The opportunity to engage in the high-probability behavior serves as a reinforcer for
engaging in the low-probability behavior.
If the client is unable to watch TV for a month, this, in theory, would increase the value (EO) of
watching TV once they are able to.
MANY ETHICS CONCERNS FOR USING!!!
All behaviors (both low and high-probability) must be in the client's repertoire.
Token economies often include a token loss contingency when rules are broken or when the client engages in inappropriate behaviors.
GUIDELINES FOR TOKEN LOSS: Behaviors involved in the token loss contingency (response cost) should be defined and clearly stated in
the rules when the token economy is introduced. The client should also be made aware of what inappropriate behaviors will yield a loss in
tokens as well as how much each inappropriate behavior will cost (the more severe the inappropriate behavior, the greater the token loss).
Token loss should not be used if the client does not have any tokens to lose. This is because the client should not go into "token debt" (i.e. the
client should always receive more tokens than they lose). "Token debt" is likely to decrease the reinforcement value of the tokens and thus
should be avoided.
Real world / Everyday Example: Money!! Income earned at your job can be exchanged at a later time for backup reinforcers (food, clothing,
transportation, entertainment, vacation, house, etc.).
Level System = A type of token economy where the client can move up or down a series of "levels". The promotion or demotion through the
levels is contingent upon meeting specific performance criteria of the target behaviors. Each promotion to a higher level yields access to
more privileges / more powerful reinforcers AND the expectation to meet more independent performance criteria (the higher the level,
the thinner the schedules of reinforcement; the highest level has schedules of reinforcement which are similar to those in the natural
environment).
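A minimal Python sketch (hypothetical class and backup menu, not from the packet) of a token economy ledger that follows the token-loss guidelines above: response cost never drops the balance below zero, and backup reinforcers are exchanged only when the client can afford them.

# Token economy with a response-cost contingency that avoids "token debt" (illustrative only)
class TokenEconomy:
    def __init__(self, backup_menu):
        self.tokens = 0
        self.backup_menu = backup_menu          # backup reinforcer -> token price

    def earn(self, n):
        """Deliver n tokens contingent on the target behavior."""
        self.tokens += n

    def response_cost(self, n):
        """Remove up to n tokens; never drop below zero (no token debt)."""
        self.tokens = max(0, self.tokens - n)

    def exchange(self, item):
        """Trade tokens for a backup reinforcer if the client can afford it."""
        price = self.backup_menu[item]
        if self.tokens >= price:
            self.tokens -= price
            return True
        return False

economy = TokenEconomy({"5 min tablet time": 3, "small toy": 10})
economy.earn(4)                                # earned for on-task behavior
economy.response_cost(2)                       # lost for a clearly pre-stated rule violation
print(economy.exchange("5 min tablet time"))   # False: only 2 tokens left
economy.earn(3)
print(economy.exchange("5 min tablet time"))   # True: 5 tokens on hand, item costs 3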
Behavior contracts
When developing a behavior contract:
• Choose behaviors which are already in the student’s repertoire
• Choose behaviors which will result in a permanent product
• Choose a reward which will overcome the response effort
• Choose wording which is easily understandable to the client
• Make it complete (no loopholes) but keep it simple
• Choose a reinforcement delay duration which is within the client's ability
• Make the contract written so it can be objectively referenced later
• After introducing it to the client, have them repeat it back (to check for understanding)
• Make the contract available to the client at all times (to promote self-management)
Time-Out Procedures
Contingent Observation (Non-Exclusionary; Room where the behavior occurred)
• Procedure: Student is placed in a location where they can view/observe peers engaging in preferred activities.
• Example: Jim hit another student during inside recess. He was then placed at his desk for the remainder of recess.
Planned Ignoring (Non-Exclusionary; Room where the behavior occurred)
• Procedure: Social reinforcers (attention) are removed by using planned ignoring.
• Example: Jim yelled during reading time to get peers' attention. Everyone ignored his yelling.
Withdrawal of Positive Reinforcement (Non-Exclusionary; Room where the behavior occurred)
• Procedure: Tangible reinforcing items are taken away (the toy gets "time-out").
• Example: Jim throws his favorite toy. The toy is taken away for the remainder of recess.
Time-Out Ribbon (Non-Exclusionary; Room where the behavior occurred)
• Procedure: Student has a ribbon to wear. Ribbon on = can earn reinforcement; ribbon off = cannot earn reinforcement.
• Example: During reading time, Jim wears the ribbon and earns 5 minutes of computer time for every 10 minutes he reads. During recess Jim does not wear the ribbon.
Time-Out Room (Exclusionary; Entirely separate room from where the behavior occurred)
• Procedure: Student is placed in a different room, near the "time-in" environment, for a duration of time.
• Example: Jim hit another student during outside recess. He was then sent to the principal's office, which is near his classroom, for the remainder of recess.
Partition Time-Out (Exclusionary; Separate area of the room where the behavior occurred, with a physical barrier (partition) in place)
• Procedure: Student is placed behind a physical barrier/partition where they cannot view/observe peers engaging in preferred activities.
• Example: Jim hit another student during inside recess. He was then placed in the "time-out" area, where he can't see his peers, for the remainder of recess.
Hallway Time-Out (Exclusionary; Immediately outside the door of the environment where the behavior occurred)
• Procedure: Student is placed in the hallway, immediately outside the door of the room where the behavior occurred.
• Example: Jim hit another student during inside recess. He was then placed in the hallway for the remainder of recess.
Stimuli
Contrived Mediating Stimuli: Stimuli which are used to control the behavior in the teaching environment and are then used in
novel/generalization environments to prompt the same behavior.
Contrived mediating stimuli can be physical objects or people but MUST meet BOTH of the following criteria:
Contingencies
Contrived Contingency: A contingency of reinforcement or punishment which is artificially created in the environment by a behavior analyst
or practitioner and is used to teach skill/behavior acquisition, maintenance, and generalization.
Naturally Existing Contingency: A contingency of reinforcement or punishment which is naturally occurring in the environment. May still be
used to teach skill/behavior acquisition, maintenance, and generalization but without the involvement of a behavior analyst or practitioner.
(Opposite to contrived contingencies)
Reinforcers
Contrived Reinforcers: “Events that are provided by someone for the purpose of modifying behavior.”
– Paul Chance, Learning and Behavior, 7th Edition, 2013
Stimulus Generalization = When similar but novel/untaught stimuli evoke a taught behavior or response
(relaxed stimulus control).
You teach the client to say "apple" when you present a red apple. You then present a green apple; the client still says "apple".
Stimulus (red apple or green apple) -> Response ("APPLE")
Response Generalization = When untaught but functionally equivalent behaviors or responses are emitted instead of the taught behavior
or response
You taught the client to say “Hello” when they arrive for a session. Today when they arrived for a session, the client said “Hi! How are you?”
Stimulus: Seeing you -> Responses: "Hey!", "Hi!", "What's up?"
Overgeneralization = When novel/untaught stimuli (similar or non-similar) evoke a taught behavior or response which is inappropriate or
incorrect in the current situation or environment.
The client has been taught to say "car" when presented with a picture of a car. When the client is presented with a picture of an
apple, the client says "car".
More on Generalization
Setting/Situation Generalization = When the client engages in a target behavior in the presence of a novel environmental stimulus which is
similar but significantly different than the controlling stimulus used during acquisition (i.e. the client engages in previously taught
target behavior in a novel setting or situation).
Generalization Across Subjects = When there are changes in the behavior of other people (who are not directly being treated by the
intervention) due to the treatment contingencies being applied to the client. According to Cooper, Heron, and Heward, "vicarious
reinforcement (Bandura, 1971; Kazdin, 1973), ripple effect (Kounin, 1970), and spillover effect (Strain, Shores, & Kerr, 1976)” are all
related to or synonymous terms for generalization across subjects (Cooper, 07/2012, p. 622).
HOW TO REMEMBER: The BEHAVIOR of one person (or a few people) is being GENERALIZED to others.
EXAMPLE: I have ten people walk into a room. Nine of the ten people “work” for me. The one other person (the subject) doesn’t know
everyone else is working with me. I instruct the nine people to engage in a specific behavior (tapping their finger in a rhythmic pattern).
Soon the subject starts tapping his finger in the same rhythmic pattern.
Generalized Treatment Effects
Program Common Stimuli = Making the instructional/teaching setting similar to the natural setting. The
same SDs should be present in both the instructional/teaching and natural setting
HOW TO REMEMBER: SDs are “Common” between settings.
EXAMPLE: Making the instructional setting look like a store (products, aisles, shelves, cashier, conveyor
belt) and teach the client how to select and purchase items.
Train Loosely = Making minor changes to the environment or intervention while still evoking the taught
behavior.
Changes may be made to people, materials, instructions, prompts, consequences, settings, and time. The
change is ignored by the client (i.e. doesn’t evoke a change in behavior).
HOW TO REMEMBER: Elements of the environment are “loosely” controlled.
EXAMPLE: Varying the order of the tasks for the session.
Multiple Exemplar Training = Varying the antecedent stimuli which evoke the taught behavior.
HOW TO REMEMBER: “Multiple” stimuli examples are provided.
EXAMPLE: Saying “Hi”, “Hello”, or “How’s it going?” and the client says “Hi!”
Mediation = Having people other than the person who was present during acquisition, maintain the
behavior.
ETHICAL WARNING: YOU, the BCBA, are still responsible for properly training and ensuring the
intervention is being used correctly by the “other people”.
HOW TO REMEMBER: You act as a “mediator” (Someone who acts as a link between parties)
EXAMPLE: Training and having the client’s teacher use a token economy, which is being utilized in the
instructional setting, in the classroom setting.
Negative Teaching Examples = Providing the client with settings and conditions in which the behavior
should not be emitted. Also strengthens discrimination skills.
HOW TO REMEMBER: “Do not do” settings and conditions are provided.
General Case Analysis = Using differing stimuli in the client’s environment and teaching a variety of
appropriate responses/behaviors
HOW TO REMEMBER: “General”/Overall generalization
EXAMPLE: Teaching the client the differences between many different TV remotes AND teaching the
correct sequence of behaviors for changing the channel with each different remote.
Alternating Treatments Design / Concurrent Schedule Design / Multielement Design / Multiple Schedule Design
(Cooper, 07/2012, p. 689)
High-probability (high-p) Request Sequence / Interspersed Requests / Pretask Requests / Behavioral Momentum.
(Cooper, 07/2012, p. 697)
Source: Cooper, J. O., Heron, T. E., Heward, W. L. (07/2012). Applied Behavior Analysis, 2/e [VitalSource Bookshelf
version]. Retrieved from https://bookshelf.vitalsource.com/books/9781256844884
Paired Stimulus Assessment Procedure / Forced Choice Procedure / Paired Choice Procedure (Miltenberger, p. 291)
Source: Miltenberger, R. G. (2011). Behavior Modification: Principles and Procedures, 5th Edition [VitalSource
Bookshelf version]. Retrieved from https://bookshelf.vitalsource.com/books/9781285311012
This file and its content are property of William Slusser MS, BCBA, COBA. This file and its
content were initially distributed free of charge, but please ask for permission before further
reproduction or distribution ([email protected]). THANKS!