Question and Answer

  


______________________________________________________________________________

Pierce and Cheney (2017): ‘Focus on Generality’ section (pages 150–152) and pages 155–177 (the end of the chapter)

***Disclaimer: Chapter 5 of the Pierce and Cheney text was assigned in C&P I, so prior objectives related to basic schedules are considered expected, critical, and foundational knowledge (that is, they are ‘fair game’ for quizzes).

1. Describe why organisms that have vocal verbal behavior (language) and organisms that have a learning history of responding on ratio schedules do not typically produce scallop patterns on FI schedules. 


2. State the definition of a progressive-ratio schedule of reinforcement; be sure to include what is meant by the term ‘breakpoint’ in your definition. Give a novel example (‘novel’ here means one not discussed in class). 

3. Describe how progressive-ratio schedules of reinforcement can be used to assess reinforcer efficacy, and give an example provided by the text or provide your own. 

4. When given graphed data, state whether a steady state or a transition state is occurring. 

5. State the definition of ratio strain and what variable controls it. 

6. Provide an example given in the text of ratio strain or provide your own. 

7. Critical refresher: What is interresponse time (IRT)? 

8. Critical refresher: How is an operant defined? 

9. Describe the differences in IRTs observed on FR and VI schedules. 

10. State the definition of a molecular account of schedule performance

11. State why, according to the molecular view, there are IRT differences between ratio and interval schedules

12. State the definition of a molar account of schedule performance

13. State why, according to the molar view, there are IRT differences between ratio and interval schedules

14. State that low rates of responding contact molecular contingencies and generate longer IRTs. High rates of response contact molar contingencies between rate of response and rate of reinforcement. 

15. State the definition of a postreinforcement pause (PRP) and the schedules that produce them. 

16. State the definition of an interreinforcement interval (IRI)

17. State what controls the PRP on FI schedules. How long would the PRP be on an FI 30-second schedule? 

______________________________________________________________________________

Schlinger, Derenne, and Baron (2008)

18. Describe what is meant by a break and run response pattern 

19. State 2 reasons why researchers are interested in pausing on FR schedules

20. What four variables affect the length of the pause on FR schedules? 

21. What two variables influence pausing on progressive-ratio schedules? 

22. Describe why some researchers would conclude that the postreinforcement pause would be better referred to as the preratio pause or between-ratio pause 

23. State that PRPs are influenced by relative changes between worsening or improving reinforcement conditions such that longer pauses occur when the upcoming ratio is higher, requires more response effort, or produces a lower magnitude of reinforcer 

24. State that when pauses are observed on VR schedules they are controlled by the same variables as FR schedules (ratio size and reinforcer magnitude) 

25. Describe why it is hypothesized that some research has shown pauses on VR schedules but other research has not. 

26. State that research with humans has shown that humans pause on FR schedules and pause very little on VR schedules 

27. State two ways that procrastination (pausing) can be reduced 

What 50 Years of Research Tell Us About Pausing
Under Ratio Schedules of Reinforcement

Henry D. Schlinger
California State University, Los Angeles

Adam Derenne
University of North Dakota

Alan Baron
University of Wisconsin–Milwaukee

Textbooks in learning and behavior commonly describe performance on fixed-ratio schedules as "break and run," indicating that after reinforcement subjects typically pause and then respond quickly to the next reinforcement. Performance on variable-ratio schedules, on the other hand, is described as steady and fast, with few long pauses. Beginning with Ferster and Skinner's magnum opus, Schedules of Reinforcement (1957), the literature on pausing under ratio schedules has identified the influences on pausing of numerous important variables, in particular ratio size and reinforcement magnitude. As a result, some previously held assumptions have been called into question. For example, research has shown that the length of the pause is controlled not only by the preceding ratio, as Ferster and Skinner and others had assumed (and as implied by the phrase postreinforcement pause), but by the upcoming ratio as well. Similarly, despite the commonly held belief that ratio pausing is unique to the fixed-ratio schedule, there is evidence that pausing also occurs under variable-ratio schedules. If such widely held beliefs are incorrect, then what about other assumptions? This article selectively examines the literature on pausing under ratio schedules over the past 50 years and concludes that although there may indeed be some common patterns, there are also inconsistencies that await future resolution. Several accounts of pausing under ratio schedules are discussed along with the implications of the literature for human performances, most notably the behaviors termed procrastination.

Key words: fixed-ratio schedule, variable-ratio schedule, postreinforcement pause, preratio pause, animal models, procrastination

2007 marked the 50th anniversary of the publication of Charles Ferster and B. F. Skinner's magnum opus, Schedules of Reinforcement (1957), which reported the results of experiments carried out under contracts to the Office of Naval Research with Harvard University between 1949 and 1955. Although Skinner had previously discussed periodic reconditioning (what was later to become the fixed-interval [FI] schedule) and fixed-ratio (FR) reinforcement (he hadn't yet used the term reinforcement schedule) in The Behavior of Organisms (1938), it wasn't until the publication of Schedules of Reinforcement that he (and Ferster, working as a research fellow under Skinner's direction) distinguished several simple and complex schedules based on nonhuman (pigeons and rats) performances. In addition, they investigated the effects of many different types of those schedules along with a variety of other variables (e.g., drugs, deprivation level, ablation of brain tissue, added counters, etc.) that were described in terms of rate of response and depicted on cumulative records. Their research documented the power of schedules of reinforcement to control behavior and established the study of schedules as a focus within the experimental analysis of behavior. Ferster and Skinner's research painted a detailed picture of the performances of the subjects as seen on a response-by-response basis. However, their wide-ranging effort did not attempt to provide systematic information about the effects of parametric variations across conditions and subjects. It remained for subsequent researchers to fill in the gaps.

The Behavior Analyst, 2008, 31, 39–60 (No. 1, Spring)

Correspondence concerning this article can be addressed to Hank Schlinger, Department of Psychology, California State University, Los Angeles, 5151 State University Dr., Los Angeles, California 90032 (e-mail: hschlin@calstatela.edu); Adam Derenne, Department of Psychology, Box 8380, University of North Dakota, Grand Forks, North Dakota 58202 (e-mail: adam.derenne@und.nodak.edu); or Alan Baron, Department of Psychology, University of Wisconsin–Milwaukee, Milwaukee, Wisconsin 53201 (e-mail: ab@uwm.edu).

Fifty years and innumerable experiments later, it is common for textbooks on learning to illustrate FR performances with cumulative records and to describe the resulting patterns as "break and run" (see Figure 1). As explained by Lattal (1991), "following a period of nonresponding (a break, more precisely the postreinforcement pause) after food delivery, there is a relatively quick transition to a high steady rate of responding (a run) that is maintained until the next food presentation when the pattern repeats" (p. 95). FR performances are often contrasted with those under variable-ratio (VR) schedules, in which the size of the ratios varies within the schedule. Under VR schedules, performances are "characterized by high response rates with little systematic pausing either after reinforcement or at other times" (p. 95).[1]

Among the myriad response patterns observed under various schedules of reinforcement, pausing under ratio schedules (especially FR schedules) has attracted special attention. Unlike the pause that follows reinforcement on interval schedules, the pause that follows reinforcement on ratio schedules reduces overall reinforcement rates. Because reinforcement rates under ratio schedules depend strictly on response rates, optimal performance would be for subjects to resume responding immediately. Yet they pause, and the resultant loss of reinforcement persists over extended exposure to the schedule without diminution. The critical question, then, is: Why would an animal pause when that very action delays reinforcement and reduces overall reinforcement rate? One possibility is that the animal is fatigued after working so hard. For example, pauses are generally shorter under FR 10 than under FR 100. Because the FR 100 involves more work, it is possible that subjects rest longer before resuming work. Another possibility is that food consumption creates satiation, which weakens the motivational operation of food deprivation. However, we will see that neither of these seemingly reasonable explanations survives simple tests and that the picture is much more complicated.

Ratio pausing within the laboratory is also of special interest because it resembles the human problem of procrastination: In both cases, an action is put off even though the resulting delay may be disadvantageous. Social commentators have noted that procrastination is a major contributor to behavioral inefficiency in schools, industry, and our daily lives in general (Steel, 2007). Identification of the variables that control the ratio pause in the laboratory may help to reveal techniques for the modification of procrastination.

Our primary purpose in this article is to identify the variables that govern pausing under ratio schedules. Research on pausing under ratio schedules has come a long way since Ferster and Skinner's (1957) pioneering contributions. They set us on our way, but in the ensuing 50 years increasingly sophisticated questions and experimental designs have revealed a more complex picture. A fundamental issue in research on ratio schedules, then, concerns how pausing should be summarized and described. In the present article we review the research on pausing under ratio schedules of reinforcement in an attempt to glean an understanding of why pausing under such schedules occurs. We first take a look at research on pausing with simple FR schedules, then consider research with complex (multiple and mixed) schedules with FR components, and, finally, look at pausing under VR schedules. We then discuss the real-world implications of ratio pausing, in particular, the bearing of laboratory research on the pervasive social problem of human procrastination. Finally, we offer a set of conclusions that might shed light on the theoretical mechanisms that govern pausing under ratio schedules.

[1] A pause is usually defined as the time from the delivery of a reinforcer until emission of the first response of the subsequent ratio and has been variously referred to as a postreinforcement or preratio pause. Although there is evidence that pauses will sometimes occur after the first response (Ferster & Skinner, 1957; Griffiths & Thompson, 1973; Mazur & Hyslop, 1982), we use the terms pause and pausing to refer to the postreinforcement or preratio pause.

PAUSING UNDER FR SCHEDULES

Skinner (1938) first described the performances of rats under FR and FI reinforcement as including a pause that was under the stimulus control of the previous reinforcer:

In both types of experiment the discrimination from the preceding reinforcement is active, since one reinforcement never occurs immediately after another. A reinforcement therefore acts as an SD in both cases. As the result of this discrimination the rat stops responding for a short period just after receiving and ingesting a pellet of food. (p. 288)

Thus, from the very beginning of his research on FR schedules, Skinner assumed that the pause was a function of the preceding reinforcer. Consequently, the phrase postreinforcement pause came to connote not only the pause after reinforcement but also the pause controlled by reinforcement (Griffiths & Thompson, 1973).

[Figure 1. Cumulative records showing performances under FR 90 and VR 50 schedules of reinforcement (Ferster & Skinner, 1957). Reproduced from Figure 2 (Lattal, 1991) and reprinted with permission of Elsevier Science Publishers.]

Skinner did acknowledge that there were "considerable individual differences" in the length of pausing produced by FR schedules. For example, in describing performance under an FR 192, he wrote,

At one extreme the pause after ingestion may be relatively great and the subsequent acceleration to a maximal or near maximal rate very rapid. … At the other extreme the pause is brief, but the rate immediately following it is low and accelerated slowly. (1938, pp. 289–290)

Recognition of such irregularities also can be found in Morse and Dews' (2002) foreword to the reprinting of Schedules of Reinforcement:

If one leafs through the pages of any chapter, there are clearly differences in the uniformity and reproducibility of performances under a particular type of schedule. Some of these differences in performances come from the continuing technical improvements in the design of keys and feeders, others from differences in the past experiences of subjects before exposure to the current contingencies or from the duration of exposure to current conditions, and sometimes from differences between subjects treated alike. (p. 315)

Inspection of the cumulative records reproduced in Ferster and Skinner's compendium confirms that despite impressive commonalities in performances, there also are unaccounted-for differences both within the performances of the same subject and between those of different subjects. These two facets of Skinner's research (regularities in performance sometimes accompanied by unaccountable variation) are not necessarily at odds. Individual differences should serve as a prod for identifying the variables that control the differences, thus strengthening conclusions about the commonalities.

Building on Skinner's (1938) and Ferster and Skinner's (1957) findings, subsequent research has identified a number of variables that affect the length of the FR pause. These include the size of the ratio (e.g., Felton & Lyon, 1966; Powell, 1968), the amount of response effort (Alling & Poling, 1995; Wade-Galuska, Perone, & Wirth, 2005), the magnitude of the reinforcer (e.g., Lowe, Davey, & Harzem, 1974; Perone & Courtney, 1992; Powell, 1969), the probability of reinforcement (Crossman, 1968; McMillan, 1971), and the level of deprivation (Malott, 1966; Sidman & Stebbins, 1954). One could say that, in general, the duration of the pause increases as a function of variables that weaken responding. These include increases in ratio size and response effort and decreases in reinforcer magnitude, reinforcement probability, and degree of deprivation. Although all of these variables likely interact in complex ways, for present purposes we restrict our discussion to the two most researched: ratio size and reinforcement magnitude.

Effects of FR Size on Pausing

In their chapter on FR schedules, Ferster and Skinner (1957) presented cumulative records of pigeons transitioning from FR 1 or low FR schedules to higher FR schedules, as well as cumulative records of final performances on FR 120 and FR 200 schedules. Although there is some within-subject and between-subjects variability, the cumulative records show that performances by pigeons on ratios as high as FR 60 are characterized by brief pauses. The final performances of 2 birds on higher FR schedules (FR 120 and FR 200) reveal some longer pauses (although at these ratios the variability is even greater), prompting the conclusion that pause length increases with FR size.

Subsequent experiments provided support for the finding that long pauses become more frequent with increases in FR size. In the first parametric study of FR pausing, Felton and Lyon (1966) exposed pigeons to schedules ranging from FR 25 to FR 150. Their results showed that the mean pause duration increased systematically as a function of FR size. A cumulative record from 1 bird reveals consistently brief pauses at FR 50, very much like the performance of Ferster and Skinner's pigeons at similar FR sizes. By comparison, the record for FR 150 shows some longer pauses but many short ones as well.

Felton and Lyon's (1966) results were replicated by Powell (1968), who, beginning with FR 10, moved pigeons through small sequential changes in FR size up to FR 160 and then back down to FR 10. His results showed that mean pausing increased as an accelerating function of FR size. Powell provided a detailed analysis of this effect by including a frequency distribution of individual pauses that showed that as ratio size increased, the dispersion of pauses from shorter to longer also increased. Nevertheless, his data showed a relatively greater number of shorter pauses, even at FR 160. These data supply additional evidence that as FR size increases, longer pauses become more frequent. But they also demonstrate that shorter pauses still predominate, a fact that Ferster and Skinner also noted. Taken together, Felton and Lyon's and Powell's results confirm the standard description of performance under ratio schedules as break and run, even though Powell's distribution data and Felton and Lyon's cumulative records show that the majority of pauses were unaffected by increases in ratio size.

One problem with the Felton and Lyon (1966) and Powell (1968) experiments was that the main findings were summarized as functions showing the relation between ratio size and the mean pause, which, in retrospect, we know can obscure important variations in performance and, thus, tell only part of the story of pausing under FR schedules (Baron & Herpolsheimer, 1999).

Effects of Reinforcement Magnitude on Pausing

Ferster and Skinner (1957) did not study the effects of reinforcement magnitude on pausing per se, although they did report that pausing increased when the magazine hopper became partially blocked. Subsequent studies revealed that variations in reinforcement magnitude can have important effects on FR pausing. The results, however, have been conflicting. Powell (1969), in what perhaps was the first systematic study of magnitude, varied pigeons' access to grain using two durations (either 2.5 or 4 s), each correlated with a different colored key light. In different phases, FR values ranged from 40 to 70. His results showed an inverse relation between reinforcement magnitude and pausing. Longer pauses occurred with the shorter food duration, especially at the higher ratios, indicating that magnitude and ratio size interact to determine the extent of pausing. Interestingly, Powell suggested that pause durations could be controlled or stabilized at different FR requirements by manipulating reinforcement magnitudes. In essence, Powell's findings indicated that increasing the magnitude of reinforcement could mitigate the effects of increased ratio size on pausing.

Lowe et al. (1974) obtained different results in an experiment in which rats, performing on an FR 30 schedule, were exposed to a sweetened condensed milk solution mixed with water that varied in concentration from 10% to 70%. Contrary to Powell's (1969) report, Lowe et al. found that the length of the pause was a direct function of the magnitude of reinforcement, and they attributed the results to an unconditioned inhibitory aftereffect of reinforcement. There are a number of procedural differences between the studies by Powell (1969) and Lowe et al. (1974) that might account for the opposite results, including the species of the subject (pigeons vs. rats), the definition of magnitude (duration of access to grain vs. milk concentration), the presence of discriminative stimuli (multiple schedule vs. simple schedule), and the procedure of varying the concentration from one delivery to the next within the same session (within- vs. between-session procedures). It is also noteworthy that the literature on the effects of magnitude on pausing under FR schedules has included carefully done studies in which magnitude effects were absent (e.g., Harzem, Lowe, & Davey, 1975; Perone, Perone, & Baron, 1987).

In a discussion of these and other studies, Perone et al. (1987; see also Perone & Courtney, 1992) proposed an account that relies on an interaction between both inhibitory and excitatory control. Specifically, ratio performance is assumed to reflect unconditioned inhibition from the previous reinforcer and excitation from stimuli correlated with the upcoming reinforcer. In Powell's (1969) study, in which increased magnitude reduced pausing, the excitatory stimuli were dominant because the multiple schedule provided cues that were correlated with the magnitude of the upcoming reinforcer. By comparison, in the Lowe et al. (1974) study, such stimuli were absent because the concentration level varied in an unpredictable manner, and the dominant influence was from the aftereffect of the prior reinforcer.

Pausing Under Progressive-Ratio Schedules

Our discussion of ratio pausing has centered on simple FR schedules (i.e., a procedure in which the schedule parameters are unchanged within sessions). A variant is the progressive-ratio (PR) schedule of reinforcement, a schedule in which the ratio increases in a series of steps during the course of the session until the ratio becomes so high that the subject stops responding (the so-called breaking point). The PR schedule was originally developed by Hodos (1960) as a way of assessing the value of a reinforcer using the breaking-point measure. However, the cumulative records provided in some reports (see Hodos & Kalman, 1963; Thomas, 1974) showed that pauses increased in duration as the ratio size increased, a finding that bears obvious similarities to the relation between FR size and pausing.
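The PR contingency just described can be sketched in a few lines of code. This is a minimal illustration, not the procedure from any study cited here: the arithmetic step size, the pause limit, and the toy rule that the simulated preratio pause grows linearly with the upcoming ratio are all assumptions made for the example.

```python
# Minimal sketch of progressive-ratio (PR) scheduling logic.
# Assumed for illustration: arithmetic steps, and a deterministic toy
# "subject" whose preratio pause grows with the upcoming ratio.

def run_pr_session(step=5, pause_limit=60.0, pause_per_response=0.5):
    """Step the ratio requirement upward until the simulated pause
    exceeds pause_limit; return the last completed ratio."""
    ratio = step
    last_completed = 0
    while True:
        # Toy rule: the preratio pause grows with the upcoming ratio.
        pause = pause_per_response * ratio
        if pause > pause_limit:      # subject "stops responding"
            return last_completed    # the breaking point
        last_completed = ratio       # ratio completed, reinforcer delivered
        ratio += step                # next ratio is one step higher

breakpoint_ratio = run_pr_session()
print(breakpoint_ratio)  # 120 with these toy parameters
```

The session ends at the first ratio whose simulated pause exceeds the limit, and the last completed ratio serves as the breaking-point measure in the sense Hodos (1960) proposed for assessing reinforcer value.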

More recently, quantitative analyses of PR pausing have shown that the rate of increase from ratio to ratio can be described as an exponential function of ratio size throughout most of the range of ratios (Baron & Derenne, 2000; Baron, Mikorski, & Schlund, 1992). A similar relation can be discerned in FR pausing as a function of ratio size (e.g., Felton & Lyon, 1966). However, unlike FR schedules, the PR function steepens markedly during the last few ratios prior to the final breaking point, a finding perhaps not unexpected, given that PR performances ultimately result in extinction.
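An exponential pause-ratio relation of the kind reported above can be made concrete with a hypothetical pause function; the coefficients `a` and `b` below are invented for the sketch and are not fitted values from any of the studies cited.

```python
import math

def pr_pause(ratio, a=1.0, b=0.02):
    """Hypothetical exponential pause-ratio function: a * exp(b * ratio).
    Coefficients are illustrative, not empirical estimates."""
    return a * math.exp(b * ratio)

# Under an exponential relation, equal increments in ratio size multiply
# the predicted pause by a constant factor rather than adding to it.
growth_factors = [pr_pause(r + 25) / pr_pause(r) for r in (25, 50, 75, 100)]
print(growth_factors)  # every factor equals exp(b * 25), a constant
```

The constant multiplicative growth is what distinguishes an exponential pause-ratio function from a linear one, and it is consistent with the observation that pausing accelerates at the highest ratios.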

Along with ratio size, reinforcement magnitude plays a parallel role in PR and FR schedules. Baron et al. (1992) found that the slope of the PR pause-ratio functions decreased with increases in the concentration of sweetened milk reinforcement, thus signifying lesser degrees of pausing. These results show that the PR schedule provides a convenient way to study variables that influence ratio pausing. A complicating factor, however, is that the contingencies embedded in the progression of increasing ratio sizes may also influence performances, particularly at the highest ratios (cf. Baron & Derenne, 2000).


Pausing Within Multiple FR FR Schedules

Identifying the variables that control pausing on simple FR schedules is difficult because such schedules hold fixed characteristics of the ratio, such as size and reinforcement magnitude. Under such schedules, the obvious place to look for the variables that control pausing is the just-completed ratio, as originally suggested by Skinner and subsequent researchers. Perhaps, as mentioned previously, the subject is pausing to rest from its labors, or perhaps the just-delivered food has reduced its deprivation level. The interpretive problem posed by simple-schedule data is that pausing could just as easily be attributed to the upcoming ratio as to the preceding ratio. The solution to this problem is to use multiple schedules with FRs in both components.

As explained by Ferster and Skinner (1957), multiple schedules are compound schedules in which two or more simple schedules of reinforcement, each correlated with a different stimulus, are alternated. Ferster and Skinner presented data on multiple FR FR schedule performances (e.g., FR 30 FR 190). However, these data did not clarify the nature of control over pausing because each of the two components was in effect for entire sessions, making it difficult to assess the contrast between the components. What was needed was a method with two (or more) FR components so that the characteristics of the two ratios could be varied independently. This essentially was the strategy followed by subsequent researchers (e.g., Baron & Herpolsheimer, 1999; Griffiths & Thompson, 1973).

For example, Griffiths and Thompson (1973) observed rats responding under several two-component multiple FR schedules (FR 20 FR 40, FR 30 FR 60, and FR 60 FR 120) that were programmed in semirandom sequences. Mixed schedules (i.e., schedules without correlated stimuli) provided control data. The results showed that prolonged pausing systematically occurred before the high ratios on the multiple schedules but not on the mixed schedules.

Griffiths and Thompson (1973) noted that in mixed or multiple schedules with unpredictable alternation between two ratio components of different lengths, there are four possible occasions for pausing: after a high ratio and before a high ratio (high-high); after a low ratio and before a high ratio (low-high); after a high ratio and before a low ratio (high-low); and after a low ratio and before a low ratio (low-low). The researchers presented data on the relative frequency distributions of pause durations for all four possible ratio combinations. The results showed that the longest pauses (30 s or greater) occurred most frequently before high ratios on the multiple schedules but not the mixed schedules. Moreover, in the multiple schedules, the frequency of longest pauses was greater before the low-high transition than before the high-high transition, suggesting to Griffiths and Thompson that "pausing in ratio schedules is largely a function of the relative size [italics added] of the upcoming ratio" (p. 234). The implication is that the preceding ratio and stimuli correlated with the upcoming ratio interacted to determine the length of the pause. These results led the researchers to suggest that although the term postreinforcement pause could still be used descriptively to refer to the pause after reinforcement, a more functionally appropriate term might be preratio pause, or the even more neutral between-ratio pause.

Baron and Herpolsheimer (1999) replicated many features of the study by Griffiths and Thompson (1973), but they also uncovered a critical analytic problem. On average, pauses increased in duration as the ratio increased, and pausing was more a function of the upcoming ratio than the preceding ratio. However, the authors addressed a feature of the results that Griffiths and Thompson had not acknowledged: The distributions of individual pauses were positively skewed, and changes in average performance were due more to increased skew than to shifts of the entire distribution. In other words, more pauses tended to be of relatively short duration than relatively long duration, and this occurred even when relatively high ratios (e.g., 150) were employed.

Discrepancies between measures of central tendency and the individual values within the distribution raise a fundamental issue in the study of pausing: Should the individual pause serve as the unit of analysis, or should the distribution of pauses be aggregated into a single value? Behavior analysts have traditionally viewed the practice of aggregating data from different subjects with suspicion because the average from a group of individuals may not provide a satisfactory picture of any member of the group (Sidman, 1960). By comparison, methods that characterize an individual's performance through an average of data from that individual are commonly regarded as acceptable. However, the results from studies on pausing suggest that an average based on within-subject aggregation may not provide a satisfactory picture of individual performances either. In particular, the use of the mean may exaggerate the degree of pausing and conceal the fact that often the subject pauses very little, if at all. These questions about ways of aggregating individual responses underscore an important issue raised previously: Although mean pausing increases as a function of ratio size, a significant number of ratios are accompanied by brief pauses. The origin of this mix of long and short pauses remains to be discovered.

As noted earlier, pausing under multiple FR FR schedules depends on the transition from the preceding to the upcoming reinforcement. The research design exemplified by Griffiths and Thompson (1973) originally was directed toward pausing as a function of consecutive ratios that differed in size. Subsequently, Perone and his students investigated transitions between ratios that varied in terms of reinforcer magnitude (Galuska, Wade-Galuska, Woods, & Winger, 2007; Perone, 2003; Perone & Courtney, 1992) and response effort (Wade-Galuska et al., 2005). As was the case when different-sized FR schedules were contrasted, pausing was longest following a transition from a large to a small reinforcer or, in the case of response effort, a transition from a low to a high response-force requirement. By comparison, pauses were relatively short when the transition was from a small to a large magnitude or from a high to a low response force. Taken as a whole, these findings suggest that pausing is affected by the relative "favorableness" of the situation, with pauses becoming most pronounced when the transition is from a more to a less favorable contingency (Wade-Galuska et al., 2005).

We noted that pauses tend to be longest when the upcoming ratio is higher, requires more effort, or produces smaller reinforcement magnitudes, but that pausing may also be influenced by features of the previous ratio. Harzem and Harzem (1981) have summarized the view that pausing is due to an unconditioned inhibitory aftereffect of reinforcement. Such an effect should dissipate with the passage of time, depending on the magnitude of the just-completed reinforcer. One manipulation that can clarify the extent of the inhibitory effect is to include a period of time-out between the delivery of the reinforcer and the start of the next ratio. Several studies have confirmed that pausing is indeed reduced when such time-out periods are interposed on FR schedules (Mazur & Hyslop, 1982; Perone et al., 1987) and PR schedules (Baron et al., 1992).

Pausing Within Mixed FR
FR Schedules

Multiple FR FR schedules provide
insight into the degree to which
pausing is controlled by upcoming,
as opposed to preceding, contingen-
cies. Because each component is
correlated with a distinct stimulus,
the discriminative control exerted by
the upcoming ratio can be manipu-
lated within a session. By compari-
son, mixed FR FR schedules, that is,
schedules in which the FR contin-
gencies vary within the session, lack
discriminative stimuli associated with
upcoming ratios that might exert
control over responding. Conse-
quently, control of pausing appears
to be limited to characteristics of the
previous ratio. When Ferster and
Skinner (1957) compared mixed FR
FR schedules with markedly different
ratio sizes (e.g., FR 30 and FR 190),
they found that pausing occurred
within the ratio rather than during
the transition from one ratio size to
the next. Specifically, during FR 190,
pausing occurred within the ratio
after approximately 30 responses
(the size of the lower ratio). Ferster
and Skinner called this within-ratio
pausing ‘‘priming,’’ stating that, ‘‘the
emission of approximately the num-
ber of responses in the smaller ratio
‘primes’ a pause appropriate to the
larger ratio’’ (p. 580).

Several subsequent experiments
have provided quantitative analyses
of priming (e.g., Crossman & Silver-
man, 1973; Thompson, 1964). For
example, Crossman and Silverman
presented detailed data using proce-
dures that varied the proportion of
FR 10 to FR 100 components within
a mixed schedule. When most of the
ratios were FR 10, pausing occurred
chiefly after reinforcement (a finding

consistent with simple FR ratios).
However, priming emerged with in-
creases in the proportion of high
ratios. Once again, subjects paused
within the higher ratio after emission
of the approximate number of re-
sponses in the lower ratio. Taken as a
whole, the phenomenon of priming
within mixed schedules points to the
subject’s high degree of sensitivity to
the size of the FR. When the
upcoming ratio is higher than the
preceding one, and this difference is
signaled by correlated stimuli (multi-
ple schedule), pausing after reinforce-
ment is the rule. When, however,
experimenter-controlled stimuli are
not available (mixed schedule), a
pause commensurate with the lower
ratio appears within the higher one.
It appears, then, that animals can
discriminate the size of the ratio
based on their own responding
(Thompson).

PAUSING UNDER
VR SCHEDULES

Having completed our review of
FR schedules (and FR variants), we
now turn to a consideration of their
VR counterparts. Ferster and Skin-
ner (1957) assumed that pausing
under FR schedules occurs because
the reinforcer delivered at the end of
one ratio is also an SΔ for responding
at the beginning of the next ratio.
Because the size of the upcoming
ratio under VR schedules is typically
unpredictable and sometimes quite
low, the SΔ effects of the reinforcer
are weak. As a result, pausing under
VR schedules should be minimal.

In line with this interpretation,
most of Ferster and Skinner’s cumu-
lative records of VR performances
reveal uniformly high rates with few
irregularities. For example, with typ-
ical birds, although response rates
occasionally varied, pausing was
clearly absent at moderate ratios
(e.g., VR 40 or 50). Even on relatively
high VR schedules (e.g., VR 360),
pauses were short. Perhaps cumula-

PAUSING UNDER RATIO SCHEDULES 47

tive records such as these, as well as
Ferster and Skinner’s description of
VR performance in general, led to the
widespread belief that pauses under
VR schedules are either very short or
nonexistent. As we have already seen
with FR schedules, however, research
has revealed a more complex picture
than that presented by Ferster and
Skinner. Likewise, pausing under VR
schedules is also more complex than
Ferster and Skinner assumed.

Effects of Ratio Size and
Reinforcement Magnitude in
VR Schedules

Although studies on pausing under
FR schedules are numerous, relative-
ly few studies have been concerned
with the variables that control paus-
ing under VR schedules. With regard
to the role of ratio size, Crossman,
Bonem, and Phelps (1987) confirmed
Ferster and Skinner’s findings in a
study in which pigeons responded
under simple FR and VR schedules
with ratio sizes ranging from 5 to 80.
Both schedules yielded brief pauses at
low-to-moderate ratios. However,
under the FR 80 schedule pausing
was relatively long, whereas under
the VR 80 schedule pausing was
relatively short.

Other research has called into
question the view that significant
pausing is absent under VR sched-
ules. In a study with rats, Priddle-
Higson, Lowe, and Harzem (1976)
varied the mean ratio size (e.g., VR
10, VR 40, VR 80) across sessions
and the magnitude of the reinforcer
(concentration of sweetened milk)
within sessions using a mixed sched-
ule. Results showed that mean pause
durations increased as a function of
VR size. Moreover, the longest paus-
es occurred whenever the highest
concentrations were employed, indi-
cating that ratio size and reinforce-
ment magnitude interacted. The find-
ing of long pauses with a high-
concentration reinforcer (means ranged
from approximately 18 s to 43 s for

individual subjects) echoes the finding
described previously by Lowe et al.
(1974) with mixed FR schedules. The
implication is not just that marked VR
pausing is possible but that VR and FR
pausing are controlled by similar vari-
ables.

Blakely and Schlinger (1988) also
examined interactions between mean
ratio size and reinforcer magnitude
under VR schedules. Pigeons re-
sponded on multiple VR VR sched-
ules ranging from VR 10 to VR 70 in
which access to food was available
for 2 s in one component and 8 s in
the other. Results showed that paus-
ing increased with ratio size at both
magnitudes. However, the effect was
considerably more marked in the
component with the smaller of the
two magnitudes, a finding exactly
opposite to that of Priddle-Higson
et al. (1976). Applying the analysis of
excitation and inhibition by Perone et
al. (1987), this difference is expected
because the multiple-schedule proce-
dure of Blakely and Schlinger exerted
excitatory control by the stimuli
correlated with the upcoming magni-
tude.

A remaining issue concerns why
some experiments have shown little
or no VR pausing (e.g., Crossman et
al., 1987; Ferster & Skinner, 1957),
whereas others have shown marked
pausing (e.g., Blakely & Schlinger,
1988; Priddle-Higson et
al., 1976; Schlinger, Blakely, & Kac-
zor, 1990). Schlinger and colleagues
have shown that the chief cause is
probably the size of the lowest ratio
within the distribution of individual
ratios that comprise the VR schedule.
When Ferster and Skinner studied
VR schedules, the lowest ratio was
‘‘usually 1’’ (p. 391). For example, on
a VR 360 schedule, the actual pro-
gression of ratios ranged from 1 to
720. Other researchers who have
reported minimal VR pausing have
followed suit (e.g., Crossman et al.).
When Schlinger et al. set the lowest
ratio to 1 they also found that
minimal pausing occurred and that
other manipulations in the schedule
(e.g., mean ratio size and the magni-
tude of the reinforcer) had little effect
on performances. However, when the
lowest ratio was higher (additional
values were 4, 7, or 10 for the lowest
ratio), longer pausing occurred, and
other manipulations were observed to
have an effect that paralleled the
results with FR schedules. Thus, a
procedural artifact—incorporating 1
as the lowest ratio in the VR distri-
bution, found in much of the early
research on pausing under VR sched-
ules—is the likely reason why re-
searchers and textbook authors assert
that pausing is absent under VR
schedules.
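
The arithmetic behind this artifact can be sketched in a few
lines. A minimal sketch follows; the uniform arithmetic
progressions it constructs are illustrative assumptions, not
the exact distributions used in the cited experiments:

```python
# Illustrative sketch of why the lowest ratio in a VR distribution matters.
# The uniform arithmetic progression below is an assumption made for
# illustration, not the exact distribution used in the cited experiments.

def vr_distribution(mean_ratio, lowest):
    """Arithmetic progression of ratio sizes with the given mean and minimum."""
    highest = 2 * mean_ratio - lowest
    return list(range(lowest, highest + 1))

def prob_short_ratio(dist, threshold):
    """Probability that the upcoming ratio is at or below `threshold`."""
    return sum(1 for r in dist if r <= threshold) / len(dist)

# Ferster and Skinner's VR 360 used ratios ranging from 1 to about 720;
# Schlinger et al. raised the lowest ratio to values such as 4, 7, or 10.
classic = vr_distribution(360, lowest=1)
revised = vr_distribution(360, lowest=10)

# With a lowest ratio of 1, a single response sometimes produces the
# reinforcer, so the reinforcer cannot reliably signal a long upcoming ratio.
print(prob_short_ratio(classic, 10))  # chance the next ratio is 10 or fewer
print(prob_short_ratio(revised, 10))  # smaller under the revised distribution
```

Under this toy distribution, very short ratios occur an order of
magnitude more often when 1 is the lowest value, which is the
condition associated with minimal pausing.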

PAUSING ON THE
HUMAN LEVEL

Ferster and Skinner’s (1957) re-
search, together with the various
experiments that followed, has yield-
ed a wealth of data pertaining to
performances under ratio schedules.
This literature has produced a set of
empirical principles that provides a
framework for control by reinforce-
ment schedules. Equally important is
that the study of schedules has
provided the basis for an understand-
ing of human affairs as exemplified
by Skinner’s classic book, Science and
Human Behavior (1953). However, a
conspicuous feature of the literature
we have reviewed so far is that the
research comes only from the animal
laboratory. We now turn to the
question of the applicability of such
findings to humans.

Although the value of animal
models is sometimes challenged as a
way to understand human behavior,
this approach has met with consider-
able success within the behavioral and
biological sciences. Skinner (1953)
provided a vigorous defense of animal
models in the following terms:

Human behavior is distinguished by its
complexity, its variety, and its greater accom-
plishments, but the basic processes are not
necessarily different. Science advances from

the simple to the complex; it is constantly
concerned with whether the processes and
laws discovered are adequate for the next. It
would be rash to assert at this point that there
is no essential difference between human
behavior and the behavior of the lower
species; but until an attempt has been made
to deal with both in the same terms, it would
be equally rash to assert that there is. A
discussion of human embryology makes con-
siderable use of research on the embryos of
chicks. Treatises on digestion, respiration,
circulation, endocrine secretion, and other
physiological processes deal with rats, ham-
sters, rabbits, and so on, even though the
interest is primarily in human beings. The
study of behavior has much to gain from the
same practice. (p. 38)

Skinner’s views, as well as those of
such pioneering figures as Pavlov,
Thorndike, and Watson, were shaped
by the Darwinian assumption of
biological continuity between species.

Behavioral researchers study non-
human animals because they are
more suited for experimental re-
search. Researchers are reluctant to
expose humans to the extreme forms
of control and the extended study
required for the steady-state method
needed for experimentation. In addi-
tion, the complex histories brought
by humans into the laboratory inter-
act with the conditions under inves-
tigation. Of course, the experimental
study of nonhuman subjects does not
necessarily guarantee correct conclu-
sions about human behavior. The
development of comprehensive be-
havioral principles requires a bal-
anced approach. Thus, detailed ex-
perimental data from the animal
laboratory, although clearly impor-
tant, must be combined with knowl-
edge from the study of human
behavior. To understand how the
data on pausing with nonhuman
subjects may relate to human behav-
ior, we first consider results from
experiments with human subjects;
this is followed by a discussion of
how a behavioral interpretation of
the human problem of procrastina-
tion might proceed. We conclude
with a brief consideration of how
variables manipulated in experiments
with nonhuman animals might be
used to reduce human procrastina-
tion.

Experiments with Human Subjects

The most direct source of informa-
tion about ratio pausing in humans
comes from experiments with human
subjects. The logic is the same as in
the biomedical sciences, when an
intermediate step is inserted between
research with animal subjects and
clinical practice. For example, the
effectiveness of a drug developed in
the animal laboratory is verified
through systematic experiments with
human volunteers. Only when the
drug has been found to pass muster
on the human level is it introduced
within medical practice. In similar
terms, behavioral questions can be
addressed in the human laboratory to
clarify a range of issues. But it is also
the case that experimental research
with human subjects, both behavioral
and biological, poses ethical and
methodological problems. However,
within these boundaries, animal-
based principles can be examined in
experiments with humans, and the
results can both buttress behavioral
interpretations and inform behavior-
al interventions.

Ratio schedules of reinforcement
appear in several basic and applied
experiments with humans. Some-
times, ratio schedules are used as a
tool to study some other process, and
the particulars of ratio schedule
performances are not reported. Even
when ratio schedule performances are
described, response rates or pausing
may be presented in minimal detail.
A further complication pertains to
the results reported in research on
ratio schedules in humans. Following
the lead of Ferster and Skinner, most
often the data have been in the form
of selected cumulative records. In
those cases in which the results have
not been analyzed in quantitative
terms, there often is room for differ-
ent, sometimes conflicting, interpre-

tations. Moreover, there are note-
worthy instances in which a given
researcher’s conclusions differed from
those reported by other researchers.

These difficulties notwithstanding,
the available evidence suggests that
human subjects do pause under FR
schedules and pause very little under
VR schedules. The exact degree of
pausing varies substantially, depend-
ing on idiosyncratic features of the
experimental procedure. At one ex-
treme are studies in which FR
pausing appears to be entirely absent
(e.g., Sanders, 1969; Weiner, 1964).
At the other are studies in which the
results obtained with humans neatly
mirror those found in the animal
laboratory (e.g., R. F. Wallace &
Mulder, 1973; D. C. Williams, Saun-
ders, & Perone, in press). The re-
maining studies reveal a range of
intermediate performances, depend-
ing on the degree of regularity in the
pause–run pattern.

We located 13 experiments specif-
ically concerned with human perfor-
mance under simple FR schedules. In
addition to experiments with normal-
ly functioning adults (Holland, 1958;
Sanders, 1969; Weiner, 1964, 1966),
studies include those with infants
(Hillman & Bruner, 1972), young
children (Long, Hammack, May, &
Campbell, 1958; Weisberg & Fink,
1966; Zeiler & Kelley, 1969), mentally
retarded persons (Ellis, Barnett, &
Pryer, 1960; Orlando & Bijou, 1960;
R. F. Wallace & Mulder, 1973; D. C.
Williams et al., in press), and schizo-
phrenics (Hutchinson & Azrin, 1961).
Perhaps it is not surprising that
individuals with a wide range of
histories and other personal charac-
teristics have produced such varied
outcomes. Aside from subject char-
acteristics, differences include preex-
perimental instructions (usually these
are not explicitly specified), the type
of reinforcer (candy, trinkets, points,
etc.), and the characteristics of the
manipulandum (e.g., telegraph key,
plunger, touch screen, etc.). Un-
doubtedly, developmental level and
verbal capability also play important
roles.

In light of this variation, it is a
formidable task to establish corre-
spondences between specific charac-
teristics of the experimental proce-
dure and the degree of ratio pausing.
The scarcity of human experiments
that have addressed basic schedule
parameters makes it difficult to draw
definitive conclusions. Nonetheless,
two promising leads are well worth
mentioning. In an early study, R. F.
Wallace and Mulder (1973) demon-
strated that pause duration increased
and decreased systematically accord-
ing to an ascending and descending
series of FR sizes. More recently, D.
C. Williams et al. (in press), using
multiple FR FR schedules (cf. Perone
& Courtney, 1992), found that the
extent of pausing was maximal when
a large-magnitude reinforcer was
followed by a small one. Studies such
as these can serve as a model of
effective research on the human level:
replication, steady-state methods, and
systematic variation of the control-
ling variables.

Behavioristic Interpretation

In addition to experiments with
human subjects, human behavior
may be informed by research with
nonhumans through behavioral in-
terpretation, that is, correspondences
between naturally occurring human
behaviors and contingencies studied
in the animal laboratory. More sim-
ply, researchers interpret complex
human behaviors with principles de-
rived from nonhuman laboratory
investigations. For example, to illus-
trate FR response patterns, Mazur
(2006) described his own observa-
tions when he was a student doing
summer work in a hinge-making
factory. He reported that the workers
were paid on the basis of completion
of 100 hinges (a piecework system),
and that ‘‘once a worker started up
the machine, he almost always
worked steadily and rapidly until

the counter on the machine indicated
that 100 pieces had been made. At
this point, the worker would record
the number completed on a work
card and then take a break’’ (p. 147).
In other words, the workers exhibited
the familiar break-and-run pattern
associated with FR schedules. Al-
though such interpretations of oper-
ant performances have an important
place within science, by the usual
standards they cannot be regarded as
definitive. One cannot be certain
about the role of the observer’s
expectation or whether the behavior
sample is representative. Also, other
theoretical systems may generate
equally plausible interpretations of
the same behavior.

No doubt, behavioral interpreta-
tions pose numerous unresolved
questions. An essential first step is
to develop ways of systematically
recording human performances as
they occur in the natural environ-
ment. A simple, but instructive, study
was reported by the famous novelist
Irving Wallace, who kept detailed
charts of his own writing output (I.
Wallace, 1977). This information,
when expressed in the form of
cumulative records, showed that
‘‘the completion of a chapter always
coincided with the termination of
writing for the day on which the
chapter was completed’’ (p. 521).
In other words, Wallace’s records,
which were based on the writing of
several novels, resembled the FR
break-and-run pattern. More broad-
ly, field studies of behaviors in a
variety of settings (e.g., the factory or
the classroom) can provide detailed
information that can interrelate de-
scriptive observations with the results
of experimental research (cf. Bijou,
Peterson, & Ault, 1968). The science
of physics has a long history that
relates the events of the natural world
(e.g., the tides of the oceans, the
orbits of the planets) to the con-
trolled conditions of the experimental
laboratory. By comparison, behavior
analysis continues to fall short of a
set of agreed-upon procedures that
characterize naturally occurring be-
haviors (Baron, Perone, & Galizio,
1991).

Notwithstanding these consider-
ations, there are several aspects of
pausing on ratio schedules that bear
at least a superficial resemblance to
the human problem of procrastina-
tion. In simple terms, to procrastinate
is to delay or postpone an action or
to put off doing something. Such
definitions seem to involve something
akin to the ratio pause in which an
individual pauses before completing a
ratio. Moreover, the two phenomena
appear to arise from similar causes,
in that the schedule parameters that
contribute to pausing (e.g., higher
ratios, smaller reinforcement magni-
tudes, greater response effort) are
analogous to the situational variables
that lead to procrastination (e.g.,
greater delay to reward, less reward,
greater task aversiveness; cf. Howell,
Watson, Powell, & Buro, 2006; Sené-
cal, Lavoie, & Koestner, 1997; Solo-
mon & Rothblum, 1984). The rela-
tion between pausing and procras-
tination has received some degree of
recognition in past publications (De-
renne, Richardson, & Baron, 2006;
Shull & Lawrence, 1998), but serious
efforts to model procrastination in
ratio-schedule terms remain to be
attempted.

We must remember, however, that
the variables that control the use of
the term procrastination may be very
different among different speakers. In
some instances, such variables may
be homologous with those that con-
trol the use of the terms postrein-
forcement pause or preratio pause. In this
case, interpreting procrastinative be-
haviors in terms of ratio pausing may
be justified. Conversely, when using
the term procrastination with nonhu-
mans (see Mazur, 1996, 1998), per-
haps we should follow Skinner’s lead
in his article, ‘‘‘Superstition’ in the
Pigeon’’ (1948), by putting ‘‘procras-
tination’’ in quotation marks, imply-
ing that we are speaking analogically.

Regardless of whether human behav-
ior is homologous to nonhuman (or
even human) performances in the
laboratory, knowledge of the vari-
ables that control pausing in the
animal laboratory has the potential
to modify the behaviors described as
procrastination.

Applied Analysis of Behavior

A third way that human behavior
can be approached is through the
direct application of animal-based
procedures in clinical settings. Behav-
iors typical of anxiety disorders and
depression as well as socially impor-
tant behaviors normally deficient
among autistic and mentally retarded
individuals can be cast within a
research framework. For example,
the clinician can describe problem
behaviors before the intervention is
introduced, and systematic records
can be maintained of the outcome of
the therapy. Applied behavior analy-
sis has led to notable accomplish-
ments in varied settings including
clinics, institutions, schools, and or-
ganizations. Studies have provided
convincing evidence that behavioral
interventions lead to therapeutic
changes that would not occur with-
out the treatment. But application
also has limitations from the stand-
point of experimental analysis. In
applied research, variables cannot be
manipulated solely to advance scien-
tific understanding. Perhaps it goes
without saying that the primary
concern must be the welfare of the
client. When this value comes into
conflict with scientific understanding,
scientific understanding must give
way.

Because some behaviors that we
call procrastination may arise from
causes similar to those characterized
as ratio pausing, the knowledge
gleaned from basic research on paus-
ing might be used to ameliorate such
behavior. To date, however, procras-
tination, as an area of clinical con-
cern within applied behavior analysis,
has generated interest mostly among
behavioral educators concerned with
improving academic behaviors (Brooke
& Ruthven, 1984; Lamwers & Jaz-
winski, 1989; Wesp, 1986; Ziesat,
Rosenthal, & White, 1978). Studies
have shown that it is possible to reduce
procrastination, for example, through
self-reward or self-punishment proce-
dures (Green, 1982; Harrison, 2005;
Ziesat et al.). However, efforts of this
kind have not led to widespread adop-
tion, and little remains known about
how procrastination might be most
effectively treated. More important
for the present purpose, it appears that
there have been no treatments of
procrastination based on the results of
research on ratio pausing from the
animal (or human) laboratory.

Those who desire to reduce pro-
crastination may, thus, be interested
in research showing how normal
patterns of pausing on ratio sched-
ules can be reduced. Pause durations
will shorten, for example, when
reinforcement is contingent on com-
pletion of the whole ratio within a set
length of time (Zeiler, 1970), when
the pause alone must end before
some criterion duration (R. A. Wil-
liams & Shull, 1982), or when time-
out punishment is imposed as soon as
the pause exceeds some criterion
duration (Derenne & Baron, 2001).
There is even some evidence that
reductions in pausing can remain
long after punishment is withdrawn
(Derenne et al., 2006). Results from
the animal laboratory using time-out
punishment are encouraging insofar
as they show ways to reduce procras-
tination-like behavior. However,
comparable procedures remain to be
developed by applied researchers.
There are many unanswered ques-
tions. What reinforcers and punishers
would be used? How would delivery
of the consequences be programmed?
What role is played by the individual’s
history and personal characteristics?
Efforts to bridge the gap between
basic and applied research on this
issue are needed.

SUMMARY AND CONCLUSIONS

Ratio schedules of reinforcement
follow a simple rule. Reinforcement
is delivered after completion of a
given number of responses (the FR
schedule), a value that varies from
ratio to ratio but averages a given size
(the VR schedule), or a value that
progressively increases from ratio to
ratio (the PR schedule). An essential
feature of all three variants is that the
organism’s work output determines
the reinforcement yield. In other
words, reinforcement rates are a direct
function of response rates.
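
On any ratio schedule this relation can be written directly:
with a ratio requirement of n responses, the obtained
reinforcement rate is simply the response rate divided by n. A
minimal sketch, in which the response rates are arbitrary
illustrative values:

```python
# Minimal sketch of the ratio rule: on an FR n schedule (or a VR schedule
# with mean n), obtained reinforcement rate is response rate divided by n.
# The response rates below are arbitrary illustrative values.

def reinforcement_rate(response_rate, ratio_size):
    """Reinforcers per minute, given responses per minute and ratio size n."""
    return response_rate / ratio_size

for responses_per_min in (30, 60, 120):
    # Doubling the response rate doubles the obtained reinforcement rate.
    print(responses_per_min, reinforcement_rate(responses_per_min, 30))
```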

Although the procedures embodied
in the ratio rule appear straightfor-
ward, numerous experiments have
shown considerable complexity in
the outcomes. In the case of FR and
PR schedules, responding typically
appears as a break-and-run pattern: a
pause follows delivery of the rein-
forcer and is then followed by a high
rate of responding until the next
reinforcer is delivered. The
duration of the pause has proven to be
influenced by a number of variables,
most notably the size of the ratio, the
magnitude of the reinforcer, and the
force of the response requirement.
Moreover, even though performances
on VR schedules often are described
as indicating little or no pausing,
research has shown that depending
on the distribution of the individual
ratios, such pausing can and does
occur and is influenced by some of the
same variables as the FR pause.

We have already considered the
possibility that pausing is a conse-
quence of the previous ratio. Howev-
er, pausing on FR schedules cannot
be attributed to such factors as time
needed to consume the reinforcer,
satiation, or fatigue. As a rule,
subjects typically pause much less
under equivalent-sized VR schedules
while they consume as many rein-
forcers and emit as many responses.
Moreover, using multiple FR FR
schedules, researchers have shown
that the duration of pausing does
not depend on the size of the previous
ratio, thus effectively countering the
fatigue hypothesis. Identification of
the variables that control pausing
using simple FR schedules is difficult
because characteristics of successive
ratios, such as the size of the ratio,
are held constant. More informative
are the results of procedures that vary
the size of successive ratios using
multiple FR FR schedules that in-
clude discrete discriminative stimuli
that define the components. These
procedures have shown that pausing
is more a consequence of the upcom-
ing ratio than the preceding one. In
both cases, we can say that pausing is
more pronounced when the transi-
tion is from a more to a less favorable
contingency, that is, from a lower to
a higher ratio, from a larger to a
smaller reinforcer magnitude, or
from a lesser to a greater response-
force requirement.

Beginning with Ferster and Skin-
ner’s (1957) original studies of ratio
schedules, there has been debate
about the appropriate interpretation
of the results that we have described.
Three interpretations warrant special
attention: (a) pausing is
the result of interacting processes of
inhibition and excitation; (b) pausing
is the outcome of a competition
between reinforcers scheduled by the
experimenter and reinforcers from
other sources; and (c) pausing avoids
the work needed to meet the ratio
requirement. Noteworthy is that such
views are primarily concerned with
what are usually termed molecular
effects; in other words, the models
represent moment-to-moment effects
on ratio performance, along the lines
originally described by Ferster and
Skinner (cf. Mazur, 1982).

Inhibitory and excitatory processes.
In this view, pausing reflects the joint
effect of the inhibition and the
excitation of responding (see Leslie,
1996). We related the well-document-
ed finding, obtained especially with
mixed FR FR schedules, that inhibi-
tion originates in the unconditioned
effects of the previously delivered

reinforcing stimulus (Harzem & Har-
zem, 1981; Lowe et al., 1974; Perone
& Courtney, 1992). However, it is
possible that more than one source of
inhibition exerts control. We also
mentioned Skinner’s (1938) sugges-
tion that conditioned inhibition is
responsible for pausing, in that the
delivery of one reinforcer may serve
as an SΔ for subsequent responding.
Regardless of the origins of inhibi-
tion, there may be several reasons
why pausing is kept in check. First,
the inhibitory aftereffects of rein-
forcement dissipate with time. Sec-
ond, the passage of time since the
start of the ratio is correlated with the
past delivery of reinforcers; thus,
excitation of responding increases
with time. Third, there is differential
reinforcement of responses that occur
soon after reinforcement insofar as
shorter pauses lessen the delay to the
next reinforcer.

Competing reinforcers. The second
account envisions a competition be-
tween concurrently available sources
of reinforcement. On the one hand,
subjects can work towards the rein-
forcer scheduled by the experimenter.
On the other hand, subjects can
obtain sources of reinforcement with-
in the experimental apparatus that
are not programmed by the experi-
menter. According to this view,
pausing occurs whenever subjects
choose an alternative reinforcer over
the scheduled one. Alternative rein-
forcers may be added to the operant
chamber, such as the opportunity to
drink water when food pellets are the
scheduled reinforcers, but more typ-
ical alternatives involve automatic
reinforcers inherent in grooming,
resting, and exploring (see Derenne
& Baron, 2002; Shull, 1979). Al-
though the efficacy of such alterna-
tives may be low by comparison with
the scheduled one, such reinforcers
are immediately available. Thus, as a
general principle, at the beginning of
the ratio, when the probability of
responding is lowest, subjects would
be expected to select unscheduled
smaller–sooner reinforcers over the
scheduled larger–later one (cf. Rach-
lin & Green, 1972).
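
This shift toward smaller–sooner alternatives early in the
ratio can be sketched with a hyperbolic discounting function of
the form V = A/(1 + kD), a standard model of delayed reinforcer
value (cf. Mazur). The amounts, delays, and the parameter k
below are hypothetical illustrations, not values from the cited
studies:

```python
# Sketch of the competing-reinforcer account using hyperbolic discounting,
# V = A / (1 + k * D). All numeric values here are hypothetical.

def discounted_value(amount, delay, k=0.5):
    """Present value of a reinforcer of size `amount` at delay `delay` (s)."""
    return amount / (1 + k * delay)

scheduled = 10.0    # scheduled reinforcer (arbitrary units)
alternative = 1.0   # weak but immediately available alternative (grooming)

value_early = discounted_value(scheduled, delay=60)  # start of a high ratio
value_late = discounted_value(scheduled, delay=2)    # near ratio completion
value_alt = discounted_value(alternative, delay=0)   # always available now

# Early in the ratio the immediate alternative wins (pausing); late in the
# ratio the delayed scheduled reinforcer dominates (running to completion).
print(value_early < value_alt < value_late)  # True
```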

Work avoidance. The third account
focuses on aversive properties of
responding. A subject confronted
with the task of completing a sub-
stantial ratio of responding can pre-
clude an unfavorable situation by
pausing, that is, by escaping stimuli
that signal the response requirement,
thus avoiding having to respond.
Research reported by Azrin (1961)
and Thompson (1964) provided evi-
dence that, if given the opportunity,
subjects will avoid high FR require-
ments. In both experiments, subjects
were provided with concurrent access
to an FR schedule of reinforcement
and to the opportunity to suspend the
FR schedule (i.e., self-imposed time-
out). Time-out responses were fre-
quent at the beginning of the ratio
(the time at which remaining work
was greatest and pausing normally
occurred), even though they had the
effect of reducing reinforcement rates.
More recently, Perone (2003) showed
a similar pattern of escape responding
under multiple FR FR schedules.
Subjects not only paused longest
when a large reinforcer on one ratio
was followed by a smaller one on the
next, but they were also more likely to
make an escape response during
large–small transitions.

These three accounts (the inhibition–excitation model, the competing reinforcer model, and the work-avoidance model) rely on different mechanisms to explain why pausing occurs. However, it would be incorrect, or at least premature, to conclude that one view is more accurate than the others. Each model addresses a different factor that may contribute to pausing, and these factors need not be viewed as mutually exclusive. Further, each can be used to explain major findings from research with ratio schedules of reinforcement, most notably the well-established findings that pause durations increase with ratio size and that pause durations on FR schedules generally exceed those on VR schedules.

Consider the finding that pause durations increase with ratio size. According to the inhibition–excitation model, the delivery of a reinforcer signals the beginning of a period of time during which subsequent reinforcement is not immediately available. The extent to which inhibitory processes depress responding is dependent on the delay to reinforcement. An increase in the size of the ratio necessarily increases the minimum delay to reinforcement; therefore, inhibition (and pausing) increases as a function of ratio size. The competing reinforcer model also points to the delay to reinforcement as a critical variable. In this case, an increase in the delay to reinforcement alters the balance of choice between the delayed scheduled reinforcer and immediately available alternative reinforcers. With greater delay, the scheduled reinforcer becomes less efficacious, and subjects therefore are more likely to choose alternative sources of reinforcement early in the ratio. The consequence of this shift in choice is that pause durations are lengthened.

PAUSING UNDER RATIO SCHEDULES 55

For the work-avoidance model, the critical consideration is that higher ratios, of necessity, require increased work. Insofar as increased work is more aversive (i.e., it increases the reinforcing value of escape or avoidance behavior; cf. Azrin, 1961; Thompson, 1964), higher ratios should be accompanied by longer pausing. Research with FR schedules has not been able to show that either the time until the reinforcer or the amount of required work alone is the critical factor responsible for pausing. Killeen (1969) obtained equivalent pause durations under FR and FI schedules of reinforcement when the average interreinforcement interval was held constant across schedules. However, others have found that FR and FI pause durations depart, suggesting that other aspects of the contingencies, such as the relatively aversive work requirement on FR schedules, also have an important role (Aparicio, Lopez, & Nevin, 1995; Capehart, Eckerman, Guilkey, & Shull, 1980; Lattal, Reilly, & Kohn, 1998).

In the case of differences in pausing between VR and FR schedules, two models once again focus on the time until reinforcement, whereas the third addresses the amount of required work. A key finding is that pauses under VR schedules are controlled in part by the size of the lowest possible ratio (Schlinger et al., 1990). When the lowest possible ratio is 1, pausing is almost nonexistent, and when the ratio increases in size, VR pausing begins to resemble FR pausing in duration. Why should a low minimum ratio reduce pausing? According to the inhibition–excitation model, occasional reinforcement early in the ratio would reduce inhibition (i.e., under VR schedules the delivery of one reinforcer does not clearly predict a period of subsequent reinforcement unavailability). Under the competing reinforcer model, the control exerted by the scheduled reinforcer over behavior early in the ratio is increased when reinforcers are sometimes delivered after only one or a few responses. Therefore, subjects become less likely to select alternative reinforcers over responding for the scheduled reinforcer. For the work-avoidance model, it is the response requirement that matters. Subjects are less likely to put off responding if there is a possibility that the work requirement is very small.
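The role of the lowest possible ratio is easiest to see in how a VR schedule's ratio list is constructed. The Python sketch below is one simplified construction with the minimum requirement as an explicit parameter; it is illustrative only, and experiments such as Schlinger et al. (1990) typically use specific predetermined ratio lists rather than this sampling rule.

```python
import random

def variable_ratio(mean_ratio, lowest_ratio, n_trials, seed=0):
    """Sample a response requirement for each trial of a VR schedule.

    Requirements are drawn uniformly between lowest_ratio and a ceiling
    chosen so that their expected value equals mean_ratio. This is only
    one simple way to build such a list.
    """
    rng = random.Random(seed)
    highest = 2 * mean_ratio - lowest_ratio  # keeps the expected value at mean_ratio
    return [rng.randint(lowest_ratio, highest) for _ in range(n_trials)]

# A VR 30 whose lowest possible ratio is 1: some trials pay off after a
# single response, so a post-reinforcement pause on those trials delays
# an almost immediately available reinforcer.
ratios = variable_ratio(mean_ratio=30, lowest_ratio=1, n_trials=1000)
print(min(ratios), max(ratios), round(sum(ratios) / len(ratios)))
```

Raising `lowest_ratio` toward `mean_ratio` makes the requirement list increasingly FR-like, which parallels the finding that VR pausing then begins to resemble FR pausing in duration.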

A problematic feature of all three models is the variation in pausing from ratio to ratio. Although research on FR pausing shows overall a direct relation between pause duration and ratio size (e.g., Felton & Lyon, 1966; Powell, 1968) and an inverse relation between pause duration and reinforcement magnitude (e.g., Powell, 1969), cumulative records and frequency distributions of pausing suggest that performances on a typical ratio are minimally affected by these variables. Subsequent research has indicated that aggregated data (most notably means) present an inaccurate picture because means tend to be disproportionately influenced by the extreme scores of skewed distributions (Baron & Herpolsheimer, 1999). A plausible account of ratio-to-ratio variation relies on the aforementioned inhibition–excitation model. Thus, the extent to which responding is regarded as inhibited or excited should wax and wane over time insofar as sequential increases in pausing are counteracted by decreases in pausing. However, the expected orderly cycles of increasing and decreasing pause durations within sessions have not been observed thus far (Derenne & Baron, 2001).

Another puzzle pertains to the finding that the break-and-run pattern persists despite extensive exposure to the FR schedule. The consequence is that overall reinforcement rates engendered by responding are reduced below optimal levels. By comparison, extended exposure to other schedules (e.g., FI) improves response efficiency insofar as response rates are reduced without impairing reinforcement rate (see Baron & Leinenweber, 1994). However, an analysis in terms of FR pausing does not take into account the potential role of alternative reinforcers. From this standpoint, the net reinforcing consequence of responding must include reinforcement from both sources. Interestingly, pausing can be modified by imposing additional contingencies on the FR schedule. For example, if the pause must be shorter than some criterion duration for reinforcement to be delivered at the end of the ratio, then subjects will typically pause no longer than the criterion allows (R. A. Williams & Shull, 1982). Furthermore, even after the additional contingencies are removed, subjects may continue to pause less than they did before the procedure started, suggesting that forced exposure to a more efficient response pattern may sensitize subjects to overall reinforcement rates (Derenne et al., 2006).

Finally, we addressed the issue of the relevance of animal models for an understanding of the impact of ratio schedules of reinforcement on the world of human affairs. We concluded that the general principles that have emerged from our review, although imperfect, shed light on human behavior. In particular, pausing under ratio schedules may illuminate the human problem of procrastination in the sense that procrastination is influenced by the size of the upcoming task, the relative difficulty in performing it, and perhaps the magnitude of the expected reinforcer. In fact, based on the interpretation of pausing under ratio schedules as an avoidance response, some behaviors we label as procrastination may occur because they avoid the upcoming task. This might also momentarily increase the effectiveness of alternative reinforcers and might explain why, when faced with a large effortful task, we tend to find other things to do even if those other activities involve consequences that are typically not very effective reinforcers (e.g., washing the dishes).

So, what can we conclude about pausing under ratio schedules after 50 years of research? We can begin to answer this question by addressing schedule effects in general. As Zeiler (1984) has written,

Suffice it to say that we still lack a coherent explanation of why any particular schedule has its specific effects on behavior. … Whether the explanation has been based on interresponse time, reinforcement, reinforcer frequency, relations between previous and current output, direct or indirect effects, or whatever, no coherent and adequate theoretical account has emerged. Forty years of research has shown that a number of variables must be involved—schedule performances must be multiply determined—but they provide at best a sketchy picture and no clue as to interactive processes. (p. 489)

So it is with the variables that influence pausing under ratio schedules. They are likely numerous and complex. Nonetheless, our review has revealed that amid the variability, there is a consistent orderliness across myriad experiments over the past 50 years. Zeiler (1984) pessimistically concluded that any attempt to understand schedules of reinforcement at a more molecular level is doomed to fail “because of the complexity of the interactions, and also because many of the controlling variables arise indirectly through the interplay of ongoing behavior and the contingencies” (p. 491). But as we have discovered, as a result of innovative methods and probing research questions, researchers are moving closer to a more fundamental understanding of why pausing under ratio schedules occurs.

REFERENCES

Alling, K., & Poling, A. (1995). The effects of
differing response-force requirements on
fixed-ratio responding of rats. Journal of
the Experimental Analysis of Behavior, 63,
331–346.

Aparicio, C. F., Lopez, F., & Nevin, J. A.
(1995). The relation between postreinforce-
ment pause and interreinforcement interval
in conjunctive and chain fixed-ratio fixed-
time schedules. The Psychological Record,
45, 105–125.

Azrin, N. H. (1961). Time-out from positive
reinforcement. Science, 133, 382–383.

Baron, A., & Derenne, A. (2000). Progressive-
ratio schedules: Effects of later schedule
requirements on earlier performances. Jour-
nal of the Experimental Analysis of Behavior,
73, 291–304.

Baron, A., & Herpolsheimer, L. R. (1999).
Averaging effects in the study of fixed-ratio
response patterns. Journal of the Experi-
mental Analysis of Behavior, 71, 145–153.

Baron, A., & Leinenweber, A. (1994). Molec-
ular and molar aspects of fixed-interval
performance. Journal of the Experimental
Analysis of Behavior, 61, 11–18.

Baron, A., Mikorski, J., & Schlund, M.
(1992). Reinforcement magnitude and paus-
ing on progressive-ratio schedules. Journal
of the Experimental Analysis of Behavior, 58,
377–388.


Baron, A., Perone, M., & Galizio, M. (1991).
Analyzing the reinforcement process at the
human level: Can application and behav-
ioristic interpretation replace laboratory
research? The Behavior Analyst, 14, 93–105.

Bijou, S. W., Peterson, R. F., & Ault, M. H.
(1968). A method to integrate descriptive
and experimental field studies at the level of
data and empirical concepts. Journal of
Applied Behavior Analysis, 1, 175–191.

Blakely, E., & Schlinger, H. (1988). Determi-
nants of pausing under variable-ratio sched-
ules: Reinforcer magnitude, ratio size, and
schedule configuration. Journal of the Ex-
perimental Analysis of Behavior, 50, 65–73.

Brooke, R. R., & Ruthven, A. J. (1984). The
effects of contingency contracting on stu-
dent performance in a PSI class. Teaching of
Psychology, 11, 87–89.

Capehart, G. W., Eckerman, D. A., Guilkey,
M., & Shull, R. L. (1980). A comparison of
ratio and interval reinforcement schedules
with comparable interreinforcement times.
Journal of the Experimental Analysis of
Behavior, 34, 61–76.

Crossman, E. K. (1968). Pause relationships in
multiple and chained fixed-ratio schedules.
Journal of the Experimental Analysis of
Behavior, 11, 117–126.

Crossman, E. K., Bonem, E. J., & Phelps, B. J.
(1987). A comparison of response patterns
on fixed-, variable-, and random-ratio
schedules. Journal of the Experimental
Analysis of Behavior, 48, 395–406.

Crossman, E. K., & Silverman, L. T. (1973).
Altering the proportion of components in a
mixed fixed-ratio schedule. Journal of the
Experimental Analysis of Behavior, 20,
273–279.

Derenne, A., & Baron, A. (2001). Time-out
punishment of long pauses on fixed-ratio
schedules of reinforcement. The Psycholog-
ical Record, 51, 39–51.

Derenne, A., & Baron, A. (2002). Preratio
pausing: Effects of an alternative reinforcer
on fixed-and variable-ratio responding.
Journal of the Experimental Analysis of
Behavior, 77, 272–282.

Derenne, A., Richardson, J. V., & Baron, A.
(2006). Long-term effects of suppressing the
preratio pause. Behavioural Processes, 72,
32–37.

Ellis, N. R., Barnett, C. D., & Pryer, M. W.
(1960). Operant behavior in mental defec-
tives: Exploratory studies. Journal of the
Experimental Analysis of Behavior, 3, 63–69.

Felton, M., & Lyon, D. O. (1966). The post-
reinforcement pause. Journal of the Exper-
imental Analysis of Behavior, 9, 131–134.

Ferster, C. B., & Skinner, B. F. (1957).
Schedules of reinforcement. New York:
Appleton-Century-Crofts.

Galuska, C. M., Wade-Galuska, T., Woods, J. H., & Winger, G. (2007). Fixed-ratio schedules of cocaine self-administration in rhesus monkeys: Joint control of responding by past and upcoming doses. Behavioural Pharmacology, 18, 171–175.

Green, L. (1982). Minority students’ self-
control of procrastination. Journal of Coun-
seling Psychology, 6, 636–644.

Griffiths, R. R., & Thompson, T. (1973). The
post-reinforcement pause: A misnomer. The
Psychological Record, 23, 229–235.

Harrison, H. C. (2005). The three-contingency model of self-management. Unpublished doctoral dissertation, Western Michigan University.

Harzem, P., & Harzem, A. L. (1981). Dis-
crimination, inhibition, and simultaneous
association of stimulus properties. In P.
Harzem & M. D. Zeiler (Eds.), Predictabil-
ity, correlation, and contiguity (pp. 81–128).
Chichester, England: Wiley.

Harzem, P., Lowe, C. F., & Davey, G. C. L. (1975). Aftereffects of reinforcement magnitude: Dependence on context. Quarterly Journal of Experimental Psychology, 27, 579–584.

Hillman, D., & Bruner, J. S. (1972). Infant
sucking in response to variations in sched-
ules of feeding reinforcement. Journal of
Experimental Child Psychology, 13, 240–
247.

Hodos, W. (1960). Progressive ratio as a measure of reward strength. Science, 134, 943–944.

Hodos, W., & Kalman, G. (1963). Effects of increment size and reinforcer volume on progressive ratio performance. Journal of the Experimental Analysis of Behavior, 6, 387–392.

Holland, J. G. (1958). Human vigilance: The
rate of observing an instrument is controlled
by the schedule of signal detections. Science,
128, 61–67.

Howell, A. J., Watson, D. C., Powell, R. A., &
Buro, K. (2006). Academic procrastination:
The pattern and correlates of behavioural
postponement. Personality and Individual
Differences, 40, 1519–1530.

Hutchinson, R. R., & Azrin, N. H. (1961).
Conditioning of mental-hospital patients to
fixed-ratio schedules of reinforcement. Jour-
nal of the Experimental Analysis of Behavior,
4, 87–95.

Killeen, P. (1969). Reinforcement frequency
and contingency as factors in fixed-ratio
behavior. Journal of the Experimental Anal-
ysis of Behavior, 12, 391–395.

Lamwers, L. L., & Jazwinski, C. H. (1989). A
comparison of three strategies to reduce
student procrastination in PSI. Teaching of
Psychology, 16, 8–12.

Lattal, K. A. (1991). Scheduling positive
reinforcers. In I. H. Iversen & L. A. Lattal
(Eds.), Experimental analysis of behavior
(pp. 87–134). New York: Elsevier.

Lattal, K. A., Reilly, M. P., & Kohn, J. P. (1998). Response persistence under ratio and interval reinforcement schedules. Journal of the Experimental Analysis of Behavior, 70, 165–183.

Leslie, J. C. (1996). Principles of behaviour
analysis. Amsterdam: Harwood Academic
Publishers.

Long, E. R., Hammack, J. T., May, F., &
Campbell, B. J. (1958). Intermittent rein-
forcement of operant behavior in children.
Journal of the Experimental Analysis of
Behavior, 1, 315–339.

Lowe, C. F., Davey, G. C. L., & Harzem, P. (1974). Effects of reinforcement magnitude on interval and ratio schedules. Journal of the Experimental Analysis of Behavior, 22, 553–560.

Malott, R. W. (1966). The effects of prefeed-
ing in plain and chained fixed ratio sched-
ules of reinforcement. Psychonomic Science,
4, 285–287.

Mazur, J. E. (1982). A molecular approach to
ratio schedule performance. In M. L.
Commons, R. J. Herrnstein, & H. Rachlin
(Eds.), Quantitative analyses of behavior:
Vol. 2. Matching and maximizing accounts
(pp. 79–110). Cambridge, MA: Ballinger.

Mazur, J. E. (1996). Procrastination by
pigeons: Preference for larger, more delayed
work requirements. Journal of the Experi-
mental Analysis of Behavior, 65, 159–171.

Mazur, J. E. (1998). Procrastination by
pigeons with fixed-interval response require-
ments. Journal of the Experimental Analysis
of Behavior, 69, 185–197.

Mazur, J. E. (2006). Learning and behavior.
Upper Saddle River, NJ: Pearson Prentice
Hall.

Mazur, J. E., & Hyslop, M. E. (1982). Fixed-ratio performance with and without a postreinforcement timeout. Journal of the Experimental Analysis of Behavior, 38, 143–155.

McMillan, J. C. (1971). Percentage reinforcement of fixed-ratio and variable-interval performances. Journal of the Experimental Analysis of Behavior, 15, 297–302.

Morse, W. H., & Dews, P. B. (2002).
Foreword to Schedules of Reinforcement.
Journal of the Experimental Analysis of
Behavior, 77, 313–317.

Orlando, R., & Bijou, S. W. (1960). Single and
multiple schedules of reinforcement in
developmentally retarded children. Journal
of the Experimental Analysis of Behavior, 3,
339–348.

Perone, M. (2003). Negative effects of positive
reinforcement. The Behavior Analyst, 26,
1–14.

Perone, M., & Courtney, K. (1992). Fixed-
ratio pausing: Joint effects of past reinforcer
magnitude and stimuli correlated with
upcoming magnitude. Journal of the Exper-
imental Analysis of Behavior, 57, 33–46.

Perone, M., Perone, C. L., & Baron, A.
(1987). Inhibition by reinforcement: Effects
of reinforcer magnitude and timeout on

fixed ratio pausing. The Psychological Rec-
ord, 37, 227–238.

Powell, R. W. (1968). The effects of small
sequential changes in fixed-ratio size upon
the post-reinforcement pause. Journal of the
Experimental Analysis of Behavior, 11,
589–593.

Powell, R. W. (1969). The effect of reinforce-
ment magnitude upon responding under
fixed-ratio schedules. Journal of the Exper-
imental Analysis of Behavior, 12, 605–608.

Priddle-Higson, P. J., Lowe, C. F., & Harzem,
P. (1976). Aftereffects of reinforcement on
variable-ratio schedules. Journal of the Ex-
perimental Analysis of Behavior, 25, 347–354.

Rachlin, H., & Green, L. (1972). Commit-
ment, choice and self-control. Journal of the
Experimental Analysis of Behavior, 17, 15–22.

Sanders, R. M. (1969). Concurrent fixed-ratio
fixed-interval performances in adult human
subjects. Journal of the Experimental Anal-
ysis of Behavior, 12, 601–604.

Schlinger, H., Blakely, E., & Kaczor, T.
(1990). Pausing under variable-ratio sched-
ules: Interaction of reinforcer magnitude,
variable-ratio size, and lowest ratio. Journal
of the Experimental Analysis of Behavior, 53,
133–139.

Senécal, C., Lavoie, K., & Koestner, R.
(1997). Trait and situational factors in
procrastination: An interactional model.
Journal of Social Behavior and Personality,
12, 889–903.

Shull, R. L. (1979). The post-reinforcement
pause: Some implications for the correla-
tional law of effect. In M. D. Zeiler & P.
Harzem (Eds.), Advances in analysis of
behaviour: Vol. 1. Reinforcement and the
organization of behaviour (pp. 193–221).
New York: Wiley.

Shull, R. L., & Lawrence, P. S. (1998).
Reinforcement: Schedule performance. In
K. A. Lattal & M. Perone (Eds.), Handbook
of research methods in human operant
behavior (pp. 95–129). New York: Plenum.

Sidman, M. (1960). Tactics of scientific
research. New York: Basic Books.

Sidman, M., & Stebbins, W. C. (1954).
Satiation effects under fixed-ratio schedules
of reinforcement. Journal of Comparative
and Physiological Psychology, 47, 114–116.

Skinner, B. F. (1938). The behavior of organ-
isms. Englewood Cliffs, NJ: Prentice Hall.

Skinner, B. F. (1948). “Superstition” in the pigeon. Journal of Experimental Psychology, 38, 168–172.

Skinner, B. F. (1953). Science and human
behavior. New York: Macmillan.

Solomon, L. J., & Rothblum, E. D. (1984).
Academic procrastination: Frequency and
cognitive-behavioral correlates. Journal of
Counseling Psychology, 31, 503–509.

Steel, P. (2007). The nature of procrastination:
A meta-analytic and theoretical review of
quintessential self-regulatory failure. Psy-
chological Bulletin, 133, 65–94.


Thomas, J. V. (1974). Changes in progressive-
ratio performance under increased pressures
of air. Journal of the Experimental Analysis
of Behavior, 21, 553–562.

Thompson, D. M. (1964). Escape from SD associated with reinforcement. Journal of the Experimental Analysis of Behavior, 7, 1–8.

Wade-Galuska, T., Perone, M., & Wirth, O.
(2005). Effects of past and upcoming
response-force requirements on fixed-ratio
pausing. Behavioural Processes, 68, 91–95.

Wallace, I. (1977). Self-control techniques of
famous novelists. Journal of Applied Behav-
ior Analysis, 10, 515–525.

Wallace, R. F., & Mulder, D. W. (1973). Fixed-
ratio responding with human subjects. Bulle-
tin of the Psychonomic Society, 1, 359–362.

Weiner, H. (1964). Response cost and fixed-
ratio performance. Journal of the Experi-
mental Analysis of Behavior, 7, 79–81.

Weiner, H. (1966). Preference and switching
under ratio contingencies with humans.
Psychological Reports, 18, 239–246.

Weisberg, P., & Fink, E. (1966). Fixed ratio
and extinction performance of infants in
the second year of life. Journal of the Ex-
perimental Analysis of Behavior, 9, 105–
109.

Wesp, R. (1986). Reducing procrastination
through required course involvement.
Teaching of Psychology, 13, 128–130.

Williams, D. C., Saunders, K. J., & Perone,
M. (in press). Extended pausing in human
subjects on multiple fixed-ratio schedules
with varied reinforcer magnitude and re-
sponse requirements. Journal of the Exper-
imental Analysis of Behavior.

Williams, R. A., & Shull, R. L. (1982).
Differential reinforcement of short post-
reinforcement pauses on a fixed-ratio sched-
ule. Behaviour Analysis Letters, 2, 171–180.

Zeiler, M. D. (1970). Time limits for complet-
ing fixed ratios. Journal of the Experimental
Analysis of Behavior, 14, 275–286.

Zeiler, M. D. (1984). The sleeping giant:
Reinforcement schedules. Journal of the
Experimental Analysis of Behavior, 42, 485–
493.

Zeiler, M. D., & Kelly, C. A. (1969). Fixed-
ratio and fixed-interval schedules of cartoon
presentation. Journal of Experimental Child
Psychology, 8, 306–313.

Ziesat, H. A., Rosenthal, T. L., & White, G.
M. (1978). Behavioral self-control in treat-
ing procrastination of studying. Psycholog-
ical Reports, 42, 59–69.


The events that precede operant behavior and the consequences that follow may be arranged in many different ways. A schedule of reinforcement describes this arrangement. In other words, a schedule of reinforcement is a prescription that states how and when discriminative stimuli and behavioral consequences are presented.


turn on a lamp, which is followed by illumination of the room.

to have little in common. Humans are very complex organisms—they build cities, write books, and do many other things that rats cannot do. In addition, pressing a lever for food appears to be very different from switching on a light. Nonetheless, performances controlled by schedules of reinforcement have been found to be remarkably similar across different organisms, behavior, and reinforcers. When the same schedule of reinforcement is in effect, a child who solves math problems for teacher approval may generate a pattern of behavior comparable to a bird pecking a key for water.

and represents the most extensive study of this critical independent variable of behavior science. Today, few studies focus directly on simple, basic schedules of reinforcement. The lawful relations that have emerged from the analysis of reinforcement schedules, however, remain central to the science of behavior—being used in virtually every study reported in the Journal of the Experimental Analysis of Behavior. The knowledge that has accumulated about the effects of schedules

Schedules of reinforcement have regular, orderly, and profound effects on the organism’s rate of responding. The importance of schedules of reinforcement cannot be overestimated. No description, account, or explanation of any operant behavior of any organism is complete unless the schedule of reinforcement is specified. The study of schedules is central to the study of behavior. . . . Behavior that has been attributed to the supposed drives, needs, expectations, ruminations, or insights of the organism can often be related much more exactly to regularities produced by schedules of reinforcement.

Modern technology has made it possible to analyze performance on schedules of reinforcement in increasing detail. Nonetheless, early experiments on schedules remain important. The experimental analysis of behavior is a progressive science in which observations and experiments build on one another. In this chapter, we present early and later research on schedules of reinforcement. The analysis of schedule performance ranges from a global consideration of cumulative records to a detailed analysis of the time between responses.


lated and integrated to provide a general account of the behavior of organisms. Often, simple animals in highly controlled settings are studied. The strategy is to build a comprehensive theory of behavior that rests on direct observation and experimentation.

that go substantially beyond the data. Such speculations include reference to the organism’s memory, thought processes, expectations, and undocumented accounts based on presumed physiological states. For example, a behavioral account of schedules of reinforcement provides a detailed description of how behavior is altered by contingencies of reinforcement. One such account is based on evidence that a particular schedule sets up differential reinforcement of the time between responses


grated into larger units of performance according to the molar or macro contingencies of reinforcement, without appeal to hypothetical cognitive events or presumed physiological processes.
Recall that behavior analysts study the behavior of organisms, including people, for its own sake. Behavior is not studied to make inferences about hypothetical mental states or real physiological processes. Although most behaviorists acknowledge and emphasize the importance of biology and neurophysiological processes, they focus more on the interplay of behavior with the environment during the lifetime of an organism. Of course, direct analysis of the neurophysiology of animals provides essential details about how behavior is changed by the operating contingencies of reinforcement, and behavioral neuroscientists currently are providing many of these details, as we discuss throughout this textbook.

Contemporary behavior analysis continues to build on previous research. The extension of behavior principles to more complex processes and especially to human behavior is of primary importance. The analysis, however, remains focused on the environmental conditions that control the behavior of organisms. Schedules of reinforcement concern the arrangement of environmental events that regulate behavior. The analysis of schedule effects is currently viewed within a biological context. In this analysis, biological factors play several roles. One way in which biology

that function as reinforcement and discriminative stimuli. Biological variables may also constrain

As the biological sciences progress, an understanding of biological factors becomes increasingly central to a comprehensive theory of behavior.

contingency of reinforcement (SD : R → Sr)

period, behavior typically settles into a consistent or steady-state performance. It may take many experimental sessions before a particular pattern emerges, but once it does, the

performance in his book, The Behavior of Organisms
printing of that book, Skinner writes that “the cumulative records . . . purporting to show orderly
changes in the behavior of individual organisms, occasioned some surprise and possibly, in some

one of these patterns. For example, a hungry rat might be required to press a lever 10 times to get a food pellet. Following reinforcement, the animal has to make another 10 responses to produce the next bit of food, then 10 more responses. In industry, this requirement is referred to as piece rate, and the schedule has characteristic effects on the job performances of the workers. When organisms are reinforced on such fixed-ratio schedules, a break-and-run pattern of behavior often develops. Responses required by the schedule are made rapidly and result in reinforcement. A pause in responding follows each reinforcement, followed by another quick burst of responses. This pattern repeats over and over again and occurs even when the ratio size of the schedule is changed.
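The programmed contingency behind the rat example is very simple; a minimal Python sketch (illustrative only, not an actual operant-chamber program) is a fixed-ratio "dispenser" that delivers a reinforcer on every tenth response. Everything else — the pause after reinforcement and the run of rapid responding — is something the organism brings to the arrangement, not something written into the schedule.

```python
def fixed_ratio(size):
    """Return a response function for a fixed-ratio schedule:
    every `size`th response produces a reinforcer."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == size:
            count = 0
            return True   # reinforcer delivered
        return False      # no programmed consequence yet
    return respond

press = fixed_ratio(10)                    # FR 10, like the rat example above
outcomes = [press() for _ in range(30)]
reinforced = [i + 1 for i, hit in enumerate(outcomes) if hit]
print(reinforced)  # → [10, 20, 30]
```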

In everyday life, behavior is often reinforced on an intermittent basis. On an intermittent schedule of reinforcement, an operant is reinforced occasionally rather than each time it is emitted. Every time a child cries, she is not reinforced with attention. Each time a predator hunts, it is not successful. When you dial the number for airport information, sometimes you get through, but often the exchange is busy. Buses do not immediately arrive when you go to a bus stop. It is clear that persistence is often essential for survival or achievement of success; thus, an account of perseverance on the basis of the maintaining schedule of reinforcement is a major discovery. In concluding his

It is impossible to study behavior either in or outside the laboratory without encountering a schedule of reinforcement: whenever behavior is maintained by a reinforcing stimulus, some schedule is in effect. Only when we understand how schedules operate will it be possible to understand the effects of reinforcing stimuli on behavior.

a seed or insect. These bits of food occur only every now and then, and the distribution of reinforcement is the schedule that maintains the animal’s foraging behavior. If you were watching this bird hunt for food, you would probably see the animal’s head bobbing up and down. You might also see the bird pause and look around, change direction, and move to a new spot. This sort of activity is

however, does not explain it. Although evolution and biology certainly play a role in this foraging episode, perhaps as importantly, so does the schedule of food reinforcement.


aging. In this arrangement, pigeons were able to choose between two food patches by pecking keys

“Progressive-Ratio Schedules” in this chapter; and see discussion of concurrent schedules in Chapter

decreased and more responses were required to produce bits of food—a progressively increasing ratio

would be expected, this change in reinforcement density up and down generated switching back and forth between the two patches. To change patches, however, the bird had to peck a center key—simu-

all contributed to the changing patches. This experiment depicts an animal model of foraging—using schedules of reinforcement to simulate natural contingencies operating in the wild.
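The progressively increasing ratio mentioned above can be sketched as a requirement generator. The Python below is a hypothetical arithmetic progression with an assumed step of 5; actual studies use various step sizes, and geometric progressions are also common. On a progressive-ratio schedule, the last requirement the animal completes before responding stops is called the breakpoint, a common measure of reinforcer efficacy.

```python
def progressive_ratio(step=5):
    """Yield an arithmetically increasing sequence of response
    requirements: step, 2*step, 3*step, ..."""
    requirement = step
    while True:
        yield requirement
        requirement += step

pr = progressive_ratio(step=5)
first_six = [next(pr) for _ in range(6)]
print(first_six)  # → [5, 10, 15, 20, 25, 30]
```

If the animal stopped responding during the fourth ratio, its breakpoint would be the last completed requirement, 15.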

taken. Experimenters risk misinterpreting results when they ignore possible schedule effects. This is because schedules of reinforcement may interact with a variety of other independent variables,

ous punishment, but otherwise behavior on the schedule remains the same. A possible conclusion is

is not suppressed by contingent aversive stimulation. This conclusion is not completely correct, as further experiments have shown that punishment has

ent result occurs. On this kind of schedule, when each operant is punished, the pattern of behavior remains the same and the rate of response declines. Obviously, conclusions concerning the effects of punishment on pattern and rate of response cannot be drawn without considering the schedule of reinforcement maintaining the behavior. That is, the effects of punishment depend on the schedule of reinforcement. These

check is the schedule of reinforcement maintaining the behavior labeled as illegal.
In summary, schedules of reinforcement produce reliable response patterns, which are consistent across different reinforcers, organisms, and operant responses. In our everyday experience,
schedules of reinforcement are so common that we take such effects for granted. We wait for a taxi
to arrive, line up at a store to have groceries scanned, or solve 10 math problems for homework.
These common episodes of behavior and environment interaction illustrate schedules of reinforce-
ment operating in our everyday lives.

Continuous reinforcement, or CRF, is probably the simplest schedule of reinforcement. On this
schedule, every operant required by the contingency is reinforced. For example, every time a hungry
pigeon pecks a key, food is presented. When every operant is
followed by reinforcement, responses are emitted relatively
quickly depending upon the time to consume the reinforcer.
The organism continues to respond until it is satiated. Similarly, when the animal is again deprived of reinforcement and exposed to a CRF schedule, the same pattern of responding followed by satiation is repeated.
satiation is repeated. Figure 5.5 is a typical cumulative record
of performance on continuous reinforcement. As mentioned
in Chapter 4, the typical vending machine delivers products on a schedule of continuous reinforcement.

Conjugate reinforcement is a type of CRF schedule in which properties of reinforcement, including the rate, amplitude, and intensity of reinforcement, are tied to particular properties of the response. For example, loud, energetic, and high-rate operant crying by infants is often conjugately reinforced by the caregiving it produces. Repetitive behaviors, such as those shown by atypically developing children, are automatically reinforced by the perceptual and sensory effects they generate. Additional research has used college students responding to clarify pictures on a computer monitor; in this study, students' responding was sensitive to changes in the intensity of the visual stimulus and the rate of stimulus change.

Recall that resistance to extinction is a measure of persistence when reinforcement is discontinued. This perseverance can be measured in several
ways. The most obvious way to measure resistance to extinction is to count the number of responses
and measure the length of time until operant level is reached. Again, remember from Chapter 4 that

operant level refers to the rate of a response before behavior is reinforced. For example, a laboratory
rat could be placed in an operant chamber with no explicit contingency of reinforcement in effect. The
number of times the animal presses the lever during a 2-h exploration of the chamber is a measure
of operant level, or in this case baseline. Once extinction is in effect, measuring the time taken and
number of responses made until operant level is attained is the best gauge of resistance to extinction.

Although continuing extinction until operant level is obtained provides the best measure of
behavioral persistence, this method requires considerable time and effort. Thus, arbitrary measures
that take less time are usually used. Resistance to extinction may be estimated by counting the number of responses emitted over a constant period of time. For example, reinforcement could be discontinued and the number of responses made in three daily 1-h sessions
counted. Another index of resistance to extinction is based on how fast the rate of response declines
during unreinforced sessions. The point at which no response occurs for 5 min may be used to index
resistance. The number of responses and time taken to that point are used as indicators of behavioral
persistence or resistance to extinction. The important criterion for any method is that it must be
quantitatively related to extinction of responding.
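The 5-min no-response criterion described above can be expressed as a simple computation over a record of response times. The sketch below is our own illustration (the function name and data format are assumptions, not from the text); it counts responses and elapsed time until the first 5-min pause:

```python
def extinction_resistance(response_times, criterion=300.0):
    """Index resistance to extinction from response times (seconds from the
    start of the extinction session). The criterion is met when 300 s
    (5 min) pass with no response; return (responses, time to criterion)."""
    last = 0.0
    for i, t in enumerate(sorted(response_times)):
        if t - last >= criterion:
            # A 5-min pause occurred before this response: the criterion
            # was met at `last + criterion`, and i responses count.
            return i, last + criterion
        last = t
    # No qualifying pause within the record; all responses count.
    return len(response_times), last + criterion
```

For example, for responses at 10, 20, 40, and 400 s, the first 5-min pause follows the response at 40 s, giving an index of 3 responses and 340 s to criterion.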

In one experiment by Hearst, birds were trained on CRF and two intermittent schedules that provided reinforcement for pecking a key. The number of extinction responses that the animals made during three daily
sessions of nonreinforcement was then counted. Basically, Hearst found that the birds made many more
extinction responses after training on an intermittent schedule than after continuous reinforcement.

On CRF schedules, the form or topography of response becomes stereotypical. In a classic study, rats were required to poke their noses anywhere along a horizontal slot to produce food. Although not required by the contingency, the animals frequently responded at the same position on the slot. Only when the rats were placed on extinction did responding along the slot become more variable. Further research with pigeons suggests that response variability may be inversely related to the rate of reinforcement. In other words, as more and more responses are reinforced, less and less variation occurs in the form and location of response.

For example, Eckerman and Lanson reinforced pigeons for pecking on an intermittent schedule. The birds pecked at a horizontal strip and were occasionally reinforced with food. When some responses were reinforced, most of the birds pecked at the center of the strip—although they were not required to do so. During extinction, the animals made fewer responses to the center and more to other positions on the strip. Eckerman and Lanson extended this work in a subsequent study, also with pigeons. They varied the rate of reinforcement and compared response variability under CRF, intermittent reinforcement, and extinction. Responses were stereotypical on CRF and became more variable when the birds were on extinction or on an intermittent schedule.


Operant behavior becomes increasingly variable in form and timing as reinforcement becomes less frequent or predictable. When a schedule of reinforcement is changed from CRF to intermittent reinforcement, the rate of reinforcement declines and response variability increases. A further change in the rate of reinforcement occurs when extinction is started. In this case, the operant is no longer reinforced and response variation is maximal. The general principle appears to be "When things no longer work, try new ways of behaving."

When solving a problem, people usually use a solution that has worked in the past. When the usual solution does not work, most people—especially those with a history of reinforcement for response variability and novelty—try novel approaches to problem solving. People often become more inventive when reinforcement is withheld after a period of success. This increase in topographic variability during extinction after a period of reinforcement has been referred to as resurgence.

In summary, CRF is the simplest schedule of positive reinforcement. On this schedule, every
response produces reinforcement. Continuous reinforcement produces weak resistance to extinction
and generates stereotypical response topographies. Resistance to extinction and variation in form of
response both increase on extinction and intermittent schedules.

On intermittent schedules of reinforcement, some rather than all responses are reinforced. Ratio schedules are response based—these schedules are set to deliver reinforcement following a prescribed number of responses. Interval schedules pay off when one response is made after some amount of time has passed. Interval and ratio schedules may be fixed or variable: on fixed schedules, reinforcement is delivered after a fixed number of responses have occurred, or after a constant amount of time has passed. On variable schedules, response and time requirements vary from one reinforcer to the next. Thus, there are four basic schedules: fixed ratio, variable ratio, fixed interval, and variable interval. In this section, we describe each of these schedules and the characteristic response patterns that they produce. We also present an analysis of some of the reasons for the effects produced by these basic schedules.

In Mechner notation of an FR 25 schedule, the SD is sensory stimulation arising from the operant chamber; the response is a lever press, and food is presented after 25 responses. After reinforcement, the returning arrow indicates that another 25 responses are required to again produce reinforcement. This diagram should remind you that Mechner notation describes the independent variable, not what the organism does. Indeed, an FR 100,000,000 could be easily programmed, but this schedule is essentially an extinction contingency because the animal probably never will complete the response requirement for reinforcement.
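As an illustration of the FR contingency just described, here is a minimal Python sketch (the class and method names are our own, not part of Mechner notation):

```python
class FixedRatio:
    """Minimal sketch of an FR schedule: every nth response produces
    reinforcement, as in the FR 25 arrangement described above."""
    def __init__(self, n):
        self.n = n
        self.count = 0  # responses emitted since the last reinforcer

    def respond(self):
        """Record one response; return True if it produces reinforcement."""
        self.count += 1
        if self.count >= self.n:
            self.count = 0  # the "returning arrow": requirement resets
            return True
        return False

fr25 = FixedRatio(25)
outcomes = [fr25.respond() for _ in range(50)]
# Reinforcement is delivered only on the 25th and 50th responses.
```

Running the schedule for 50 responses yields exactly two reinforcers, on responses 25 and 50.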

In 1957, Ferster and Skinner described the FR schedule and the characteristic effects, patterns,
and rates, along with cumulative records of performance, on about 15 other schedules of reinforcement. Their observations remain valid after
literally thousands of replications: FR schedules produce a rapid run of responses, followed by reinforcement, and then a pause in responding. A cumulative record of performance on a fixed ratio is presented in Figure 5.9. The record shows that pausing develops even at very small FR values, as shown by Crossman. The pause after reinforcement, followed by a rapid run of responses to the next reinforcement, is known as break and run.
During extinction, the break-and-run pattern shows increasing periods of pausing followed by high rates of response. In a cumulative record of a pigeon's performance during extinction, pausing after reinforcement comes to dominate the record.

The pause that follows reinforcement is often called the postreinforcement pause (PRP), to indicate where it occurs. The pause in responding after reinforcement does not occur because the organism is consuming the food. Research has shown that the moment of reinforcement contributes to the length of the PRP, but is not the only

Detailed investigations of PRP on FR schedules indicate that the upcoming ratio requirement is perhaps more critical. As the ratio requirement increases, longer and longer pauses appear in the cumulative record. At extreme ratios there may be almost no responding. If responding occurs at all, the animal responds at high rates even though the number of responses per reinforcement is very high. Researchers have examined the separate effects of the upcoming and just-completed response requirements on FR pausing. The number of responses required and the size of the upcoming ratio both contribute to the pause. Calling the pause a "post"-reinforcement event accurately locates the pause, but the upcoming requirements exert predominant control over the PRP. Thus, contemporary researchers often refer to the PRP as a preratio pause.

Conditioned reinforcers such as money, praise, and successful completion of a task also produce pausing when scheduled on fixed ratios. Imagine that you have a set of 10 math problems to complete for a homework assignment. A good bet is that you would solve the 10 problems, and then take a break before starting on the next set. When constructing a sun deck, one of the authors bundled nails into lots of 50 each. This had an effect on the "nailing behavior" of friends who were helping to build the deck. The response pattern that developed was to put in 50 nails, pause for a while, and then start nailing again. In other words, this simple scheduling of the nails generated a break-and-run pattern typical of FR reinforcement.

These examples of FR pausing suggest that the analysis of FR schedules has relevance for human behavior. We often talk about procrastination and people who put off or postpone doing things. It is likely that some of this delay in responding is similar to the pausing induced by ratio requirements—a period of low or no productivity. Human procrastination may be modeled by animal performance on ratio schedules; translational research linking human productivity to animal performance on ratio schedules would also inform the economics of work. Researchers in behavioral economics often design experiments using FR schedules. The equal cost assumption holds that each response or unit toward completion of the ratio on an FR schedule is emitted with equal force or effort—implying that the cost of response does not change as the animal completes the ratio. But evidence is mounting that the force of response changes as the ratio is completed.

On variable-ratio (VR) schedules, the number of responses required for reinforcement changes after each reinforcer is presented. A variable-ratio schedule is designated by the average number of responses required for reinforcement. For example, a pigeon may peck a key 5 times, then 15, 7, 3, and 20 times to produce each of five reinforcers. Adding these response requirements for a total of 50 and dividing by the 5 reinforcers yields an average of 10 responses, or a VR 10 schedule. The symbol VR in Figure 5.10 indicates that the number of responses required for any one reinforcer is variable. Other than this change, the notation is the same as for an FR schedule.

In general, ratio schedules produce a high rate of response. When VR and FR schedules are compared, responding is typically faster on VR. One reason for this is that pausing after reinforcement is reduced or eliminated when the ratio contingency is changed from fixed to variable. It is not that the PRP never occurs; rather, an animal on VR does not pause as many times, or for as long, after reinforcement. When VR schedules are not excessive, PRPs do occur, although these pauses are typically smaller than those produced by comparable FR schedules. Figure 5.11 portrays a typical pattern of response on a VR schedule of positive reinforcement.

A VR schedule with a low mean ratio can
contain some very small ratio requirements.
For example, on a VR 10 schedule there cannot
be many ratio requirements above 20 responses
because, to offset those high ratios and average
10, there will have to be many very low ratios.
It is the occasional occurrence of a reinforcer right after another reinforcer—the short runs to reinforcement—that reduces the likelihood of pausing on a VR schedule. Variable-ratio schedules with high mean values have fewer short ratios following one another and typically generate longer PRPs.
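The arithmetic behind a VR value can be checked directly. Assuming the five illustrative ratios from the VR 10 example (5, 15, 7, 3, and 20), a short sketch (variable and function names are our own):

```python
import random

# Ratio requirements from the VR 10 example: they sum to 50, and
# 50 responses / 5 reinforcers = an average requirement of 10.
ratios = [5, 15, 7, 3, 20]
mean_requirement = sum(ratios) / len(ratios)  # 10.0 -> a VR 10 schedule

def next_requirement(rng=random):
    """On a VR schedule the programmed ratios vary from one reinforcer
    to the next; here they are sampled with replacement (one simple
    arrangement), so the upcoming requirement is unpredictable."""
    return rng.choice(ratios)
```

Note that a true VR tape typically cycles through a fixed list of ratios; sampling with replacement is one simplification that also approximates a random-ratio arrangement.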
The change from VR reinforcement to extinction initially shows little or no change in rate of response. The later part of the record shows long pausing and short bursts of responses at a rate similar to the original VR 110 performance. The pauses become longer and longer and eventually all responding stops, as extinction takes hold.
An additional issue concerning VR schedules is that the number of responses for reinforcement is unpredictable, but it is not random. In fact, the sequence repeats after all the programmed ratios have been completed and, on some VR schedules, short ratios may occur more frequently than with a random sequence. A schedule with a truly probabilistic pattern of response-to-reinforcement values is termed a random-ratio (RR) schedule. Behavior on RR schedules resembles that on a VR schedule, but these probabilistic schedules "lock you in" to high rates of response, as in gambling, by early runs of payoffs and by the pattern of unreinforced runs.
In everyday life, variability and probability are routine. Thus, ratio schedules involving probabilistic payoffs are more common than the strictly programmed ratios of the laboratory. You may have to hit one nail three times to drive it in, and the next nail may take six hits. Batting in baseball also involves a kind of probabilistic reinforcement. A batter with a .300 average gets 3 hits for 10 times at bat on average, but nothing guarantees a hit for any particular time at bat. The schedule depends on a complex interplay among conditions set by the pitcher and the skill of the batter.

On a fixed-interval (FI) schedule, an operant is reinforced after a fixed amount of time has passed. For example, on an FI 90-s schedule, the first response after 90 s has elapsed produces reinforcement. Following reinforcement, another 90-s period goes into effect, and after this time has passed another response will produce reinforcement. It is important to note that responses made before the interval times out have no programmed effect. A fixed-interval schedule differs from a fixed-time (FT) schedule, on which reinforcement is delivered solely on the basis of time; this is also referred to as a response-independent schedule. [Unless otherwise specified, assume that at least one response is required on whatever schedule is in effect.]
When organisms are exposed to interval contingencies and they have no way of telling time, they typically produce many more responses than the schedule requires. Fixed-interval schedules produce a characteristic steady-state pattern of responding. There is a pause after reinforcement, then a few probe responses, followed by more and more rapid responding to a constant high rate as the interval times out. This pattern of response is called scalloping. Figure 5.13 is an idealized cumulative record of FI performance. Each interval can be divided into three distinct classes—the PRP, followed by a period of gradually increasing rate, and finally a high terminal rate of responding.

Suppose that you have volunteered to be
in an operant experiment. You are brought into
a small room, and on one wall there is a lever
with a cup under it. Other than those objects,
the room is empty. You are not allowed to keep
your watch while in the room, and you are told,
“Do anything you want.” After some time, you
press the lever to see what it does. Ten dollars
fall into the cup. A good prediction is that you
will press the lever again. You are not told this, but the money is scheduled on a fixed interval, say FI 5 min. While you are walking around or doing anything else, the interval is timing out. You check out the contingency by making an occasional response; when nothing happens, you wait until even more time has passed. As the interval continues to time out, the probability of reinforcement increases and your responses are made faster and faster. This pattern of responding is described by the FI scallop.

Following considerable experience with FI 5 min, you may get very good at judging the time period. In this case, you would wait out the interval and then emit a burst of responses. Perhaps you would fill the interval with other activity, responding only when the interval had almost elapsed. This kind of mediating behavior may develop after experience with FI schedules, converting the scallop into a pattern more like break-and-run.
Humans use clocks and watches to keep track of time. Based on this observation, Ferster and Skinner added a "clock" stimulus to the FI schedule; the clock for pigeons was a light that grew in size as the FI interval ran out. The birds produced FI scallops that were much more uniform than without a clock, showing the control exerted by a timing stimulus. When the clock stimulus was reversed, immediately following reinforcement a high response rate occurred, leading to a pause at the end of the interval. The FI contingencies, however, quickly overrode the stimulus control by the reverse clock, shifting the pattern back toward the scallop. When a controlling stimulus runs counter to the schedule, behavior eventually conforms to the schedule rather than the controlling stimulus.
In everyday life, FI schedules are arranged when people set timetables for trains and buses.

Next time you are at a bus stop, take a look at what people do while they are waiting for the next
bus. If a bus has just departed, people stand around and perhaps talk to each other for a while. Then,
the operant of “looking for the bus” begins at a low rate of response. As the interval times out, the
rate of looking for the bus increases and most passengers are now looking for the arrival of the next
bus. The passengers’ behavior approximates the scalloping pattern we have described in this section.
Schedules of reinforcement are a pervasive aspect of human behavior, but we seldom recognize the
effects of these contingencies.
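The FI contingency described in this section can be sketched in a few lines of Python (our own illustration; the class and method names are assumptions, not from the text):

```python
class FixedInterval:
    """Minimal FI sketch: the first response after `interval` seconds have
    elapsed since the last reinforcer is reinforced; earlier responses
    have no programmed effect (the 'probe' responses of the scallop)."""
    def __init__(self, interval):
        self.interval = interval
        self.last_reinforcer = 0.0

    def respond(self, t):
        """A response at time t (seconds); True if it is reinforced."""
        if t - self.last_reinforcer >= self.interval:
            self.last_reinforcer = t  # a new interval starts timing out
            return True
        return False

fi = FixedInterval(300.0)      # FI 5 min, as in the thought experiment
early = fi.respond(100.0)      # probe response before 5 min: not reinforced
on_time = fi.respond(301.0)    # first response after 5 min: reinforced
```

Only the response at 301 s pays off; a response at, say, 400 s falls inside the next interval and again goes unreinforced.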

On a variable-interval (VI) schedule, responses are reinforced after a variable amount of time has passed. On a VI 30-s schedule, for example, the time to each programmed reinforcer changes but the average time is 30 s. The symbol V indicates that the time requirement varies from one reinforcer to the next. The average amount of time to reinforcement gives the value of the schedule.

Interval contingencies are common in the ordinary world of people and other animals. For example, people watch a boiling egg, and are put on hold when they telephone a business. In everyday life, variable time periods occur more often than fixed ones. Waiting in line to get to a bank teller may take 5 min one day and half an hour the next time you go to the bank. A wolf pack may run down prey following a long or short hunt. A baby may cry for 5 s, 2 min, or 15 min before a parent picks up the child. A cat waits varying amounts of time in ambush before a bird becomes a meal. Waiting for a bus involves a variable wait with an added feature: the bus stays at the stop only a given time before leaving. A carpool is an example of a VI contingency with a limited hold. The car arrives after a variable amount of time and waits only briefly for the passenger before driving off. In the laboratory, this limited-hold contingency—where the reinforcer is available for a set time after a variable interval—when added to a VI schedule increases the rate of responding by reinforcing short interresponse times (IRTs). Carpool passengers on this VI schedule with limited hold are ready for pick-up and rush out of the door when the car arrives.
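As a rough sketch of the carpool analogy (our own construction, with assumed names), a limited hold can be added to a variable interval like this:

```python
import random

class VariableIntervalLH:
    """Sketch of a VI schedule with a limited hold: after a variable
    interval sets up the reinforcer, it remains available only for
    `hold` seconds—like the carpool that waits briefly, then leaves."""
    def __init__(self, intervals, hold, rng=None):
        self.hold = hold
        # One interval is sampled for the current reinforcer set-up.
        self.setup_at = (rng or random).choice(intervals)

    def respond(self, t):
        """True if a response at time t (s) falls inside the hold window."""
        return self.setup_at <= t <= self.setup_at + self.hold

vi = VariableIntervalLH([20.0, 30.0, 40.0], hold=5.0, rng=random.Random(1))
caught = vi.respond(vi.setup_at + 1.0)    # within the hold: reinforced
missed = vi.respond(vi.setup_at + 10.0)   # hold expired: reinforcer missed
```

Because only responses falling shortly after set-up are reinforced, the arrangement selectively reinforces short interresponse times, which is why the limited hold raises response rates.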

Figure 5.17 portrays the pattern of response generated on a VI schedule. On this schedule, rate
of response is moderate and steady. The pause after reinforcement that occurs on FI does not usually
appear in the VI record. Notably, this steady rate of response is maintained during extinction, as Ferster and Skinner observed. Because the rate of response remains steady and moderate to high, VI performance is often used as a baseline for evaluating other independent variables. Rate of response on VI schedules may increase or decrease as a result of experimental manipulations. For example, tranquilizing drugs such as chlorpromazine decrease the rate of response maintained on VI schedules.

An ideal baseline would be one in which there is as little interference as possible from other vari-
ables. There should be a minimal number of factors tending to oppose any shift in behavior that
might result from experimental manipulation. A variable-interval schedule, if skillfully programmed,
comes close to meeting this requirement.

In summary, VI contingencies are common in everyday life. These schedules generate a mod-
erate steady rate of response, which is resistant to extinction. Because of this characteristic pattern,
VI performance is frequently used as a baseline to assess the effects of other variables, especially
performance-altering drugs.

The major independent variable in operant conditioning is the program for delivering consequences, called the schedule of reinforcement. Regardless of the species, the shape of the response curve for a given schedule is similar. Scalloping, break and run, and other patterns were observed in a variety of organisms and were highly uniform and regular. Recently, the analysis of schedule effects has been extended to the phenomenon of biofeedback and the apparent willful control of physiological processes and bodily states.

Biofeedback usually is viewed as conscious, intentional control of bodily functions, such as
brainwaves, heart rate, blood pressure, temperature, headaches, and migraines—using instruments
that provide information or feedback about the ongoing activity of these systems. An alternative
view is that biofeedback involves operant responses of bodily systems regulated by consequences,
producing orderly changes related to the schedule of “feedback.”

In one experiment, electrodes were attached to the underside of the forearm to measure electrical activity produced by muscles while participants squeezed an exercise ball. They were instructed to contract their arm "in a certain way" to activate a tone and light; thus, their job was to produce the most tone/light presentations they could. Participants were randomly assigned to groups that differed in the schedule of feedback. In basic research, many sessions are run with the same schedule until some standard of stability is reached. In this applied experiment, however, 15-min sessions were conducted on three consecutive days with a 15-min extinction session added at the end.

Cumulative records were not collected to depict response patterns, presumably because the length and number of sessions did not allow for stable response patterns to develop. Instead, overall response rates were reported for the muscle-pumping action of the exercise ball. The rates differed with the schedules of feedback, indicating the operant function of electrical activity in the forearm muscles. Together with studies of biofeedback for autonomic activity, these findings suggest that responses of the somatic nervous system also are under tight operant control of the schedule of feedback. Further studies of feedback schedules and bodily responses clearly are warranted, but have been lacking in recent years. In this regard, we recommend the use of steady-state, single-subject designs that vary the interval or ratio schedule value over a wide range to help clarify how schedules of feedback regulate seemingly automatic bodily activity.

On a progressive-ratio (PR) schedule of reinforcement, the ratio requirements for reinforcement increase after each reinforcer is delivered. For example, the first requirement might be 5 responses. Once the animal emits 5 responses resulting in reinforcement, the next ratio requirement might be 10 responses, then 15, and so on—the progressively increasing requirements give the schedule its name. At some point in the progression of ratios, the animal fails to achieve the requirement. The highest ratio value completed on the PR schedule is designated the breakpoint.

The type of progression on a PR schedule may be arithmetic, as when the difference between two ratio requirements is a constant value such as 10 responses. Another kind of progression is geometric, as when each ratio requirement is a constant multiple of the previous one. Pausing on both arithmetic and geometric PR schedules increased as the ratio requirement progressed. Response rates on arithmetic PR schedules decreased in a linear manner—as the ratio size increased, there was a linear decrease in response rates. Response rates on geometric PR schedules, however, showed a negative deceleration toward a low and stable response rate—as ratio size increased geometrically, response rates rapidly declined and then leveled off. Thus, the relationship between response rates and ratio requirements of the PR schedule depends on the type of progression—arithmetic or geometric.
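The two kinds of progression, and the breakpoint measure, can be sketched as follows (a hypothetical illustration; the function names and example values are ours, not the text's):

```python
def arithmetic_ratios(start, step, n):
    """Arithmetic PR progression: each requirement exceeds the last by a
    constant number of responses (e.g., 5, 15, 25, ... with step 10)."""
    return [start + step * i for i in range(n)]

def geometric_ratios(start, factor, n):
    """Geometric PR progression: each requirement is a constant multiple
    of the last (e.g., 5, 10, 20, 40, ... with factor 2)."""
    return [start * factor ** i for i in range(n)]

def pr_breakpoint(ratios, completed):
    """The breakpoint is the highest ratio value actually completed.
    `completed` is how many requirements the animal finished before
    failing (0 means it never completed the first ratio)."""
    return ratios[completed - 1] if completed > 0 else 0

ratios = arithmetic_ratios(5, 10, 6)  # [5, 15, 25, 35, 45, 55]
bp = pr_breakpoint(ratios, 4)         # animal quit on the 5th ratio -> 35
```

Comparing the two generators makes the difference in ratio growth concrete: with the same starting value, geometric requirements quickly outrun arithmetic ones, which is consistent with the faster decline in response rates described above.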
These relationships can be described by mathematical equations, and this is an ongoing area of research. Research on PR schedules uses the giving-up point, or breakpoint, as a way of measuring reinforcement effectiveness, especially of drugs like cocaine. The breakpoint for a drug indicates how much operant behavior the drug will sustain at a given dose. For example, a rat might self-administer morphine on a PR schedule as the dose size is varied and breakpoints are determined for each dose size. It is also possible to compare breakpoints for different drugs, assessing the drugs' relative reinforcement effectiveness. In these tests, it is important to recognize the limits of conclusions about which drugs are more "addictive" and how the breakpoint varies with increases in drug dose.
Progressive-ratio schedules allow researchers to assess the reinforcing effects of drugs prescribed to control problem behavior. A drug prescribed to control hyperactivity might also be addictive—an effect recommending against its use. The drug Ritalin® (methylphenidate) is prescribed for attention-deficit/hyperactivity disorder (ADHD) and is chemically related to Dexedrine® (d-amphetamine). Amphetamine is a drug of abuse, as are other stimulants, such as cocaine. Thus, people who are given methylphenidate to modify ADHD might develop addictive behavior similar to behavior maintained by amphetamine. In one study, human drug-abusing volunteers were used to study the reinforcing effectiveness of methylphenidate on a PR schedule; completing the ratio requirement resulted in oral self-administration of the drug. Additional monetary contingencies
were arranged to ensure continued participation in the study. As shown in Figure 5.18, the results
indicated that the number of responses to the breakpoint increased at the intermediate dose of meth-
ylphenidate and d-amphetamine compared with the placebo control. Thus, at intermediate doses
methylphenidate is similar in reinforcement effectiveness to d-amphetamine. One conclusion is that
using Ritalin® to treat ADHD may be contraindicated due to its potential for abuse, and interventions based on increasing behavioral momentum may be a preferred strategy, as previously noted in this textbook (Bolin, Reynolds, Stoops, & Rush, 2013, provide an assessment of d-amphetamine self-administration on PR schedules).

In another study, obese-prone and lean-prone mice responded on a PR3 schedule, requiring an increase or step of 3 responses for each pellet, using a linear progression. The breakpoint was the highest ratio completed within a 15-min period on this schedule. After establishing the PR3 baselines, both obese-prone and lean-prone mice were administered low and high doses of an anorexic (appetite-suppressing) drug and were retested on the PR3 schedule. The results for breakpoints showed that the obese-prone mice did not reliably differ from the lean-prone mice; the reinforcement effectiveness of food appeared similar for both genotypes. Also, the anorexic drug reduced PR3 breakpoints in a dose–response manner, but this effect did not differ by genotype. One problem with this conclusion is that animals were only given 15 min to complete the ratio requirement, and some animals did not achieve stable baselines on the PR3 schedule, even after extended testing.
In natural settings, the cost of foraging may be indexed by the distance traveled or covered in a day. Viewed as behavior, traveling for food is an operant controlled by its consequences—the allocation of food arranged by the location or patch. In the laboratory, a PR schedule can simulate increasing foraging costs: on such a schedule, increasingly more work or effort is required to obtain the same daily food ration. Typical operant PR experiments are conducted in an open economy where animals receive bits of food during the session and supplemental food after the experimental session to maintain adequate body weight. To model the problem of travel for food in the wild, a closed economy is used where animals that meet the behavioral requirements receive all of their food within the experimental setting. The open–closed distinction has important implications, especially for food consumption and maintenance of body weight.
A novel experiment by a group of biologists from Brazil and England arranged a variation on the PR schedule, using wheel running as the work required for food. For rats in the contingent (CON) group, every 3 days the distance required to maintain free-feeding levels was increased above the distance set for the previous 3 days. The NON group was housed and treated identically to the CON group, but food did not depend on wheel running. During a baseline period, all rats were given 3 days of free food and access to running wheels. The animals consumed on average 24 g of food a day for an average consumption rate of 1 g per hour. On average, rats ran 1320 m/day in the wheels during the baseline phase. The next phase involved arranging a PR schedule for the rats in the CON group. To obtain the initial PR value, the 1320 m of baseline wheel running was divided by 24 g of food, yielding 1 g of food for each 55 m. A programmed increase then went into effect: each PR value remained in effect for 3 days, at which point the distance requirement increased again.

(Contingencies based on Fonseca et al., 2014, and personal communication from Dr. Robert Young, the English collaborator.)

Figure 5.20 shows the distance traveled for food during each 3 days of the experiment by the two groups. The line joining the grey triangles depicts the increasing distance on the PR schedule for animals to obtain their daily free-feeding level of food, six 4-g pellets. For the rats in the NON group, the distance traveled each day is low and constant, about 1300 m on average, as in baseline. These rats maintained daily food intake at about 24 g, or six pellets, and showed increasing body weight. The rats in the CON group, in contrast, had to keep up with the scheduled distance.
Although distance traveled matched the early PR values and rats received the six 4-g pellets, food consumption initially declined with the onset of wheel running on the PR schedule. Following this initial drop, food consumed partially recovered; at higher PR values, however, the rats' average distance traveled no longer approximated the PR value—even though rats did complete part of the requirement. The animals traveled less than required by the PR value, giving up some of the daily food ration that they could have obtained. One possibility is that the animals were sensitive to energy balance or homeostasis—balancing as best as possible energy expenditure by wheel running with energy intake from food consumption. In fact, body weight initially increased, but then leveled off and decreased as distance traveled fell considerably off the PR requirement and food availability decreased. At PR values beyond 8000 m, food intake and body weight plummeted.
The PR schedule and closed economy used in this study generated a severe energy imbalance, which ultimately would result in the death of the animal. Other research addressed in this textbook shows that rats develop activity anorexia when faced with a restricted food supply and free access to running wheels. The animals run more and more, eat less at each meal, and die of self-starvation. The rats on the progressive travel-for-food requirement likewise began to starve under these conditions—demonstrating how food reinforcement contingencies may induce life-threatening, non-homeostatic behavior.

We have described typical performances generated by different schedules of reinforcement. The
patterns of response on these schedules take a relatively long time to develop. Once behavior has
stabilized, showing little change from day to day, the organism’s behavior is said to have reached a
steady state. The break-and-run pattern that develops on FR schedules is a steady-state performance
and is only observed after an animal has considerable exposure to the contingencies. Similarly, the
steady-state performance generated on other intermittent schedules takes time to develop. When
an organism is initially placed on any schedule of reinforcement, typical behavior patterns are not
consistent or regular. This early performance on a schedule is called a transition state. Transition states occur between initial exposure to a schedule and the development of steady-state performance.

Consider how you might get an animal to press a lever 100 times for each presentation of food (an FR 100 schedule). You would first establish lever pressing on CRF. Once steady-state performance is established on CRF, you are faced with the problem of how to program the steps from CRF to FR 100. Notice that in this transition there is a large shift or step in the ratio of bar presses to reinforcement. This problem has been studied using a progressive-ratio schedule, as we described earlier in this chapter. The ratio of responses following each run to reinforcement is gradually increased, allowing researchers to investigate the behavioral effects of step size and criteria for stability. If you simply move from CRF to the large FR value, the animal will probably show ratio strain in the sense that it pauses longer and longer after reinforcement. One reason is that the time between successive reinforcements contributes to the length of the PRP. As the interreinforcement interval (IRI) lengthens, the PRP grows; because the pause in turn extends the IRI and is controlled by it, the animal eventually stops responding. Thus, there is a positive feedback loop between increasing PRP length and the time between reinforcements in the shift from CRF to the large FR schedule.

Transitions from one schedule to another play an important role in human development. Devel-
opmental psychologists have described periods of life in which major changes in behavior typically
occur. One of the most important life stages in Western society is the transition from childhood to
adolescence. Although this phase involves many biological and behavioral processes, one of the
most basic changes involves schedules of reinforcement.

When a youngster reaches puberty, parents, teachers, peers, and others require more behavior and more skillful performance than they did during childhood. A young child's reinforcement schedules are usually simple, regular, and immediate. In childhood, food is given when the child says "Mom, I'm hungry" after playing a game of tag, or is scheduled at regular times throughout the day. In adolescence, the same child may be expected to go to the refrigerator, open packages and cans, sometimes cook, get out plates, eat the food, and clean up. Of course, any part of this sequence may or may not occur depending on the disciplinary practices of the parents. Although most adolescents adapt to this transition state, others may show signs of ratio strain as the schedules of reinforcement thin from relatively continuous to intermittent reinforcement.

Many other behavioral changes may occur during the transition from childhood to adolescence. One account describes the changes in reinforcement schedules that arrive in adolescence:

With adolescence, the picture may change quite drastically and sometimes even suddenly. Now the adolescent may have to take a job demanding a substantial amount of work for the money, which heretofore he received as a free allowance. Furthermore, he now needs more money than when he was younger to interact with people he deals with. A car or a motorcycle takes the place of the bicycle. Even the price of services such as movies and buses is higher. Money, particularly for boys, frequently becomes a necessary condition for dealing with the opposite sex. The amount of work required in school increases. Instead of simple arithmetic problems, the adolescent may now have to solve problems whose solutions will require much trial and error.

There are other periods of life in which our culture demands large shifts in schedules of reinforcement. One of these occurs when a person moves from employment to forced or elected retirement. In terms of schedules, retirement is a large and rapid change in the rate of work-related consequences. For example, a professor who has enjoyed an academic career is no longer reinforced for research and teaching by the university community. Social consequences for these activities may have included approval by colleagues, academic advancement and income, the interest of students, and intellectual discussions. Upon retirement, the rate of social reinforcement is reduced or completely eliminated. It is, therefore, not surprising that retirement is an unhappy time of life for many people. Although retirement is commonly viewed as a problem of old age, a behavioral analysis points instead to the abrupt change in schedules of reinforcement.

As we have seen, the use of drugs is operant behavior maintained in part by the reinforcing
effects of the drug. One implication of this analysis is that reinforcement of an incompatible
response (i.e., abstinence) can reduce the probability of taking drugs. The effectiveness of an
abstinence contingency depends on the magnitude and schedule of reinforcement for nondrug
use (e.g., Higgins, Bickel, & Hughes, 1994).

In applied behavior analysis, contingency management involves the systematic use
of reinforcement to establish desired behavior and the withholding of reinforcement or
punishment of undesired behavior (Higgins & Petry, 1999). An example of contingency
management is seen in a study using reinforcement schedules to reduce cigarette smoking.
Roll, Higgins, and Badger (1996) assessed the effectiveness of three different schedules of
reinforcement for promoting and sustaining drug abstinence. These researchers conducted
an experimental analysis of cigarette smoking because cigarettes can function as reinforce-
ment, smoking can be reduced by reinforcement of alternative responses, and it is relatively
more convenient to study cigarette smoking than illicit drugs. Furthermore, cigarette smok-
ers usually relapse within several days following abstinence. This suggests that reinforce-
ment factors regulating abstinence exert their effects shortly after the person stops smoking
and it is possible to study these factors in a short-duration experiment.

Sixty adults, who smoked between 10 and 50 cigarettes a day, took part in the experiment. The smokers were not currently trying to give up cigarettes. Participants were randomly assigned to a progressive reinforcement group, a fixed reinforcement group, or a control group. They were told to begin abstaining from cigarettes on Friday evening so that they could pass a carbon monoxide (CO) test for abstinence on Monday morning. Each person in the study went for at least 2 days without smoking before reinforcement for abstinence began. On Monday through Friday, participants agreed to take three daily CO tests. These tests could detect prior smoking.

Twenty participants were randomly assigned to the progressive reinforcement group. The progressive schedule involved increasing the magnitude of reinforcement for remaining drug free. The first CO test indicating abstinence was worth $3.00. Each subsequent consecutive CO sample that indicated abstinence increased the amount of money participants received by $0.50. The third consecutive CO test passed earned a bonus of $10.00. Thus, passing the first test yielded $3.00, passing the second test yielded $3.50, passing the third test yielded $14.00 ($4.00 and bonus of $10.00), and passing the fourth test yielded $4.50. In addition, a substantial response cost was added for failing a CO test. If the person failed the test, the payment for that test was withheld and the value of payment for the next test was reset to $3.00. Three consecutive CO tests indicating abstinence following a reset returned the payment schedule to the value at which the reset occurred (Roll et al., 1996, p. 497), supporting renewed efforts to achieve abstinence.
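The payment contingency just described is itself a small algorithm, and a code sketch may make the rules concrete. This is an illustrative reconstruction only: the bonus-on-every-third-consecutive-pass reading and the exact post-reset behavior are assumptions where the description is ambiguous.

```python
# Illustrative sketch of the progressive payment contingency: $3.00 for the
# first abstinent CO test, +$0.50 for each consecutive pass, a $10.00 bonus
# on every third consecutive pass (an assumption consistent with the dollar
# values quoted), $0 and a reset to $3.00 after a failed test, and a return
# to the pre-reset value after three consecutive post-reset passes.

def progressive_payments(results, base=3.00, step=0.50, bonus=10.00):
    """Return the payment for each CO test; True in `results` = passed."""
    payments = []
    value = base           # scheduled payment for the next passed test
    streak = 0             # consecutive passes, used for the bonus
    reset_value = None     # payment value in force when the last reset occurred
    since_reset = 0        # consecutive passes since that reset
    for passed in results:
        if passed:
            streak += 1
            payments.append(value + (bonus if streak % 3 == 0 else 0.0))
            if reset_value is not None:
                since_reset += 1
                if since_reset == 3:   # schedule returns to pre-reset value
                    value, reset_value = reset_value, None
                    continue
            value += step
        else:                          # payment withheld, schedule resets
            payments.append(0.0)
            if reset_value is None:
                reset_value = value
            value, streak, since_reset = base, 0, 0
    return payments

print(progressive_payments([True, True, True, True]))
# [3.0, 3.5, 14.0, 4.5] — matches the worked dollar amounts in the text
```

Note how the escalating values plus the reset make sustained abstinence far more lucrative than intermittent abstinence, which is the point of the progressive design.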

A second group of participants received a fixed amount of money for each passed CO test. There were no bonus points for consecutive abstinences and no resets. The total amount of money that could be earned was the same as that arranged for reinforcement on the progressive schedule. The schedule of payment for the control group was the same as the average payment obtained by the other groups, but for these people the payment was given no matter what their carbon monoxide levels were. The control group was, however, asked to try to cut their cigarette consumption, reduce CO levels, and maintain abstinence.

Participants in the progressive and fixed reinforcement groups passed most of their abstinence tests, while the control group only passed about 40% of the tests. The effects of the different schedules are also apparent in the percentage of participants who passed three consecutive tests for abstinence and then resumed smoking over the 5 days of the experiment. Only 22% of those on the progressive schedule resumed smoking, a lower percentage than in the fixed and control groups. Thus, the progressive schedule of reinforcement was superior in terms of preventing the resumption of smoking (after a period of abstinence).

Figure 5.21B shows the percentage of smokers who gave up cigarettes throughout the experiment. Again, a strong effect of schedule of reinforcement is apparent. Around 50% of those on the progressive reinforcement schedule remained abstinent for the 5 days of the experiment, a higher percentage than in the other two groups.
In a subsequent experiment, Roll and Higgins (2000) found that a progressive reinforcement schedule with a response–cost contingency increased abstinence from cigarette use compared with a fixed reinforcement schedule. Overall, these results indicate that a progressive reinforcement schedule, combined with an escalating response cost, is an effective short-term intervention for abstinence from
smoking. Further research is necessary to see whether a progressive schedule maintains
abstinence after the schedule is withdrawn. Long-term follow-up studies of progressive and
other schedules are necessary to assess the lasting effects of reinforcement schedules on absti-
nence. What is clear, at this point, is that schedules of reinforcement may be an important
component of stop-smoking programs (see more on contingency management in Chapter 13).

A schedule of reinforcement describes the arrangement of discriminative stimuli, operants, and consequences, and the concept is central to the understanding of behavior regulation in humans and other animals. The research on schedules and performance patterns is a major component of the science of behavior, a science that progressively builds on previous experiments and theoretical analysis. Schedules of reinforcement generate consistent, steady-state performances involving runs of responses and pausing that are characteristic of each contingency. Operant behavior on schedules can serve as an animal model of foraging in the wild, and intermittent reinforcement plays a role in most human behavior, especially social interaction.

To improve the description of schedules as contingencies of reinforcement, we have introduced the Mechner system of notation. This notation is useful for programming contingencies in the laboratory or analyzing complex environment–behavior relations. In this chapter, we described the basic schedules of reinforcement and the characteristic response patterns they generate. Adult humans have not shown classic scalloping or break-and-run patterns on FI schedules, and the performance differences of humans relate to language or verbal behavior as well as histories of ratio reinforcement. Ratio schedules generally produce shorter interresponse times and higher overall rates of response than interval schedules. Adding a limited hold to a VI schedule increases the response rate. On interval schedules, the higher the rate of reinforcement the greater the behavioral momentum.
The study of behavior during the transition between schedules of reinforcement has not been well researched, due to the boundary problem of steady-state behavior. Transition states, however,
play an important role in human behavior—as in the shift in the reinforcement contingencies from
childhood to adolescence or the change in schedules from employment to retirement. Reinforce-
ment schedules also have applied importance, and research shows that cigarette smoking can
be regulated by a progressive schedule combined with an escalating response–cost contingency.
Finally, in the Advanced Section of this chapter, we addressed molecular and molar accounts of
response rate and rate differences on schedules of reinforcement. We emphasized the analysis of
IRTs for molecular accounts, and the correlation of overall rates of response and reinforcement
for molar explanations.

www.thefuntheory.com/ This website shows how fun can be used as reinforcement to change everyday behavior, such as increasing physical activity and cleaning up litter. See if you can think up new ways to use reinforcement schedules in programming fun to regulate important forms of human behavior in our culture.

www.youtube.com/watch?v=I_ctJqjlrHA This YouTube video discusses basic schedules of rein-
forcement, and B. F. Skinner comments on variable-ratio schedules, gambling, and the belief
in free will.

www.pigeon.psy.tufts.edu/eam/eam2.html This module is available for purchase and demon-
strates basic schedules of reinforcement as employed in a variety of operant and discrimination
procedures involving animals and humans.

http://opensiuc.lib.siu.edu/cgi/viewcontent.cgi?article=1255&context=tpr A review of the impact of Ferster and Skinner’s publication of Schedules of Reinforcement, including the contribution of schedules to the operant analysis of choice, behavioral pharmacology, and microeconomics of gambling. Contingency detection and causal reasoning by infants, children, and adults are also discussed.
www.wadsworth.com/psychology_d/templates/student_resources/0534633609_sniffy2/sniffy/
download.htm If you want to try out shaping and basic schedules with Sniffy the virtual rat, go
to this site and use a free download for 2 weeks of fun. After this period, you will have to pay to
continue your investigation of operant conditioning and schedules of reinforcement.

2. Infrequent reinforcement generates responding that is persistent. What is this called?

3. Mechner notation describes:

4. Resurgence happens when:

5. Schedules that generate predictable stair-step patterns are:

7. Schedules that combine time and responses are called:

8. The shape of the response pattern generated by an FI is called a:

9. Human performance on FI differs from animal data due to:

10. Behavior is said to be in transition when it is between:
