
Abstract
Prompt dependency is an often referenced but little studied problem. The current study evaluated 2 iterations of differential reinforcement (DR) for overcoming prompt dependency and facilitating skill acquisition with 4 individuals who had been diagnosed with an autism spectrum disorder (ASD). Preference and reinforcer assessments were conducted to determine moderately and highly preferred reinforcers for each participant. Three sets of word–picture relations were taught to each of the participants using 1 of 3 DR procedures. Reinforcement for independent responses entailed delivery of the highest preference stimulus across all 3 procedures. Consequences for prompted responses entailed delivery of the highest preference stimulus (no DR), delivery of the moderately preferred stimulus (DR high/moderate), or no delivery of reinforcers (DR high/extinction). Results indicated that the DR high/moderate condition was most effective for 3 of 4 participants, whereas the DR high/extinction condition was most effective for the remaining participant.
Key words: autism, differential reinforcement, prompt dependency, discrimination
We thank the New England Center for Children (NECC) and Northeastern University for their contributions to the field of applied behavior analysis. We also extend appreciation to Amy Constantine and Pamela Sinclair for their help with data collection. Catia Cividini-Motta and William H. Ahearn are now at Western New England University as well as NECC. Correspondence concerning this article should be addressed to Catia Cividini-Motta, The New England Center for Children, 33 Turnpike Road, Southborough, Massachusetts 01772 (e-mail: ccividini@necc.org). doi: 10.1002/jaba.67
Prompt dependency is an often referenced problem encountered in the education of persons with disabilities (Oppenheimer, Saunders, & Spradlin, 1993). Prompts and prompt fading are common teaching strategies used for producing discriminative repertoires while striving to minimize errors (e.g., Fisher, Kodak, & Moore, 2007). However, for certain individuals, attempts to fade response prompts are unsuccessful, and correct responses are emitted only when the controlling prompt is presented. This phenomenon was defined by Clark and Green (2004) as prompt dependency. Applied research examining procedures for decreasing and preventing the development of prompt dependency is limited.
One variable that may affect prompt dependency is reinforcement. During skill acquisition, differential reinforcement is often in place such that correct responses produce reinforcement while incorrect responses are placed on extinction. Providing the same reinforcer for both correct prompted and correct independent responses, as is common in errorless teaching procedures, may produce prompt dependency. Olenick and Pear (1980) and Touchette and Howard (1984) used differential reinforcement, either alone or in combination with other procedures, to teach tacts and auditory–visual discriminations, respectively. Both studies found that providing denser reinforcement schedules for unprompted responses than for prompted responses enhanced acquisition.
In a recent evaluation of the effects of differential reinforcement on skill acquisition, Karsten and Carr (2009) manipulated the quality of reinforcers rather than the rate of reinforcement (as in Olenick & Pear, 1980) and found that differential reinforcement was more efficient for producing mastery in the few cases in which results were differentiated.
Taken together, these studies suggest that differential reinforcement might facilitate skill acquisition. However, prior research has not demonstrated the degree of differential reinforcement necessary to produce mastery in participants with a reported history of prompt dependency. Therefore, the purpose of this study was to extend the research of Karsten and Carr (2009) by investigating the effects of two iterations of differential reinforcement: high-preference reinforcers versus moderate-preference reinforcers, and high-preference reinforcers versus extinction.
METHOD
Participants
Participants were four individuals who lived in a residential facility for persons with autism spectrum disorders (ASD) or related disabilities and had been diagnosed with an ASD. The participants did not have a history of exposure to differential reinforcement procedures in the context of skill acquisition. Eddie was a 16-year-old boy who had been attending the residential facility for 10 years. He communicated using a voice-output device and manual signs. He followed two-step directives and had recently been diagnosed with a seizure disorder. Bill was a 12-year-old boy who had been attending the facility for 8 years. He communicated vocally as well as through the use of manual signs. Lucas was a 13-year-old boy who had been at the facility for 3 years. He communicated through the use of a voice-output device. Frank was a 38-year-old man who had resided at the residential facility for 23 years. Bill, Eddie, Lucas, and Frank followed multistep directives and, at the time of this study, they were not receiving any psychotropic medication.
All participants met an inclusion criterion based on demonstration of prompt dependency. First, each student was nominated by their clinical care providers as demonstrating prompt dependency. Next, an experimenter conducted at least two observations in which the participant completed a match-to-sample discrimination task. Trial-by-trial data were collected on whether the participant waited for the teacher prompt, which was delayed up to 10 s. Students were selected as potential participants if they waited for the prompt on at least 80% of the trials across two nine-trial sessions. In addition, we reviewed the participants’ progress on the objectives of their individual education plans. If data indicated that the potential participant moved quickly through the prompting hierarchy of a learning objective, he or she was excluded from the study.
Setting and Materials
All sessions for Eddie, Bill, and Lucas during the pretraining phase were conducted in a room (1.5 m by 3 m) adjacent to their classroom. The room was equipped with a one-way panel, a table, two chairs, and a video camera. Pretraining sessions were conducted once or twice per week with each participant. Training sessions were conducted in the participants’ classroom or at their residence. All sessions of the pretraining and training phases for Frank were conducted in a room at his residence. Training sessions were conducted one to three times per day, typically 4 to 5 days per week, based on the participants’ availability.
Materials included tokens (poker chips) and a token board; sheets of colored construction paper that were associated with the different conditions during the pretraining assessments; pictures of a poker chip and a smiley face (3.8 cm by 3.8 cm); a red square (3.8 cm by 3.8 cm); data sheets; and a timer. Materials for the training phase included pictures (3.8 cm by 3.8 cm) of items commonly found in the participants’ environment and their corresponding Portuguese sight words; three-choice array data sheets; a three-stimulus presentation board; slant boards; preferred edible items; timers; pictures of a smiley face and of a poker chip; and a red square. Two different-colored circles were included in the pretraining assessments for Eddie, Bill, and Lucas because the free-operant response was target touching. For Frank, the free-operant response was shirt folding because he continued to engage in target touching when reinforcers were withheld.
Response Measurement
The dependent variables were the rate of responding per condition during the reinforcer assessment and the percentage of independent responses per session in the training phase. During the reinforcer assessments, data were collected on frequency of target touching (or shirts folded) per assessment component and the number of times each initial link was selected during the concurrent-chains reinforcer assessment. Initial-link selection was defined as the first contact of the participant’s hand with one of the items presented on the table. Target touching was defined as the participant making open-hand contact in an alternating manner (single hand contact with one and then the other target) with two different-colored targets placed on the table in front of him. Each alternation was scored as a single response, and repetitive contact with the same target was ignored. Shirt folding was defined as grabbing a shirt from the pile, laying it on the table, folding the shirt by matching the corners, and then placing it with the folded shirts. During the training phase, data were collected on the number of independent and prompted responses and the number of sessions to mastery. Independent responses were defined as any response emitted prior to the teacher prompt. If the step prescribed was a 2-s delay with manual guidance at the forearm, an independent response occurred when the participant touched one of the comparison stimuli before the 2-s delay. Prompted responses were defined as any response emitted following the teacher prompt. Errors were defined as the participant touching the incorrect comparison stimulus either before or after receiving a prompt.
Interobserver Agreement and Procedural Integrity
A second observer independently recorded data on the target responses. Interobserver agreement for the reinforcer assessment was calculated by dividing each session into 10-s intervals, calculating agreement scores for each interval, and then averaging these scores across the total number of intervals for each session. Agreement scores were calculated by dividing the smaller count by the larger count and converting the result to a percentage. Interobserver agreement for the concurrent-chains reinforcer assessment was calculated on a trial-by-trial basis. The total number of agreements was divided by the number of agreements plus disagreements and the result was converted to a percentage. Interobserver agreement was collected for over 33% of the sessions across both assessments. Mean agreement scores for the reinforcer assessment were 95% (range, 90% to 100%) for Eddie, 93% (range, 90% to 97%) for Bill, 96% (range, 92% to 100%) for Lucas, and 100% for Frank. Mean agreement scores for the concurrent-chains assessment were 98% (range, 97% to 100%) for Eddie, 99% (range, 98% to 100%) for Bill, 97% for Lucas (range, 94% to 100%), and 100% for Frank.
A second observer independently collected data during over 33% of sessions across training conditions. Interobserver agreement was calculated on a trial-by-trial basis. The total number of agreements was divided by the number of agreements plus disagreements and the result was converted to a percentage. Mean agreement scores for the training phase were 99% (range, 98% to 100%) for Eddie and Bill, 96% (range, 94% to 100%) for Lucas, and 97% (range, 95% to 100%) for Frank.
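The two agreement calculations described above can be sketched in a few lines of code. This is an illustrative Python sketch, not part of the study: the function names and example data are hypothetical, chosen only to mirror the interval-by-interval (smaller count divided by larger count, averaged across 10-s intervals) and trial-by-trial (agreements divided by agreements plus disagreements) formulas.

```python
def trial_by_trial_ioa(obs1, obs2):
    """Trial-by-trial agreement: agreements / (agreements + disagreements) * 100."""
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100.0 * agreements / len(obs1)

def interval_ioa(counts1, counts2):
    """Count-per-interval agreement: for each 10-s interval, divide the smaller
    response count by the larger (score 1.0 if both observers recorded 0),
    then average the interval scores across the session."""
    scores = []
    for c1, c2 in zip(counts1, counts2):
        scores.append(1.0 if max(c1, c2) == 0 else min(c1, c2) / max(c1, c2))
    return 100.0 * sum(scores) / len(scores)

# Hypothetical records: nine trials scored correct (1) or incorrect (0) by two
# observers, and per-interval response counts from one free-operant session.
print(trial_by_trial_ioa([1, 1, 0, 1, 1, 1, 0, 1, 1], [1, 1, 0, 1, 0, 1, 0, 1, 1]))
print(interval_ioa([3, 4, 0, 5], [3, 2, 0, 5]))
```

The two metrics fit their respective measurement systems: trial-by-trial scoring suits discrete-trial data, whereas the smaller/larger method tolerates small count discrepancies within free-operant intervals.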
Procedural integrity data were collected during the training phase to ensure that the teaching procedures were implemented as described in the protocol. Teacher performance was evaluated on whether the appropriate sample and comparison stimuli were presented as prescribed, whether the comparison stimuli were presented after the participant touched the sample stimulus, whether the prompt provided by the teacher corresponded to the prescribed prompt, and whether data were recorded after trial completion. Data were also collected on whether the materials necessary for the prescribed reinforcement condition were available. A procedural integrity score was calculated for each session by dividing the number of correctly implemented trials by the total number of trials and converting the result to a percentage. The mean procedural integrity scores were 99.7% (range, 97% to 100%) for Eddie, 99% (range, 96% to 100%) for Bill, 95% (range, 93% to 100%) for Lucas, and 94% (range, 92% to 100%) for Frank.
Procedure
The two iterations of differential reinforcement compared in this study required the identification of reinforcers of various potency for each of the participants. Therefore, a succession of reinforcer assessments was completed.
Reinforcer Assessment 1: Multiple schedule. Reinforcer assessment procedures were based on those of Smaby, MacDonald, Ahearn, and Dube (2007). The multiple-schedule assessment investigated whether praise, token (or edible item), and token (or edible item) plus praise functioned as reinforcers. In the multiple-schedule assessments, an extinction component alternated with a reinforcement component a total of three times in the component sequence. The extinction component always preceded a reinforcement component. Each component was correlated with a colored sheet of construction paper to facilitate discrimination between conditions. Colors were randomly assigned to each component but remained the same across all participants and assessments. The frequency of multiple-schedule sessions varied across days and participants, but a maximum of one full sequence was completed per day (i.e., extinction, token, extinction, praise, extinction, token plus praise).
In both extinction and reinforcement components, two different-colored circles (or a pile of unfolded shirts) were placed on the table in front of the participant and were the targets that the participant touched during the free-operant response, target touching (see definition under description of dependent variables). Frank’s target response was shirt folding. Before starting the session, the teacher completed a forced exposure trial. Specifically, he or she stated the name of the color associated with each component (e.g., “red” for extinction) and manually guided the participant to complete the target response (e.g., target touching) six times. The programmed consequence associated with that component was provided after each guided response. Before beginning the component, the teacher then stated the color again and started the timer. Extinction sessions lasted 5 min or until the participant stopped responding for 1 min, whichever came first. Only responding that occurred in the last minute of the extinction component was scored to ensure that any possible extinction burst observed at the beginning of the component did not interfere with the results of the assessment. Reinforcer sessions lasted 1 min, with the programmed consequence provided after each target response. For the praise condition, the teacher delivered short statements (e.g., “great job, Eddie”) contingent on each response. For the token or edible condition, the teacher delivered a token contingent on each target response. When the participant had earned all six tokens, the teacher stopped the timer and prompted the participant to trade in his tokens for an edible item. Once the edible item was consumed, the timer started again. For Frank, this condition was modified because he did not have a token economy as part of his daily program. Therefore, instead of earning a token for each response, Frank earned a small piece of a highly preferred food for each target response. 
For the token or edible item plus praise condition, procedures were identical to the token or edible item condition except that the teacher delivered praise contingent on each target response and on delivery of the trade-in items or edible items (Frank only).
Reinforcer Assessment 2: Concurrent chains. A concurrent-chains preference assessment was conducted to assess participants’ relative preferences for the reinforcers. The reinforcing efficacy of each stimulus also was determined by comparing the response rate during each reinforcer component to the response rate during the last minute of the previous extinction component. Preference was determined by calculating the percentage of trials each stimulus was selected. During this assessment, four small clear plastic bins were placed on the table in front of the participant. Each bin was placed upside down over the discriminative stimuli associated with each condition. For Frank, we used the same colored pieces of construction paper as those in the previous assessment. For Eddie, Bill, and Lucas, the bin associated with the token condition contained a picture of the token and the bin associated with praise contained a smiley face. The bin associated with the token plus praise contained both a picture of a token and a picture of a smiley face, but both pictures were taped together so that the participant could select them at the same time. The bin associated with extinction contained a red square.
Before beginning the assessment, each participant was exposed to a pretraining session consisting of 40 exposure trials (10 for each stimulus) during which the teacher manually guided the participant to engage in the target response. The teacher also conducted one exposure trial for each initial link before the assessment session. During the session, the teacher said “choose” at the beginning of each trial, provided the consequence associated with the selected bin, and initiated another trial after delivering the reinforcer. Each session consisted of 20 trials, and placement of the bins was rotated after each trial. One session was completed with each participant. For Eddie, the assessment was repeated because of the lack of response differentiation.
Training. The teacher taught the participants to match printed words to their corresponding picture using a match-to-sample procedure. Stimuli were nine printed Portuguese words (Table 1). Portuguese words were selected to eliminate the possibility of participants’ previous experience with these stimuli. Each set of three words was randomly assigned to one of three reinforcement conditions. The first author and a master’s-level behavior analyst selected words of similar length and letter sequence to promote similar task difficulty across conditions. On the table in front of the participant were a slant board and the discriminative stimuli associated with the relevant reinforcement condition (the same as in the reinforcer assessment). At the beginning of each trial, the teacher showed the participant the sample word and said “match.” After the participant emitted the required observing response (e.g., touching the printed word), the comparison stimuli (pictures of the items described by the printed words) were randomly presented on a three-stimulus array slant board in front of the participant. The teacher followed the prompting procedure as prescribed at the beginning of the session: Step 1 consisted of immediate full manual guidance; Step 2 was a 2-s delay with manual guidance at the forearm; Step 3 was a 2-s delay with manual guidance at the upper arm; Step 4 was a 2-s delay with light touch; and Step 5 was no prompts. If the participant touched the correct comparison, the teacher delivered the programmed consequence and then recorded the data. If the participant touched the incorrect comparison, the teacher removed the comparison stimuli from the slant board and then recorded data. An error-correction procedure (e.g., prompting the correct response) was not included to rule out possible avoidance of manual guidance. Attempts by the participant to touch additional comparison stimuli were blocked or ignored.
The programmed consequence followed prompted responses until the participant emitted the first independent response.
Table 1
Printed Words Used During the Training Phase
| Participant | Condition | Stimuli |
| --- | --- | --- |
| Bill | No DR | Bolsa, cama, meia |
| | DR high/mod | Bolo, carro, melao |
| | DR high/ext | Bone, calca, medalia |
| Eddie | No DR | Bolo, carro, melao |
| | DR high/mod | Bone, calca, medalia |
| | DR high/ext | Bola, casa, mesa |
| Lucas | No DR | Bolo, carro, melao |
| | DR high/mod | Bolsa, cama, meia |
| | DR high/ext | Bone, calca, medalia |
| Frank | No DR | Bolsa, cama, meia |
| | DR high/mod | Bola, casa, mesa |
| | DR high/ext | Bolo, carro, melao |
Note. Each set of stimuli was randomly assigned to one of the reinforcement conditions for each participant.
Sessions consisted of nine trials unless the participant met the criterion to discontinue a session. The criterion to increase a step at the end of the session was seven of nine correct responses, and the criterion to discontinue a session and begin another session at the previous prompt step was two consecutive errors or three errors in the same session. Mastery criterion originally was set at two consecutive sessions with at least eight of nine correct and independent responses (see Eddie, Set 1). We then established a more stringent mastery criterion of three consecutive sessions at eight of nine correct or above (Eddie, Lucas, and Bill) to ensure that acquisition had occurred. The modification was deemed necessary when Eddie’s performance with Set 2 decreased after he met the original mastery criterion. For Frank, an even more stringent criterion was selected (four sessions with at least eight of nine correct) due to an abrupt increase in independent and correct responding. If, after meeting the mastery criterion for one set of words, the participant had not made significant progress with the remaining sets, the reinforcement program for one unmastered set was changed to the most effective reinforcement program. This procedure was repeated with the remaining set if acquisition had not occurred when the mastery criterion was met for the second set of words. We used an adapted alternating treatments design (Sindelar, Rosenberg, & Wilson, 1985) to compare responding under the three reinforcement conditions.
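The within-session decision rules above (advance a prompt step after seven of nine correct; discontinue after two consecutive errors or three total errors) can be summarized as a small decision function. This is an illustrative sketch, not code from the study; the function name and return labels are hypothetical.

```python
def session_outcome(trials):
    """trials: list of booleans (True = correct response), in trial order.

    Returns 'discontinue' (restart at the previous prompt step),
    'advance' (move to the next prompt step), or 'repeat'.
    """
    consecutive_errors = 0
    total_errors = 0
    for correct in trials:
        if correct:
            consecutive_errors = 0  # a correct response breaks the error run
        else:
            consecutive_errors += 1
            total_errors += 1
            # Session ends early on 2 consecutive errors or 3 errors overall.
            if consecutive_errors >= 2 or total_errors >= 3:
                return "discontinue"
    # Step-advancement criterion: at least 7 of 9 correct responses.
    return "advance" if sum(trials) >= 7 else "repeat"
```

Note that with nine-trial sessions the two rules interlock: any completed session has at most two errors, so a session that runs to completion necessarily meets the seven-of-nine advancement criterion.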
No differential reinforcement (no DR). The most potent and preferred reinforcer was delivered contingent on prompted and independent responses (token plus praise for Eddie, Bill, and Lucas). For Frank, the most potent reinforcer was edible item plus praise; however, he always chose the edible item only during the concurrent-chains preference assessment. Therefore, an edible item was delivered for prompted and independent responses in the no-DR condition.
Differential Reinforcement 1 (high/mod). A moderately potent and preferred reinforcer (praise) was delivered contingent on prompted responses, and the most potent and preferred reinforcer (praise plus token for Eddie, Lucas, and Bill; edible item for Frank) was delivered contingent on independent responses.
Differential Reinforcement 2 (high/ext). No reinforcement was delivered contingent on prompted responses, and the most potent and preferred reinforcer (praise plus token for Eddie, Lucas, and Bill; edible item for Frank) was delivered contingent on independent responses.
RESULTS AND DISCUSSION
Figures 1, 2, and 3 show the results of the multiple-schedule and concurrent-chains assessments for all four participants. Data depicted on the top panel of Figure 1 indicate that the response frequency for Eddie was highest in the token plus praise component compared to the other components. For Bill, response frequency was highest during the token alone component. For Lucas, response frequency was the same across components. Responding was highest in the edible item plus praise condition for Frank. These data suggest that all three stimuli functioned as reinforcers. Figure 2 displays the results of the concurrent-chains assessment for Bill, Frank, and Lucas. Lucas and Bill showed a preference for the token plus praise condition, whereas Frank chose the edible condition at every opportunity. Results for Eddie’s concurrent-chains preference assessment are presented in Figure 3. Eddie chose each condition a similar number of times, suggesting that his responding was not under discriminative control. Therefore, we completed another session of the multiple-schedule assessment, and the results are shown in the bottom panel of Figure 3. Similar to the previous assessment (Figure 1), response frequency was higher under the token plus praise condition.

Figure 1. Response frequency during the multiple-schedule assessment of token, praise, and token and praise for Eddie, Bill, and Lucas, and edible item, praise, and edible item and praise for Frank.

Figure 2. Percentage of selections for Bill and Lucas (top panels) and Frank (bottom panel) during the concurrent-chains assessment.

Figure 3. Percentage of selections during the initial concurrent-chains assessments for Eddie (top panels) and response frequency during the additional multiple-schedule assessment for Eddie (bottom panel).
Figures 4, 5, 6, and 7 show the results of training for each participant. Data are displayed in a multiple baseline design across behaviors to facilitate visual inspection. Eddie (Figure 4) mastered the set of stimuli taught using the DR high/mod procedure more rapidly than the sets of stimuli taught using the other procedures. In addition, Eddie made immediate progress on the remaining sets after the reinforcement procedures were changed to DR high/mod. Similar results were obtained for Bill (Figure 5). Similarly, Lucas (Figure 6) met the mastery criterion for the DR high/mod condition first. When the teacher switched to the DR high/mod procedure for the stimuli originally associated with the DR high/ext condition, Lucas met the mastery criterion under the no-DR condition. These results suggest that the effects of the DR high/mod condition generalized to the no-DR condition or that both procedures were effective. Results for Frank are depicted in Figure 7, and they differ slightly from those of Bill and Lucas. Frank met the mastery criterion for the DR high/ext condition first. Therefore, the teacher replaced the no-DR procedure with the DR high/ext procedure. Frank acquired this second set of words and simultaneously met the mastery criterion for the DR high/mod condition.

Figure 4. Percentage of independent responses for Eddie when the teacher delivered the highest preference reinforcer (no DR), moderate-preference reinforcer (DR high/mod), and no reinforcer (DR high/ext) for prompted responses, while delivering the highest preference reinforcer for independent responses.

Figure 5. Percentage of independent responses for Bill when the teacher delivered the highest preference reinforcer (no DR), moderate preference reinforcer (DR high/mod), and no reinforcer (DR high/ext) for prompted responses, while delivering the highest preference reinforcer for independent responses.

Figure 6. Percentage of independent responses for Lucas when the teacher delivered the highest preference reinforcer (no DR), moderate preference reinforcer (DR high/mod), and no reinforcer (DR high/ext) for prompted responses, while delivering the highest preference reinforcer for independent responses.

Figure 7. Percentage of independent responses for Frank when the teacher delivered the highest preference reinforcer (no DR), moderate preference reinforcer (DR high/mod), and no reinforcer (DR high/ext) for prompted responses, while delivering the highest preference reinforcer for independent responses.
Acquisition did not occur in the no-DR condition for any of the participants (except for Lucas following acquisition of Sets 1 and 2 with DR high/mod) until the effective DR condition was implemented for that set of stimuli. For Eddie, Bill, and Lucas, the DR condition under which a moderately preferred reinforcer was provided for prompted responses was most effective, whereas for Frank, the DR condition under which no reinforcer was provided for prompted responses was most effective. Thus, based on our participants’ performance during training and their pre-experimental history of prompt dependency (i.e., teacher reports, failure to initiate responses on at least 80% of pre-experimental trials), results of the current study suggest that differential reinforcement that favors independent responses can facilitate acquisition of discrimination tasks and decrease prompt dependency. The most effective arrangement of differential reinforcement, however, may differ across learners.
The present findings replicate and expand on results of past research that has evaluated the effects of differential reinforcement. These data support the results of Olenick and Pear (1980) and Karsten and Carr (2009), who also demonstrated that differential reinforcement of prompted and independent responses facilitated skill acquisition. As in the study completed by Karsten and Carr, the current investigation extends previous research (Olenick & Pear, 1980; Touchette & Howard, 1984) by manipulating reinforcer quality instead of rate. The study also extends Karsten and Carr by including a second differential reinforcement condition (DR high/ext) that was most effective for one participant. Finally, the current investigation is the first to recruit participants with a teacher-reported history of and pre-experimental performance consistent with prompt dependency.
Differential reinforcement may be particularly important for individuals whose responding appears to be dependent on prompts, especially if the dependence is due to a history of receiving the same consequence for prompted and unprompted responses. If differential reinforcement is selected to prevent or remediate prompt dependency, practitioners should systematically assess the reinforcing efficacy of the stimuli delivered for prompted and independent responding. In practice, a multiple-schedule assessment (Reinforcer Assessment 1 in the current study) should suffice as long as differential responding is observed across items. It may also be important to consider the extent of each participant’s history of prompt dependency. In the current study, the procedure that had the greatest disparity between consequences for prompted and independent responses was most effective for Frank. Because Frank was the oldest of the participants, he may have had a longer history of reinforcement for prompted responses.
Several limitations of the current investigation should be noted. First, a baseline condition was not included during training. Although baseline data could have helped to demonstrate that responding was similar across conditions prior to training, we did not include baseline because the participants reportedly did not have any history with the training stimuli. Results of the training phase for Frank also suggest that some carryover effects may account for improved performance on Set 3. A second possibility is that the increase in performance for Set 3 was not the result of carryover effects, but simply reflected a delayed pattern of acquisition across all three sets. Lastly, our mastery criterion differed across participants due to variability in performance. Future studies should select a more stringent criterion from the onset.
Practitioners who are considering the implementation of differential reinforcement for skill acquisition should apply the methods employed here with caution. The current study employed a constant delay for prompt fading. This procedure allowed the participant to emit independent responses before prompt delivery. However, delayed prompts also allowed the participant to emit errors. Although these errors did not seem to hinder learning, previous research suggests that an errorless learning procedure may lead to better attending and accuracy (Terrace, 1963). Practitioners should, therefore, ensure that the prompt type and fading procedures selected are effective for each of their clients (for reviews, see Demchak, 1990; Libby et al., 2008).
Future studies should continue to assess the effects of differential reinforcement on skill acquisition as well as alternative methods for addressing prompt dependency. Because research on prompt dependency is scarce, future studies should focus on determining the variables responsible for the development of prompt dependency and, subsequently, ways to prevent its development.
References
Clark, K. M., & Green, G. (2004). Comparison of two procedures for teaching dictated-word/symbol relations to learners with autism. Journal of Applied Behavior Analysis, 37, 503–507. https://doi.org/10.1901/jaba.2004.37-503
Demchak, M. (1990). Response prompting and fading methods: A review. American Journal on Mental Retardation, 94, 603–615.
Fisher, W. W., Kodak, T., & Moore, J. W. (2007). Embedding an identity-matching task within a prompting hierarchy to facilitate acquisition of conditional discriminations in children with autism. Journal of Applied Behavior Analysis, 40, 489–499. https://doi.org/10.1901/jaba.2007.40-489
Karsten, A. M., & Carr, J. E. (2009). The effects of differential reinforcement of unprompted responding on skill acquisition of children with autism. Journal of Applied Behavior Analysis, 42, 327–334. https://doi.org/10.1901/jaba.2009.42-327
Libby, M. E., Weiss, J. S., Bancroft, S., & Ahearn, W. H. (2008). A comparison of most-to-least and least-to-most prompting on acquisition of solitary play skills. Behavior Analysis in Practice, 1, 37–43.
Olenick, D. L., & Pear, J. J. (1980). The differential reinforcement of correct responses to probes and prompts in picture-name training with severely retarded children. Journal of Applied Behavior Analysis, 13, 77–89. https://doi.org/10.1901/jaba.1980.13-77
Oppenheimer, M., Saunders, R. R., & Spradlin, J. E. (1993). Investigating the generality of the delayed-prompt effect. Research in Developmental Disabilities, 14, 425–444. https://doi.org/10.1016/0891-4222(93)90036-J
Sindelar, P., Rosenberg, M., & Wilson, R. (1985). An adapted alternating treatments design for instructional research. Education and Treatment of Children, 8, 67–76.
Smaby, K., MacDonald, R. P. F., Ahearn, W. H., & Dube, W. V. (2007). Assessment protocol for identifying preferred social consequences. Behavioral Interventions, 22, 311–318. https://doi.org/10.1002/bin.242
Terrace, H. S. (1963). Discrimination learning with and without “errors.” Journal of the Experimental Analysis of Behavior, 6, 1–27. https://doi.org/10.1901/jeab.1963.6-1
Touchette, P. E., & Howard, J. S. (1984). Errorless learning: Reinforcement contingencies and stimulus control transfer in delayed prompting. Journal of Applied Behavior Analysis, 17, 175–188. https://doi.org/10.1901/jaba.1984.17-175
Received June 12, 2012
Final acceptance February 27, 2013
Action Editor, Amanda Karsten


