
Julie E. Cubicciotti, Jason C. Vladescu, and Kenneth F. Reeve
Caldwell University
Regina A. Carroll
University of Nebraska Medical Center’s Munroe-Meyer Institute
Lauren K. Schnell
Hunter College
Abstract
Children with autism spectrum disorder are typically taught conditional discriminations using a match-to-sample arrangement. Consideration should be given to the temporal order in which antecedent stimuli (the sample and comparison stimuli) are presented during match-to-sample trials, as various arrangements have been used in the extant literature. The purpose of the current study was to compare the effects of four stimulus presentation orders on the acquisition of auditory–visual conditional discriminations. The study included participants from a clinically relevant population (three children with autism spectrum disorder), employed clinically relevant teaching procedures, and included two presentation formats not included in previous comparison evaluations (simultaneous and sample-first with re-presentation conditions). Results were found to be learner-specific; that is, a different stimulus presentation format was most efficient for each participant. We provide suggestions to evaluate stimulus control topographies and enhance experimental control in match-to-sample arrangements.
Key words: autism spectrum disorder, conditional discriminations, discrete trial training, instructional efficiency, matching to sample, stimulus control
This article is based on a thesis submitted by the first author, under the supervision of the second author, at Caldwell University in partial fulfillment for the requirements of the Master of Arts in Applied Behavior Analysis.
Address correspondence to Jason C. Vladescu, Department of Applied Behavior Analysis, Caldwell University, 120 Bloomfield Avenue, Caldwell, NJ 07006. E-mail: jvladescu@caldwell.edu
doi: 10.1002/jaba.530
© 2018 Society for the Experimental Analysis of Behavior
When developing the procedural arrangement of match-to-sample (MTS) trials, consideration should be given to the temporal order in which antecedent stimuli (i.e., the comparison and sample stimuli) are presented. The sample stimulus could be presented prior to (i.e., sample-first arrangement) or following (i.e., comparison-first arrangement) the presentation of the comparison stimuli. Additionally, the sample and comparison stimuli could be presented simultaneously (i.e., simultaneous arrangement). The effects of temporal order of stimulus presentations during MTS are particularly applicable to consumers with autism spectrum disorder, considering the frequency with which conditional discriminations are established using MTS paradigms for this population. Consumers with autism spectrum disorder, as opposed to their peers of typical development, often require explicit teaching procedures to facilitate differential responding to environmental stimuli (Grow & LeBlanc, 2013). One difficulty that necessitates and complicates instruction is that faulty stimulus control (from the perspective of the teacher) may develop with consumers with autism spectrum disorder (i.e., weak stimulus control and/or inappropriate stimulus control, such as stimulus overselectivity, stimulus bias, or position bias; Pilgrim, 2015). At present, it is unclear how the temporal order of stimulus presentation may influence the development of stimulus control.
In the sample-first procedure, trials begin with the presentation of a sample stimulus, followed by the presentation of two or more comparison stimuli (e.g., Doughty & Saunders, 2009; Petursdottir & Aguilar, 2016). For a sample stimulus that is transient (e.g., the spoken word “lion”), Green (2001) recommends re-presenting the stimulus every 2 s until the individual responds to a comparison stimulus. This procedural variation was recommended to address the relatively brief window of time the sample stimulus is present, and to increase the likelihood that the individual has the opportunity to observe the sample stimulus. The sample-first procedure has a long history of use in the basic literature with both nonhuman and human participants (e.g., Cumming & Berryman, 1961; Saunders & Spradlin, 1989; Sidman & Tailby, 1982; Skinner, 1950) and has been used with some frequency in the applied literature (e.g., Carp, Peterson, Arkel, Petursdottir, & Ingvarsson, 2012; Groskreutz, Karsina, Miguel, & Groskreutz, 2010; Sprinkle & Miguel, 2012). Additionally, several researchers have explicitly recommended the use of sample-first arrangements when teaching conditional discriminations to consumers with autism spectrum disorder (e.g., Green, 2001).
In the comparison-first arrangement, trials begin with the presentation of two or more comparison stimuli, followed by the presentation of a sample stimulus. In the extant literature, applied researchers have used the comparison-first arrangement with some frequency (e.g., Delfs, Conine, Frampton, Shillingsburg, & Robinson, 2014; Dittlinger & Lerman, 2011; Fisher, Kodak, & Moore, 2007; Grannan & Rehfeldt, 2012; Grow, Carr, Kodak, Jostad, & Kisamore, 2011; Grow, Kodak, & Carr, 2014; Hanney & Tiger, 2012; Kodak et al., 2015; McGhan & Lerman, 2013). Further, several early intervention (EI) manuals for consumers with autism spectrum disorder describe the use of the comparison-first arrangement (Leaf & McEachin, 1999; Maurice, Green, & Luce, 1996; Sundberg & Partington, 1998), and this arrangement has been used to eliminate comparison-only control of responding (Carp et al., 2012; Doughty & Saunders, 2009; McIlvane, Kledaras, Stoddard, & Dube, 1990).
A third antecedent stimulus presentation format, the simultaneous presentation procedure, involves presenting the sample and comparison stimuli at the same time. Multiple applied studies have employed this procedure (e.g., Cividini-Motta & Ahearn, 2013; Fisher, Pawich, Dickes, Paden, & Toussaint, 2014; Hausman, Ingvarsson, & Kahng, 2014; Paden & Kodak, 2015; Slocum, Miller, & Tiger, 2012; Sy & Vollmer, 2012; Walker & Rehfeldt, 2012).
Although researchers have used the sample-first, comparison-first, and simultaneous procedures with success, the resulting studies do not establish the conditions under which one procedural variation may be more efficient than another. Comparison studies allow researchers to evaluate relative efficiency and may provide helpful information to practitioners as to the procedural arrangement that is most beneficial for the consumers they serve.
In this vein, Petursdottir and Aguilar (2016) investigated the effects of antecedent stimulus presentation order during a computer-presented
MTS task by comparing the sample-first and comparison-first methods for three children of typical development. In the sample-first condition, the experimenters required the participants to make a trial-initiation response, then presented the sample stimulus, and then presented four comparison stimuli. In the comparison-first condition, the experimenters required the participants to make a trial-initiation response, then presented four comparison stimuli, and then presented the sample stimulus. In both conditions, correct responses produced a 4-s computer animation and sound clip. Incorrect responses produced a 4-s blackout, followed by the next trial. All participants demonstrated mastery-level responding faster in the sample-first condition, and these results were replicated for all participants.
The findings of Petursdottir and Aguilar (2016) suggest relative superiority of the sample-first procedure when teaching auditory– visual conditional discriminations. However, it is unclear whether these findings hold true for consumers with autism spectrum disorder. More specifically, Petursdottir and Aguilar used a computer to present trials, arranged differential reinforcement of correct responses from the onset of training, and omitted prompting and prompt-fading strategies. When conducting auditory–visual conditional discrimination training with consumers with autism spectrum disorder, instruction is likely to be delivered via tabletop procedures, use nondifferential reinforcement of prompted and unprompted responses (at least during the early stages of teaching; Vladescu & Kodak, 2010), and arrange prompts and prompt-fading strategies. Future research should evaluate the effects of variations in these procedural aspects on the relative efficiency of antecedent stimulus presentation formats. Additionally, Petursdottir and Aguilar did not include a condition to evaluate the stimulus order procedure recommended by Green (2001) when sample stimuli are transient (sample-first with re-presentation), or a condition that commonly appears in the applied literature (simultaneous presentation).
Therefore, the purpose of the present study was to evaluate the effects and relative efficiency of the sample-first, comparison-first, sample-first with re-presentation, and simultaneous presentation formats on the acquisition of auditory–visual conditional discriminations for three participants with autism spectrum disorder. We evaluated the relative efficiency of these conditions by collecting data on training sessions to mastery, training trials to mastery, and total training time. We included total training time as a dependent variable because it is possible that evaluating responding across different measurement scales (e.g., training sessions vs. training time) may yield different conclusions regarding the relative efficiency of training conditions (e.g., Black et al., 2016). Further, we used instructional components that are commonly used to teach consumers with autism spectrum disorder; that is, instruction was delivered via tabletop materials, prompting and prompt-fading strategies were used, and nondifferential reinforcement of unprompted and prompted correct responses was used during initial training sessions.
Method
Participants
Three children with autism spectrum disorder participated. A parent or teacher of each participant completed the Gilliam Autism Rating Scale-Third Edition (Gilliam, 2013) to document behaviors characteristic of autism spectrum disorder. Ratings for all three participants indicated a very likely probability of autism spectrum disorder. All three participants received intervention based on the principles of applied behavior analysis (ABA) in a suburban public-school classroom.
Zeek was an 8-year, 11-month-old male who had begun receiving services based on the principles of ABA at 20 months of age. He obtained standard scores of 62 (Qualitative Description: Extremely Low) and 45 (Extremely Low) on the Expressive Vocabulary Test-Second Edition (EVT-2; Williams, 2007) and the Peabody Picture Vocabulary Test-Fourth Edition (PPVT-4; Dunn & Dunn, 2007), respectively. Zeek scored into Level 3 of both the visual perceptual/match-to-sample and listener domains of the Verbal Behavior-Milestones Assessment and Placement Program (VB-MAPP; Sundberg, 2008) and scored 32 on the Barriers Assessment of the VB-MAPP.
Max was a 3-year, 11-month-old male who had been receiving ABA-based services for approximately 10 months. He obtained standard scores of 79 (Moderately Low) and 70 (Moderately Low) on the EVT-2 and the PPVT-4, respectively. Max scored into Level 2 on both visual perceptual/ match-to-sample and listener domains of the VB-MAPP, and he scored 32 on the Barriers Assessment of the VB-MAPP.
Adam was a 4-year, 3-month-old male who had been receiving ABA-based services for approximately 15 months. He obtained standard scores of 88 (Low Average) and 69 (Extremely Low) on the EVT-2 and the PPVT-4, respectively. Adam scored into Level 3 of the visual perceptual/match-to-sample domain and into Level 2 of the listener domain of the VB-MAPP, and he scored 22 on the Barriers Assessment of the VB-MAPP.
Setting and Materials
All sessions were conducted in a designated room in each participant's home. The room contained a worktable, chairs, and the materials necessary for the sessions. Session materials included data sheets, pens, a digital timer, preferred stimuli, stimulus binders, and a video camera. The experimenter sat across from or next to the participant at the table during sessions. All sessions were recorded using a video camera. We created four stimulus binders (one for each condition) per participant to present trials. Each 2-in stimulus binder consisted of the following components: a sheet of colored paper (based on the results of a color preference assessment) attached to the cover of the binder, nine trial sheets (one for each trial in a session) consisting of a white piece of paper containing a horizontal array of three pictures, and a blank colored (specific to the condition) piece of paper atop each trial sheet. Comparison stimuli were either realistic colored pictures of animals or scaled outlines of states filled in with black, approximately 5.08 cm × 5.08 cm in size. The blank piece of colored paper on top of the binder provided an opportunity for the participants to engage in a differential observing response prior to each session (participants were required to touch the paper and tact the corresponding color). A small colored square (specific to the condition) was placed on the table between the participant and the stimulus binder and provided an opportunity for participants to engage in a trial-initiation response prior to each trial to ensure they were oriented to the materials when the sample stimuli were presented (Saunders & Williams, 1998). A trial-initiation response may be particularly important when sample stimuli are transient. As opposed to an observing response, in which participants make a response to the sample, the trial-initiation response was emitted prior to the presentation of the sample (Green, 2001).
Content Validity
To gauge current practices in stimulus presentation in clinical work, we surveyed three behavior analysts and eight behavior technicians prior to beginning the study. These individuals had an average of 5 years (range, 3 to 13.5 years) of experience working with individuals with autism spectrum disorder. Respondents viewed a PowerPoint presentation depicting the stimulus presentation formats and then completed a survey to report which presentation format they used in practice. Of those surveyed, five respondents reported using the sample-first procedure most frequently, five respondents reported using the comparison-first procedure most frequently, one respondent reported using the simultaneous procedure most frequently, and no respondents reported using the sample-first with re-presentation procedure most frequently.
Design, Dependent Variable, and Interobserver Agreement
During the treatment evaluation, acquisition of auditory–visual conditional discriminations in the sample-first, the comparison-first, the sample-first with re-presentation, and the simultaneous presentation conditions was compared using an adapted alternating-treatments design (Sindelar, Rosenberg, & Wilson, 1985) embedded within a nonconcurrent multiple-baseline-across-participants design. During each session, the experimenter recorded on a data sheet unprompted and prompted correct and incorrect responses, session duration, and comparison responses prior to the presentation of the sample stimulus during comparison-first trials. Unprompted correct responses were defined as the participant emitting the target response prior to the delivery of the prompt. An unprompted incorrect response was defined as the participant emitting a response other than the target response (i.e., error of commission) or no response (i.e., error of omission) prior to the delivery of a prompt. A prompted correct response was defined as the participant emitting the target response after the delivery of the prompt. A prompted incorrect response was defined as the participant emitting an error of commission or omission after the delivery of the prompt.
As in previous evaluations, the experimenter also measured responding to the comparison array prior to the delivery of the sample stimulus in the comparison-first condition (McIlvane et al., 1990; Petursdottir & Aguilar, 2016). We only scored responses to the comparison stimuli in this condition as unprompted or prompted correct or incorrect after the experimenter presented the sample stimulus.
To record session duration, a digital timer was started immediately before beginning the first trial of the session and stopped immediately following the completion of the last trial of the session. The relative efficiency of the four conditions was evaluated by comparing the total training sessions, total training trials, and total training time to mastery or until termination criteria were met. The total training sessions and trials were calculated by adding all of the sessions and trials until the mastery or termination criteria were met for each condition, respectively. Total training time was calculated by adding the cumulative session time required to reach mastery for all targets in each training condition.
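The three efficiency measures described above reduce to simple aggregation. As a purely illustrative sketch (the function name, constant, and session durations below are hypothetical, not study data), in Python:

```python
# Illustrative sketch of the three efficiency measures described above.
# All names and values here are hypothetical examples, not study data.

TRIALS_PER_SESSION = 9  # 3 targets x 3 trials per session

def efficiency_summary(session_durations_s):
    """Return (total sessions, total trials, total training time in seconds)
    for one condition, given the duration of each session run until the
    mastery or termination criteria were met."""
    total_sessions = len(session_durations_s)
    total_trials = total_sessions * TRIALS_PER_SESSION
    total_time_s = sum(session_durations_s)
    return total_sessions, total_trials, total_time_s

# A hypothetical condition with 23 sessions of 120 s each:
print(efficiency_summary([120.0] * 23))  # (23, 207, 2760.0)
```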
Two secondary independent observers scored at least 33% of sessions in vivo or from video for each condition across phases for interobserver agreement (IOA) purposes. Trial-by-trial IOA was calculated by dividing the number of agreements by the number of agreements plus disagreements and converting to a percentage. An agreement was defined as both observers recording the same participant response during a trial and a disagreement was defined as the observers recording different participant responses during a trial. Mean IOA scores for Zeek, Max, and Adam were 99% (range, 83% to 100%), 99% (range, 83% to 100%), and 100%, respectively, across conditions. In addition, the secondary observer collected data on session duration for IOA purposes. Total duration IOA was calculated by dividing the smaller duration by the larger duration for each session and converting to a percentage. Mean duration IOA scores for Zeek, Max, and Adam were 94% (range, 70% to 100%), 95% (range, 83% to 100%), and 94% (range, 77% to 100%), respectively.
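The two agreement formulas above can be expressed compactly. The following Python sketch is illustrative only; the function names and the hypothetical observer records are ours, not part of the study materials.

```python
# Illustrative sketch of the two IOA formulas described above.
# Function names and session records are hypothetical, not study data.

def trial_by_trial_ioa(primary, secondary):
    """Agreements divided by agreements plus disagreements, as a percentage."""
    assert len(primary) == len(secondary)
    agreements = sum(a == b for a, b in zip(primary, secondary))
    return agreements / len(primary) * 100

def total_duration_ioa(duration_1, duration_2):
    """Smaller duration divided by larger duration, as a percentage."""
    return min(duration_1, duration_2) / max(duration_1, duration_2) * 100

# Hypothetical 9-trial session: U+ = unprompted correct, P+ = prompted
# correct, U- = unprompted incorrect. Observers disagree on trial 6 only.
primary   = ["U+", "U+", "P+", "U-", "U+", "U+", "P+", "U+", "U+"]
secondary = ["U+", "U+", "P+", "U-", "U+", "U-", "P+", "U+", "U+"]

print(round(trial_by_trial_ioa(primary, secondary), 1))  # 88.9 (8 of 9)
print(total_duration_ioa(135.0, 150.0))                  # 90.0
```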
Preference Assessments
Parents of each participant completed an experimenter-created survey to identify putative edible reinforcers. The experimenter conducted a paired-stimulus assessment (Fisher et al., 1992) using the top 10 edibles from the survey prior to the beginning of the evaluation. Prior to each session, the experimenter conducted a brief multiple-stimulus-without-replacement assessment (Carr, Nicolson, & Higbee, 2000) using the top five edibles identified from the paired-stimulus preference assessment, in an attempt to control for shifts in preference. The first three items selected were used as the putative reinforcers for the subsequent session. The experimenter also conducted a paired-stimulus color preference assessment (Heal, Hanley, & Layer, 2009) using colored pieces of paper and items to determine participant preference for 10 colors. Four colors that were approached during an approximately equal percentage of trials (to reduce any bias towards one color) were assigned as condition-correlated stimuli.
Target Identification and Assignment
To identify targets for the treatment comparison, we first assembled a pool of potential targets based on each participant’s individual educational goals. The experimenter conducted four pretest trials for each potential target in a random order without replacement (every potential target was presented once, in random order, before it was presented again). One trial was conducted in each presentation format
(i.e., sample-first, comparison-first, sample-first with re-presentation, and simultaneous presentation). If a participant engaged in more than one unprompted correct response for a potential target during pretest trials, that target was discarded. The experimenter assigned three targets to each of the four conditions (see Table 1) using a logical analysis (Wolery, Gast, & Ledford, 2014). The logical analysis considered the following dimensions: number of syllables in each target name, redundancy of phonemes across target names, and physical similarity (e.g., orientation, color, size, shape) across comparison stimuli.
General Procedure
Each target was presented three times during a session (i.e., 3 targets × 3 trials = 9 trials per session). At least one session per experimental condition was conducted per day, 1 to 5 days per week, with a minimum of 5 min between each session. We conducted sessions for each condition in random order without replacement. For Max and Adam, training continued until the participant demonstrated 100% unprompted correct responding across two consecutive sessions. For Zeek, training continued until the participant demonstrated 89% unprompted correct responding across two consecutive sessions. Mastery criteria were selected to match the criteria arranged in each participant's educational setting. Training was conducted in the other conditions for a minimum of three additional sessions and for a total training time at least 20% greater than that of the condition mastered first. Once these criteria were met, training was discontinued in a condition as long as there was no apparent increasing trend in unprompted correct responding.
Table 1 Target Sets
A constant prompt delay procedure was used in all conditions. A 0-s prompt delay was implemented during the initial training sessions across conditions. During 0-s prompt delay trials, the experimenter provided an immediate model prompt (i.e., the experimenter touched the correct comparison stimulus with her finger) following the presentation of antecedent stimuli. During 0-s prompt delay trials, prompted correct responses resulted in the delivery of praise and an edible. If the participant engaged in a prompted incorrect response, the experimenter removed materials and presented the next trial. We continued to present 0-s prompt delay trials until the participant demonstrated 100% correct prompted responding for two consecutive sessions. Then, the experimenter increased the prompt delay to 5 s. During these trials, if the participant engaged in an unprompted correct response, the experimenter delivered an edible and praise. Following unprompted incorrect responses, the experimenter re-presented the sample stimulus and modeled the correct response and allowed the participant 5 s to respond. If the participant engaged in a prompted incorrect response following the model, the experimenter presented the next trial. If the participant engaged in a prompted correct response, the experimenter provided an edible and praise and then presented the next trial. We delivered only praise following prompted correct responses once the participant demonstrated unprompted correct responding during at least 50% of trials. Prior to the beginning of each trial, participants engaged in a trial-initiation response by touching the condition-correlated colored square placed in front of the stimulus binder (we conducted trial-initiation response training prior to the evaluation; contact second author for details). Once participants engaged in the trial-initiation response, the experimenter provided the antecedent stimuli based on condition-specific procedures.
Baseline. During baseline, the experimenter presented antecedent stimuli (the sample and comparison stimuli) according to condition-specific procedures (see below) and allowed the participant 5 s to respond. Following unprompted correct and incorrect responses, the experimenter provided a brief verbal statement (e.g., “okay”), then presented the next trial. The experimenter delivered an edible and praise for appropriate collateral behavior (e.g., sitting appropriately at the table) approximately every other trial during the intertrial interval in an attempt to maintain participant responding.
Sample first. For each trial, the participant engaged in the trial-initiation response and the experimenter presented the sample stimulus (e.g., “pigeon”). Immediately after the offset of the sample stimulus, the experimenter removed the blank piece of colored paper to reveal the trial sheet containing the three comparison stimuli.
Sample first with re-presentation. All procedures in place for the sample-first condition were the same. In addition, the experimenter presented the sample stimulus for a second time (e.g., “pigeon”) immediately following the removal of the piece of paper covering the comparison stimuli.
Comparison first. The participant engaged in the trial-initiation response, the experimenter removed the blank piece of colored paper to reveal the trial sheet, waited 3 s, and then presented the auditory sample stimulus (e.g., “pigeon”).
Simultaneous presentation. The participant engaged in the trial-initiation response, then the experimenter removed the blank piece of colored paper to reveal the trial sheet and simultaneously presented the auditory sample stimulus (e.g., “pigeon”).
Procedural Integrity and Procedural Integrity IOA
An independent observer scored the integrity with which the experimenter implemented the condition-specific teaching components (i.e., presented stimulus binder, prompted trial-initiation response, presented antecedent stimuli in correct sequence, implemented correct prompt delay, provided correct consequence for correct and incorrect responses, recorded data, and provided appropriate intertrial intervals) for a minimum of 33% of sessions across conditions in vivo or from video. We calculated the percentage of integrity by dividing the total number of skills performed correctly by the total number of opportunities to perform a skill and multiplying by 100. Mean treatment integrity scores were 100% for all participants. A secondary observer also collected procedural integrity data for a minimum of 33% of treatment integrity sessions for IOA purposes. IOA data were calculated on a trial-by-trial basis, where the total number of agreements was divided by the total number of agreements plus disagreements, then converted to a percentage. Mean treatment integrity IOA scores were 100% for Zeek, Max, and Adam.
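The integrity percentage described above (components implemented correctly divided by opportunities to implement a component) can be sketched as follows; the component labels and counts are hypothetical examples, not study data.

```python
# Illustrative sketch of the procedural-integrity calculation described above.
# Component labels and counts are hypothetical examples, not study data.

def integrity_percentage(correct_by_component, opportunities_by_component):
    """Total skills performed correctly / total opportunities * 100."""
    total_correct = sum(correct_by_component.values())
    total_opportunities = sum(opportunities_by_component.values())
    return total_correct / total_opportunities * 100

# Hypothetical 9-trial session scored across seven teaching components,
# with one error on the consequence component:
correct = {"binder_presented": 9, "trial_initiation": 9, "antecedent_order": 9,
           "prompt_delay": 9, "consequence": 8, "data_recorded": 9, "iti": 9}
opportunities = {component: 9 for component in correct}

print(round(integrity_percentage(correct, opportunities), 1))  # 98.4 (62/63)
```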
Results
Figure 1 represents the percentage of unprompted correct responses during baseline and all teaching conditions for Zeek, Max, and Adam. During baseline, all participants engaged in low to moderate levels of unprompted correct responses across conditions. Zeek demonstrated mastery in the comparison-first condition in 23 training sessions (207 training trials; 44 min 40 s training time). He did not achieve mastery in the other three conditions and training ceased according to the termination criteria. Zeek responded to the comparison array in the comparison-first condition prior to the delivery of the sample stimulus (represented by the gray bars in Figure 1) only during baseline (67% of baseline sessions; mean of 16.5% [range, 11% to 22%] of baseline trials in those sessions). No such responding was observed after training was initiated.
Max demonstrated mastery in the sample-first condition in 21 training sessions (189 training trials; 40 min 47 s training time) and in the sample-first with re-presentation condition in 22 training sessions (198 training trials; 42 min 2 s training time). He did not demonstrate mastery of target responses in the comparison-first or the simultaneous conditions. Max engaged in
early responses to the comparison array in the comparison-first condition during baseline (29% of baseline sessions; mean of 16.5% [range, 11% to 22%] of baseline trials in those sessions) and training (67% of training sessions; mean of 23% [range, 11% to 66%] of training trials in those sessions).
Figure 1. The percentage of unprompted correct responses across the sample-first (SF), sample-first with re-presentation (SFRP), comparison-first (CF), and simultaneous (Sim) treatment conditions. Gray bars represent percentage of trials with early responses to the comparison stimuli in the comparison-first condition.

Figure 2. The total number of training trials for Zeek, Max, and Adam across the sample-first (SF), sample-first with re-presentation (SFRP), comparison-first (CF), and simultaneous (Sim) treatment conditions. Asterisks indicate that mastery was achieved in the number of trials depicted (in the absence of an asterisk, mastery was not achieved).

Figure 3. Total training time (minutes) for Zeek, Max, and Adam across the sample-first (SF), sample-first with re-presentation (SFRP), comparison-first (CF), and simultaneous (Sim) treatment conditions. Asterisks indicate that mastery was achieved in the training time depicted (in the absence of an asterisk, mastery was not achieved).

Unlike Zeek and Max, Adam mastered target responses in all conditions. More specifically, he demonstrated mastery in 6 training sessions (54 training trials; 7 min 29 s training time) in the simultaneous condition, 8 training sessions (72 training trials; 12 min 2 s training time) in the sample-first with re-presentation condition, 11 training sessions (99 training trials; 16 min 37 s training time) in the sample-first condition, and 12 training sessions (108 training trials; 17 min 37 s training time) in the comparison-first condition. Similar to Zeek, Adam responded to the comparison array prior to the presentation of the sample stimulus in the comparison-first condition during baseline (33% of baseline sessions; mean of 11% of baseline trials in those sessions), but we did not observe such responding after training was initiated. Across participants, we initiated training with trials conducted at a 0-s prompt delay, so unprompted correct responding was at zero until we increased the prompt delay to 5 s.
Figures 2 and 3 summarize the total training trials and total training time for all conditions for Zeek, Max, and Adam. We included a summary for conditions in which participants did not demonstrate mastery to show that these measures met the termination criteria when compared to the condition in which participants demonstrated mastery the fastest. For all participants, the lowest total training time was associated with the condition in which participants demonstrated mastery level responding in the fewest number of training sessions.
Discussion
When arranging MTS trials, consideration should be given to the order in which antecedent stimuli are presented. In the current study, relative efficiency, as measured by total training trials and total duration, was learner specific across three children with autism spectrum disorder. The simultaneous procedure was associated with the fastest acquisition for Adam, the comparison-first arrangement was associated with the fastest acquisition for Zeek, and the sample-first and sample-first-with-re-presentation arrangements were associated with similarly fast acquisition for Max. This finding is consistent with the growing body of skill acquisition research that has demonstrated learner-specific outcomes (e.g., Boudreau, Vladescu, Kodak, Argott, & Kisamore, 2015; Carroll, Joachim, St. Peter, & Robinson, 2015; Rodgers & Iwata, 1991). The results suggest that students may benefit when teachers identify and implement a student-specific stimulus presentation format when teaching auditory–visual conditional discriminations, rather than using one procedure across all students as has been previously suggested (Green, 2001; Leaf & McEachin, 1999; Maurice et al., 1996; Sundberg & Partington, 1998).
One potential avenue to identify a consumer-specific stimulus presentation format is through an initial assessment. In a similar vein, recent studies have undertaken efforts to evaluate the usefulness of assessments to identify consumer-specific error-correction procedures (McGhan & Lerman, 2013), prompt type and prompt-fading procedures (Seaver & Bourret, 2014), and reinforcement arrangements (Johnson, Vladescu, Kodak, & Sidener, 2017). However, the current evaluation falls short in providing prescriptive information as to which presentation order should be used in subsequent training of auditory–visual conditional discriminations, because we did not include intrasubject replication. Future studies should include intrasubject replications to establish the reliability of outcomes and evaluate generality to other types of conditional discriminations (e.g., visual–visual conditional discriminations).
For all participants, the data from all four conditions were undifferentiated until mastery was achieved in the first condition or conditions. This indicates that differences in trials to mastery could be, in part, a result of uncontrolled factors. We attempted to address differences among stimuli across conditions and to assign stimuli to the four stimulus sets for each participant so as to ensure equivalence of those sets. However, differences in characteristics among stimuli associated with each condition could have contributed to inconsistent findings across participants. Further, although targets for each participant were selected based on educational goals and previous learning history, we cannot rule out that targets for Zeek (states) and those for Max and Adam (animals) differed in difficulty, especially considering Zeek met the mastery criterion in only one condition. Participant instructional history indicated that participants had, at most, three auditory–visual conditional discrimination targets in training at a time. Therefore, it is possible that including 12 concurrent instructional targets in one domain affected acquisition for our participants.
Although we took steps to minimize the possibility of interaction effects—by conducting sessions in a random order without replacement, assigning condition-correlated stimuli (colors), and requiring a minimum of 5 min to elapse between consecutive sessions—we may have observed multiple-treatment interference. That is, a participant’s experience in a treatment session in one condition may have influenced his responding in the subsequent treatment session in another condition (Higgins Hains & Baer, 1989). Future researchers could further minimize the possibility of multiple-treatment interference by increasing the minimum time between sessions of different conditions (e.g., alternate sessions by day). Future researchers could consider including a choice or preference measure for instructional conditions (e.g., Heal et al., 2009), as research suggests that giving participants a choice may be valuable to participants (e.g., Brigham & Sherman, 1973; Tiger, Hanley, & Hernandez, 2006) and may be associated with a decrease in problem behavior (Dyer, Dunlap, & Winterling, 1990).
The findings of the current evaluation contrast with those of Petursdottir and Aguilar (2016), who found that the sample-first procedure was consistently the most efficient across participants. In comparing the present evaluation to the one conducted by Petursdottir and Aguilar, several differences should be noted. Petursdottir and Aguilar presented stimuli via a computer, arranged differential reinforcement from the onset of instruction, and did not include prompting and prompt-fading strategies, whereas we delivered stimuli via tabletop procedures, arranged nondifferential reinforcement during the initial stages of acquisition, and included prompts and a prompt-fading strategy. Although the exact impact of these differences is unknown, it is possible that certain procedural features (e.g., prompt and prompt-fading strategies) may reduce the impact of antecedent stimuli presentation order. Further research is needed to determine how these procedural features influence the development of conditional stimulus control when using different presentation formats.
We included two conditions not evaluated by Petursdottir and Aguilar (2016), and our participants were diagnosed with autism spectrum disorder. Petursdottir and Aguilar did not include the simultaneous or sample-first-with-re-presentation conditions, so it is unclear whether these conditions would have been superior for the participants in their study, as they were for Adam and Max in the current evaluation. Whereas Petursdottir and Aguilar’s participants were children of typical development, ours were from a clinically relevant population, allowing us to examine the relative efficiency of stimulus presentation formats for consumers for whom match-to-sample is commonly used to establish auditory–visual conditional discriminations. One variable that may explain the difference in findings across the current participants is their learning histories. All participants had past and current instructional goals related to establishing auditory–visual conditional discriminations and therefore had likely been exposed to one or more stimulus presentation formats. This history may be relevant, as previous research (Coon & Miguel, 2012; Freeman & Lattal, 1992) has demonstrated the influence of proximal history on subsequent responding. Future studies should establish the generality of findings through intrasubject replications and evaluate these conditions using participants without established ABA instructional histories.
Interestingly, Max, who reached mastery first in the sample-first and sample-first-with-re-presentation conditions, was the only participant to respond to the comparison array prior to the delivery of the sample stimulus during comparison-first training trials, and the only participant to fail to demonstrate mastery in the comparison-first condition. Moreover, Max’s propensity to respond to the comparison array prior to the delivery of the sample stimulus was absent at the beginning of training (although present during baseline) and emerged only after exposure to this condition. These data seem to contrast with previous research (McIlvane et al., 1990) in that we observed an increase, rather than a decrease, in comparison responding prior to the delivery of the sample. Similar to McIlvane et al. (1990), we presented the sample when 3 s elapsed without the participant responding to the comparison array. However, this delay may not have been sufficient to promote appropriate comparison control, and future studies could evaluate the effect of longer delays or alternative procedures (e.g., re-presenting the trial; Petursdottir & Aguilar, 2016).
Future researchers interested in stimulus presentation order should consider a number of factors. First, the current study did not analyze stimulus control topographies across conditions. Therefore, we cannot draw conclusions as to whether any of the presentation formats may promote or reduce irrelevant sources of stimulus control (e.g., position or stimulus biases). Future researchers could collect data on participant responding (e.g., specific comparison stimulus and position selected each trial) to allow for an analysis of undesirable performance patterns (see Fields, Garruto, & Watanabe, 2010).
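As a minimal illustration of the kind of analysis such trial-by-trial data would permit (the records and category names below are hypothetical, not data from the study), selections could be tallied by comparison position to screen for a position bias:

```python
from collections import Counter

# Hypothetical trial records: (sample presented, comparison selected, position selected)
trials = [
    ("dog", "dog", "left"), ("cat", "dog", "left"),
    ("bird", "bird", "left"), ("cat", "cat", "middle"),
    ("dog", "dog", "left"), ("bird", "cat", "left"),
]

# Count how often each position was selected across all trials
position_counts = Counter(position for _, _, position in trials)
total = len(trials)

# With three comparison positions, chance-level selection of any one
# position is roughly 33%; a markedly higher proportion for a single
# position suggests a position bias worth analyzing further.
for position, count in sorted(position_counts.items()):
    print(f"{position}: {count / total:.0%}")
```

The same records support other summaries (e.g., tallying selections of a specific comparison stimulus regardless of sample) to detect stimulus bias.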
Second, we did not collect data related to generalization (e.g., exemplars containing variation in noncritical features). Given that the stimulus presentation formats may differentially influence the development of stimulus control, these conditions may also differentially influence the degree to which participants demonstrate generalized responding. Future researchers could conduct tests to determine whether correct responding occurs when stimulus exemplars not associated with training are presented during probe trials.
Third, future researchers could examine whether manipulating additional variables related to comparison and sample stimuli may influence the relative efficiency of stimulus presentation arrangements. For example, it is possible that a specific stimulus presentation order more efficiently establishes stimulus control when an increasing number of stimuli are arranged as comparisons. Increasing the number of comparisons may increase the difficulty of the simultaneous simple discrimination required between comparison stimuli, and in turn, the order in which sample and comparisons are presented may be more relevant.
Three additional limitations are worth mentioning. First, similar to Petursdottir and Aguilar (2016), we did not require an observing response as is typically used in basic research. That is, the presentation of the sample stimulus (in the comparison-first condition) or comparison stimuli (in the sample-first and sample-first-with-re-presentation conditions) was not contingent on a participant response. Rather, we required a trial-initiation response (touching a colored square of paper). The trial-initiation response could be considered an observing response in that it increases the likelihood that the participant will make sensory contact with the first stimulus presented. It should be noted, however, that it is fairly common practice not to require an observing response or differential observing response in applied studies that target auditory–visual conditional discriminations for participants with autism spectrum disorder (e.g., Carey & Bourret, 2014; Carp et al., 2012; Delfs et al., 2014; Dittlinger & Lerman, 2011; Fisher et al., 2014; Haq et al., 2015; McGhan & Lerman, 2013; Paden & Kodak, 2015). Moreover, recent applied research did not demonstrate consistently superior auditory–visual conditional discrimination acquisition in a condition that required a differential observing response relative to a condition that did not (Vedora, Barry, & Ward-Horner, 2017). Future researchers could evaluate what role the trial-initiation response may play in establishing conditional stimulus control. Additionally, future studies are needed to clarify the conditions under which an observing response or differential observing response to the sample is necessary during conditional discrimination training.
Second, we did not continue training to mastery in all conditions. That is, once mastery was achieved in one condition, training continued in the other conditions for at least three sessions and 20% additional training time, as long as no apparent increasing trend in performance was observed. Training termination was necessary for two participants. Although we did not know how much additional training would have been required to achieve mastery in all conditions for all participants, we judged 20% additional training time sufficient to draw conclusions regarding relative efficiency. We decided to discontinue training to prevent the possible establishment of the presentation of instructional stimuli as a conditioned reflexive motivating operation (Carbone, Morgenstern, Zecchin-Tirri, & Kolberg, 2007), to ensure that we completed the evaluation for all participants prior to the end of the school year, and to maximize the time participants spent receiving effective intervention.
In summary, stimulus presentation order may be an important factor for auditory–visual conditional discrimination acquisition for children with autism spectrum disorder. Future researchers may investigate whether initial assessments, previous learning history, and specific barriers to learning have implications for which stimulus presentation method is most efficient or effective. Further, an assessment of generalization and an analysis of specific response patterns (e.g., stimulus control topographies) could help distinguish what sources of control each condition has on responding.
References
Black, M. P., Skinner, C. H., Forbes, B. E., McCurdy, M., Coleman, M. B., Davis, K., & Gettelfinger, M. (2016). Cumulative instructional time and relative effectiveness conclusions: Extending research on response intervals, learning, and measurement scale. Behavior Analysis in Practice, 9, 58-62. https://doi.org/10.1007/s40617-016-0114-3
Boudreau, B. A., Vladescu, J. C., Kodak, T. M., Argott, P., & Kisamore, A. N. (2015). A comparison of differential reinforcement procedures on the acquisition of tacts in children with autism. Journal of Applied Behavior Analysis, 48, 918-923. https://doi.org/10.1002/jaba.232
Brigham, T. A., & Sherman, J. A. (1973). Effects of choice and immediacy of reinforcement on single response and switching behavior of children. Journal of the Experimental Analysis of Behavior, 19, 425-435. https://doi.org/10.1901/jeab.1973.19-425
Carbone, V. J., Morgenstern, B., Zecchin-Tirri, G., & Kolberg, L. (2007). The role of the reflexive conditioned motivating operation (CMO-R) during discrete trial instruction of children with autism. Journal of Early and Intensive Behavior Intervention, 4, 658. https://doi.org/10.1037/h0100399
Carey, M. K., & Bourret, J. C. (2014). Effects of data sampling on graphical depictions of learning. Journal of Applied Behavior Analysis, 47, 749-764. https://doi.org/10.1002/jaba.153
Carp, C. L., Peterson, S. P., Arkel, A. J., Petursdottir, A. I., & Ingvarsson, E. T. (2012). A further evaluation of picture prompts during auditory–visual conditional discrimination training. Journal of Applied Behavior Analysis, 45, 737-751. https://doi.org/10.1901/jaba.2012.45-737
Carr, J. E., Nicolson, A. C., & Higbee, T. S. (2000). Evaluation of a brief multiple-stimulus preference assessment in a naturalistic context. Journal of Applied Behavior Analysis, 33, 353-357. https://doi.org/10.1901/jaba.2000.33-353
Carroll, R. A., Joachim, B. T., St. Peter, C. C., & Robinson, N. (2015). A comparison of error correction procedures on skill acquisition during discrete-trial instruction. Journal of Applied Behavior Analysis, 48, 257-273. https://doi.org/10.1002/jaba.205
Cividini-Motta, C., & Ahearn, W. H. (2013). Effects of two variations of differential reinforcement on prompt dependency. Journal of Applied Behavior Analysis, 46, 640-650. https://doi.org/10.1002/jaba.67
Coon, J. T., & Miguel, C. F. (2012). The role of increased exposure to transfer-of-stimulus-control procedures on the acquisition of intraverbal behavior. Journal of Applied Behavior Analysis, 45, 657-666. https://doi.org/10.1901/jaba.2012.45-657
Cumming, W. W., & Berryman, R. (1961). Some data on matching behavior in the pigeon. Journal of the Experimental Analysis of Behavior, 4, 281-284. https://doi.org/10.1901/jeab.1961.4-281
Delfs, C. H., Conine, D. E., Frampton, S. E., Shillingsburg, M. A., & Robinson, H. C. (2014). Evaluation of the efficiency of listener and tact instruction for children with autism. Journal of Applied Behavior Analysis, 47, 793-809. https://doi.org/10.1002/jaba.166
Dittlinger, L. H., & Lerman, D. C. (2011). Further analysis of picture interference when teaching word recognition to children with autism. Journal of Applied Behavior Analysis, 44, 341-349. https://doi.org/10.1901/jaba.2011.44-341
Doughty, A. H., & Saunders, K. J. (2009). Decreasing errors in reading-related matching to sample using a delayed-sample procedure. Journal of Applied Behavior Analysis, 42, 717-721. https://doi.org/10.1901/jaba.2009.42-717
Dunn, M., & Dunn, L. M. (2007). Peabody picture vocabulary test (4th ed.). Circle Pines, MN: AGS.
Dyer, K., Dunlap, G., & Winterling, V. (1990). Effects of choice making on the serious problem behaviors of students with severe handicaps. Journal of Applied Behavior Analysis, 23, 515-524. https://doi.org/10.1901/jaba.1990.23-515
Fields, L., Garruto, M., & Watanabe, M. (2010). Varieties of stimulus control in matching-to-sample: A kernel analysis. The Psychological Record, 60, 3-26. https://doi.org/10.1007/BF03395691
Fisher, W. W., Kodak, T., & Moore, J. W. (2007). Embedding an identity-matching task within a prompting hierarchy to facilitate acquisition of conditional discriminations in children with autism. Journal of Applied Behavior Analysis, 40, 489-499. https://doi.org/10.1901/jaba.2007.40-489
Fisher, W. W., Pawich, T. L., Dickes, N., Paden, A. R., & Toussaint, K. (2014). Increasing the saliency of behavior–consequence relations for children with autism who exhibit persistent errors. Journal of Applied Behavior Analysis, 47, 738-748. https://doi.org/10.1002/jaba.172
Fisher, W. W., Piazza, C. C., Bowman, L. G., Hagopian, L. P., Owens, J. C., & Slevin, I. (1992). A comparison of two approaches for identifying reinforcers for persons with severe and profound disabilities. Journal of Applied Behavior Analysis, 25, 491-498. https://doi.org/10.1901/jaba.1992.25-491
Freeman, T. J., & Lattal, K. A. (1992). Stimulus control of behavioral history. Journal of the Experimental Analysis of Behavior, 57, 5-17. https://doi.org/10.1901/jeab.1992.57-5
Gilliam, J. E. (2013). Gilliam autism rating scale (3rd ed.). Austin, TX: Pro-Ed.
Grannan, L., & Rehfeldt, R. A. (2012). Emergent intraverbal responses via tact and match-to-sample instruction. Journal of Applied Behavior Analysis, 45, 601-605. https://doi.org/10.1901/jaba.2012.45-601
Green, G. (2001). Behavior analytic instruction for learners with autism: Advances in stimulus control technology. Focus on Autism and Other Developmental Disabilities, 16, 72-85. https://doi.org/10.1177/108835760101600203
Groskreutz, N. C., Karsina, A., Miguel, C. F., & Groskreutz, M. P. (2010). Using complex auditory–visual samples to produce emergent relations in children with autism. Journal of Applied Behavior Analysis, 43, 131-136. https://doi.org/10.1901/jaba.2010.43-131
Grow, L. L., Carr, J. E., Kodak, T., Jostad, C. M., & Kisamore, A. N. (2011). A comparison of methods for teaching receptive labeling to children with autism spectrum disorder. Journal of Applied Behavior Analysis, 44, 475-498.
Grow, L., & LeBlanc, L. (2013). Teaching receptive language skills: Recommendations for instructors. Behavior Analysis in Practice, 6, 56-75. https://doi.org/10.1007/BF03391791
Grow, L. L., Kodak, T., & Carr, J. E. (2014). A comparison of methods for teaching receptive labeling to children with autism spectrum disorders: A systematic replication. Journal of Applied Behavior Analysis, 47, 600-605. https://doi.org/10.1002/jaba.141
Hanney, N. M., & Tiger, J. H. (2012). Teaching coin discrimination to children with visual impairments. Journal of Applied Behavior Analysis, 45, 167-172. https://doi.org/10.1901/jaba.2012.45-167
Haq, S. S., Kodak, T., Kurtz-Nelson, E., Porritt, M., Rush, K., & Cariveau, T. (2015). Comparing the effects of massed and distributed practice on skill acquisition for children with autism. Journal of Applied Behavior Analysis, 48, 454-459. https://doi.org/10.1002/jaba.213
Hausman, N. L., Ingvarsson, E. T., & Kahng, S. (2014). A comparison of reinforcement schedules to increase independent responding in individuals with intellectual disabilities. Journal of Applied Behavior Analysis, 47, 155-159. https://doi.org/10.1002/jaba.85
Heal, N. A., Hanley, G. P., & Layer, S. A. (2009). An evaluation of the relative efficacy of and children’s preferences for teaching strategies that differ in amount of teacher directedness. Journal of Applied Behavior Analysis, 42, 123-143. https://doi.org/10.1901/jaba.2009.42-123
Higgins Hains, A., & Baer, D. M. (1989). Interaction effects in multielement designs: Inevitable, desirable, and ignorable. Journal of Applied Behavior Analysis, 22, 57-69.
Johnson, K. A., Vladescu, J. C., Kodak, T., & Sidener, T. M. (2017). An assessment of differential reinforcement procedures for learners with autism spectrum disorder. Journal of Applied Behavior Analysis, 50, 1-14. https://doi.org/10.1002/jaba.372
Kodak, T., Clements, A., Paden, A. R., LeBlanc, B., Mintz, J. & Toussaint, K. A. (2015). Examination of the relation between an assessment of skills and performance on auditory–visual conditional discriminations for children with autism spectrum disorder. Journal of Applied Behavior Analysis, 48, 52-70. https://doi.org/10.1002/jaba.160
Leaf, R., & McEachin, J. (1999). A work in progress: Behavior management strategies and a curriculum for intensive behavioral treatment of autism. New York: DRL Books.
Maurice, C., Green, G., & Luce, S. C. (1996). Behavioral intervention for young children with autism. Austin, TX: PRO-ED.
McGhan, A. C., & Lerman, D. C. (2013). An assessment of error-correction procedures for learners with autism. Journal of Applied Behavior Analysis, 46, 626-639. https://doi.org/10.1002/jaba.65
McIlvane, W. J., Kledaras, J. B., Stoddard, L. T., & Dube, W. V. (1990). Delayed sample presentation in MTS: Some possible advantages for teaching individuals with developmental limitations. Experimental Analysis of Human Behavior Bulletin, 8, 31-33.
Paden, A. R., & Kodak, T. (2015). The effects of reinforcement magnitude on skill acquisition for children with autism. Journal of Applied Behavior Analysis, 48, 924-929. https://doi.org/10.1002/jaba.239
Petursdottir, A. I., & Aguilar, G. (2016). Order of stimulus presentation influences children’s acquisition in receptive identification tasks. Journal of Applied Behavior Analysis, 49, 58-68. https://doi.org/10.1002/jaba.264
Pilgrim, C. (2015). Stimulus control and generalization. In F. D. DiGennaro & D. D. Reed (Eds.), Autism service delivery (pp. 25-74). New York, NY: Springer.
Rodgers, T. A., & Iwata, B. A. (1991). An analysis of error-correction procedures during discrimination training. Journal of Applied Behavior Analysis, 24, 775-781. https://doi.org/10.1901/jaba.1991.24-775
Saunders, K. J., & Spradlin, J. E. (1989). Conditional discrimination in mentally retarded adults: The effect of training the component simple discriminations. Journal of the Experimental Analysis of Behavior, 52, 1-12. https://doi.org/10.1901/jeab.1989.52-1
Saunders, K. J., & Williams, D. C. (1998). Stimulus control procedures. In K. A. Lattal & M. Perone (Eds.), Handbook of research methods in human operant behavior (pp. 213). New York, NY: Plenum Press.
Seaver, J. L., & Bourret, J. C. (2014). An evaluation of response prompts for teaching behavior chains. Journal of Applied Behavior Analysis, 47, 777-792. https://doi.org/10.1002/jaba.159
Sidman, M., & Tailby, W. (1982). Conditional discrimination vs. matching to sample: An expansion of the testing paradigm. Journal of the Experimental Analysis of Behavior, 37, 5-22. https://doi.org/10.1901/jeab.1982.37-5
Sindelar, P. T., Rosenberg, M. S., & Wilson, R. J. (1985). An adapted alternating treatments design for instructional research. Education and Treatment of Children, 8, 67-76.
Skinner, B. F. (1950). Are theories of learning necessary? Psychological Review, 57, 193-216.
Slocum, S. K., Miller, S. J., & Tiger, J. H. (2012). Using a blocked-trials procedure to teach identity matching to a child with autism. Journal of Applied Behavior Analysis, 45, 619-624. https://doi.org/10.1901/jaba.2012.45-619
Sprinkle, E. C., & Miguel, C. F. (2012). The effects of listener and speaker training on emergent relations in children with autism. The Analysis of Verbal Behavior, 28, 111-117.
Sundberg, M. L. (2008). Verbal behavior milestones assessment and placement program: The VB-MAPP. Concord, CA: AVB Press.
Sundberg, M. L., & Partington, J. W. (1998). Teaching language to children with autism or other developmental disabilities. Danville, CA: Behavior Analysts.
Sy, J. R., & Vollmer, T. R. (2012). Discrimination acquisition in children with developmental disabilities under immediate and delayed reinforcement. Journal of Applied Behavior Analysis, 45, 667-684. https://doi.org/10.1901/jaba.2012.45-667
Tiger, J. H., Hanley, G. P., & Hernandez, E. (2006). An evaluation of the value of choice with preschool children. Journal of Applied Behavior Analysis, 39, 1-16. https://doi.org/10.1901/jaba.2006.158-04
Vedora, J., Barry, T., & Ward-Horner, J. C. (2017). An evaluation of differential observing responses during receptive label training. Behavior Analysis in Practice, 22, 1-6. https://doi.org/10.1007/s40617-017-0188-6
Vladescu, J. C., & Kodak, T. (2010). A review of recent studies on differential reinforcement during skill acquisition in early intervention. Journal of Applied Behavior Analysis, 43, 351-355. https://doi.org/10.1901/jaba.2010.43-351
Walker, B. D., & Rehfeldt, R. A. (2012). An evaluation of the stimulus equivalence paradigm to teach single-subject design to distance education students via Blackboard. Journal of Applied Behavior Analysis, 45, 329-344. https://doi.org/10.1901/jaba.2012.45-329
Williams, K. T. (2007). Expressive vocabulary test (2nd ed.). Minneapolis, MN: Pearson Assessments.
Received July 27, 2017
Final acceptance May 17, 2018
Action Editor, Anna Petursdottir


