A Historical Perspective on Single-Case Methods in Basic and Applied Behavior Analysis
Friday, September 26, 2025
8:30 AM–9:20 AM
Embassy Suites Minneapolis; Plymouth Ballroom AB
Area: SCI; Domain: Theory
CE Instructor: Jennifer Ledford, Ph.D.
Presenting Authors: JENNIFER LEDFORD (Vanderbilt University), MICHAEL PERONE (West Virginia University)
Abstract: This presentation will review the origins of single-case research methods in basic laboratory research, how the designs have been used in basic and applied behavior analysis, and the strengths and weaknesses of past and current practices. In considering origins, we will discuss the essential characteristics of single-case research, the steady-state strategy, the importance of replication, the place of statistical inference, and the extent to which the experimenter’s judgment figures into the design and conduct of research. Historically, single-case research in behavior analysis has been associated with inductive and idiographic paradigms. As single-case methods have been extended to applied research in and outside of behavior analysis, they have become more aligned with deductive and nomothetic paradigms. Standards and quality indicators for single-case research have also proliferated, as have suggestions for changes (e.g., effect sizes, randomization) intended to align the outcomes of single-case research with those of other research paradigms. We will consider the ways in which the use of different paradigms has led to disparate approaches to single-case research. Understanding these differences is necessary for understanding the empirical contributions of each approach and for successfully prioritizing experimental standards that should be met for individual studies and groups of research.
Instruction Level: Advanced
Target Audience: Single-case researchers
Learning Objectives: 1. Describe the historical trajectory and conventions of single-case design methodology. 2. Explain the consequences of the evidence-based practice movement on the practice of single-case design. 3. Describe how using different paradigms could lead to different decisions during a study.

JENNIFER LEDFORD (Vanderbilt University)
Dr. Ledford is an Associate Professor at Vanderbilt University in Early Childhood Special Education and Applied Behavior Analysis. She studies the use and synthesis of single-case design data, instruction in early childhood contexts, and clinic-based naturalistic intervention for young autistic children. She is the co-editor of a single-case textbook and an associate editor for two journals, and she has published more than 90 peer-reviewed articles related to single-case design, early childhood special education, and applied behavior analysis. She has received federal grant funding from the Institute of Education Sciences and the Office of Special Education Programs. She received Vanderbilt’s Chancellor’s Faculty Fellows award in 2022 and the Mentor Award from the Division for Early Childhood of the Council for Exceptional Children in 2019.
MICHAEL PERONE (West Virginia University)
Dr. Perone is a Professor and the Coordinator of the Behavior Analysis Program at West Virginia University. He has been actively involved in the experimental analysis of operant behavior for more than 50 years. He is especially interested in the translation of concepts and procedures from the animal laboratory to the analysis of human behavior, including in the areas of conditioned reinforcement, learning under time pressure, verbal reports and self-awareness, quantitative models of choice, and the disruptive effects of incentive shifts.


The Tail Shouldn’t Wag the Dog: Clarifying the Roles That Quality Indicators Should Play in Research
Friday, September 26, 2025
10:35 AM–11:25 AM
Embassy Suites Minneapolis; Plymouth Ballroom AB
Area: SCI; Domain: Theory
CE Instructor: Joseph Michael Lambert, Ph.D.
Presenting Authors: JOSEPH MICHAEL LAMBERT (Vanderbilt University), TARA FAHMIE (University of Nebraska Medical Center)
Abstract: Quality Indicators of Single-Case Design (QI-SCDs) formalize expert consensus about research conventions intended to ensure internal, external, and social validity in single-case designs. They were developed to prospectively guide the development of research studies, as well as to standardize retrospective appraisals of the quality of evidence in support of a given practice or procedure. Although QI-SCDs have proven useful to both endeavors, the unqualified prioritization of QI-SCDs can sometimes lead to invalid methodology. It can also have a suppressive effect on the complexity of the research questions we ask as a field and can lead to the oversimplification of extant evidence. In this presentation, we will provide a brief overview of QI-SCDs, match QI-SCDs to specific empirical objectives (i.e., internal, external, and social validity), and give attendees practice identifying and justifying the prioritization of some QI-SCDs over others by logically tethering them to the objective(s) of their research.
Instruction Level: Advanced
Target Audience: Single-case researchers
Learning Objectives: 1. Attendees will articulate the intended purposes of QI-SCDs. 2. Attendees will identify the threats posed by misapplications of QI-SCDs. 3. Attendees will learn how to appropriately match research questions with relevant QI-SCDs when planning and evaluating SCD research.

JOSEPH MICHAEL LAMBERT (Vanderbilt University)
Dr. Lambert is an Assistant Professor in the Department of Special Education at Vanderbilt University. His areas of expertise include function-based intensive intervention, methods of instruction, generalized learning outcomes, and practitioner and parent training. He has years of experience serving individuals with severe disabilities and supervises the applied field experiences of master’s- and doctoral-level professionals training to become BCBAs. Lambert has worked in public and private schools and has trained staff in both settings, as well as in group homes and day centers for adults diagnosed with developmental disabilities. Currently, Dr. Lambert serves as PI for multiple federally funded projects and teaches courses in behavior management, methods of instruction, and theory in behavior analysis. He is also an Associate Editor for the Journal of Positive Behavior Interventions and sits on the editorial boards of the Journal of Applied Behavior Analysis and Behavior Analysis: Research and Practice.
TARA FAHMIE (University of Nebraska Medical Center)
Dr. Tara Fahmie is a Professor and Director of the Severe Behavior Program at the University of Nebraska Medical Center's Munroe Meyer Institute. She previously held an appointment as an associate professor at California State University, Northridge (CSUN). She earned her master’s degree from the University of Kansas and her PhD from the University of Florida. Dr. Fahmie is a BCBA-D and has over 15 years of experience implementing behavior analysis with various populations in clinics, schools, and residential settings. Her main area of expertise is the assessment and treatment of severe problem behavior; she has conducted research, authored chapters, and received grants for her global work in this area. Her initial interests in the functional analysis of problem behavior and the acquisition of social skills in young children led to her emerging passion for research on the prevention of problem behavior.


Questionable and Improved Research Practices
Friday, September 26, 2025
1:55 PM–2:45 PM
Embassy Suites Minneapolis; Plymouth Ballroom AB
Area: SCI; Domain: Theory
CE Instructor: Timothy A. Slocum, Ph.D.
Presenting Author: TIMOTHY A. SLOCUM (Utah State University)
Abstract: Scientific progress is driven, in part, by ongoing improvement in research methods. In the past decade, research methodologists have described substantial problems with replication of group comparison research in numerous disciplines and have identified a set of questionable research methods that appear to be important contributors to these problems. Much of the success of behavior analysis stems from the use of single-case experimental research (SCER). Heretofore, the concept of questionable research practices has not been systematically applied to SCER. This session will describe a systematic process to understand and identify potential questionable research practices and alternative improved research practices in SCER. This process included both conceptual development and bottom-up derivation of potential questionable and improved practices through sustained engagement with dozens of experienced SCER researchers as well as a broad survey of over a hundred researchers. Much of the session will be devoted to describing a set of recommended improvements in SCER procedures, data analysis, and reporting practices to further enhance its scientific validity and applied usefulness. These recommendations have immediate implications for conducting, reporting, reviewing, and applying SCER.
Instruction Level: Advanced
Target Audience: Single-case researchers
Learning Objectives: 1. Participants will describe the concepts of questionable and improved research practices as they apply to single-case experimental design. 2. Participants will describe the logic of contingent application of the concepts of questionable and improved research practices. 3. Participants will list 10 critical improved research practices and describe the contexts in which they are applicable and those in which they are not.

TIMOTHY A. SLOCUM (Utah State University)
Dr. Timothy A. Slocum earned his doctorate in Special Education at the University of Washington in 1991 and was a faculty member at Utah State University (USU) in the Department of Special Education and Rehabilitation from 1991 through 2014. He is currently a professor emeritus. He has been engaged with improving reading instruction and reading research for more than 30 years. He has also written on evidence-based practice and single-case research methodology, including multiple baseline designs and questionable/improved research practices. He has taught courses at the undergraduate, master’s, and doctoral levels on topics including evidence-based reading instruction, evidence-based practice, single-case and group research methods, advanced topics in behavior analysis, and verbal behavior. Dr. Slocum received the 2011 Fred S. Keller Behavioral Education Award from Division 25 of the American Psychological Association and the 2014 Ernie Wing Award for Excellence in Evidence-Based Education from the Wing Institute, and he is a member of the Direct Instruction Hall of Fame.


Systematic Reviews of Single-Case Research: Aims, Challenges, and Good Practices
Friday, September 26, 2025
4:00 PM–4:50 PM
Embassy Suites Minneapolis; Plymouth Ballroom AB
Area: SCI; Domain: Theory
CE Instructor: James Pustejovsky, Ph.D.
Presenting Authors: JAMES PUSTEJOVSKY (University of Wisconsin), DANIEL DREVON (Central Michigan University)
Abstract: Across domains from health care to education, systematic reviews are used to integrate findings from multiple past studies to inform evidence-based practice recommendations. In areas where single-case designs (SCDs) are widely used, researchers have recognized the importance of incorporating such studies into systematic reviews, and this goal has driven many developments in methodology, as well as some controversies. This session will examine how systematic review methods can account for the unique features of SCD studies. We will first describe core organizing principles of systematic reviews and delineate several distinct types, including scoping reviews, evidence maps, and quantitative meta-analyses. We will then consider three fundamental challenges for systematic reviews focused on intervention efficacy: 1) identifying all relevant evidence; 2) determining how to align findings across studies using heterogeneous procedures; and 3) assessing risks of bias in primary SCD studies. Finally, we will highlight practices that can strengthen the quality of systematic reviews of single-case studies. Presenters will then lead a structured discussion inviting participants to consider a) critiques of published systematic reviews of single-case research; b) limitations of the presenters’ methodological practice recommendations; and c) how primary study practices might need modification to support their inclusion in systematic reviews.
Instruction Level: Advanced
Target Audience: Single-case researchers
Learning Objectives: 1. Recognize distinct types of systematic reviews and select an appropriate review type to match research aims. 2. Describe effective tactics for identifying studies for inclusion in a systematic review and explain the limitations of less effective tactics. 3. Select appropriate effect size metrics for quantifying intervention effects depending on the features of studies included in a review.

JAMES PUSTEJOVSKY (University of Wisconsin)
Dr. Pustejovsky is a statistician and Associate Professor in the Quantitative Methods program within the Educational Psychology Department at the University of Wisconsin–Madison. His research focuses on the development and application of statistical methods and software tools for research synthesis and meta-analysis, including effect sizes and meta-analysis of single-case research. He has collaborated extensively with researchers from Special Education, School Psychology, and other fields to conduct large-scale syntheses of single-case research. In recognition of his methodological contributions, he received the 2021 early career award from the Society for Research Synthesis Methodology and the 2023 Frederick Mosteller Award from the Campbell Collaboration. He currently serves as an associate editor of Psychological Bulletin, the premier research synthesis journal of the American Psychological Association.
DANIEL DREVON (Central Michigan University)
Dr. Drevon is an Associate Professor in the Department of Psychology at Central Michigan University, where he directs the School Psychology program. He is a Licensed Psychologist in the state of Michigan and a Nationally Certified School Psychologist. His research focuses on identifying evidence-based academic and behavioral interventions within multi-tiered systems of support and understanding the conditions under which interventions work for school-age children. He has expertise both in conducting applied research on academic and behavioral interventions and in conducting systematic reviews and meta-analyses, with a particular focus on reviews that apply cutting-edge quantitative analyses to synthesize findings from single-case research studies.