We’ve got decades of work to draw on when it comes to figuring out which practices are most effective for supporting student behavior. But until recently, we didn’t have good ways of looking across studies to determine whether a practice could generally be considered “effective.” This is largely because school psychology and special education rely heavily on a method of causal inference called single-case design.

Single-case design uses different logic from between-groups designs, the experimental designs most folks are familiar with: the ones that typically rely on random assignment of participants to treatment and control groups to determine whether an intervention was effective. Instead, single-case design makes multiple predictions about when and how effects will be observed over time to infer causality (i.e., to evaluate whether the intervention was the thing that caused the change). Because of this uniqueness, and single-case research’s long-standing aversion to statistical methods, we didn’t have great ways of synthesizing results across single-case studies.

In the last decade or so, however, we’ve made a ton of progress toward doing this defensibly. So I’ve worked with colleagues to apply these new methods to the literature to identify what works in behavior interventions.