As an Associate Professor at the University of California, Riverside, I serve as the Program Director of our APA-accredited and NASP-approved School Psychology program. I received my PhD in Educational Psychology (with a concentration in School Psychology) from the University of Connecticut in 2014. Back in 2007, I graduated summa cum laude with my Bachelor’s in Psychology from the University of Arizona. After serving as a Postdoctoral Research Fellow and Project Manager with the IES-funded NEEDs2 project from 2014 to 2015, I joined the Graduate School of Education at the University of California, Riverside. At UCR, I teach undergraduate- and graduate-level courses in behavior assessment and intervention, as well as research methodology. I’m a first-generation college graduate. I really like my job.
I’m an Associate Editor for the Journal of School Psychology, as well as a licensed psychologist in the state of California (CA #29540) and a Board Certified Behavior Analyst (Certification #1-15-18892). You can request copies of articles through ResearchGate, view citations and such on Google Scholar, and download data and project materials from Open Science Framework.
Outside of work, I host a weekly radio show called the Quadraphonic Rock Block on UCR’s campus radio station KUCR, hang out with my family, drink coffee, and am a very mediocre climber.
PhD in Educational Psychology, 2014
University of Connecticut
BA in Psychology, Summa Cum Laude, 2007
University of Arizona
In 1974, Rekers and Lovaas published an article in the Journal of Applied Behavior Analysis (JABA) wherein the authors coached a 4-year-old child’s parents to ignore and physically abuse him when he engaged in behaviors that the authors identified as inappropriate for a child whose sex assigned at birth was male. In October 2020, a Statement of Concern regarding Rekers and Lovaas (1974) was published in JABA (SEAB & LeBlanc, 2020), which described concerns regarding the study and provided justification for the journal’s decision not to retract it. In this response, I provide a counterpoint to the Statement of Concern, arguing that (a) the available evidence strongly suggests that the original study was unethical and misaligned with the principles of ABA, and (b) the evidence presented to support its contemporaneous ethicality is insufficient. I close by arguing that Rekers and Lovaas (1974) should be retracted and by discussing the critical role of ethics and social significance for the field of ABA.
In this paper, we provide a critique of the What Works Clearinghouse (WWC) Standards for Single-Case Research Design (Standards 4.1). Specifically, we (a) recommend the use of visual analysis to verify a single-case intervention study’s design standards and to examine the study’s operational issues, (b) identify limitations of the design-comparable effect-size measure and discuss related statistical matters, (c) review the applicability and practicality of Standards 4.1 for single-case designs (SCDs), and (d) recommend the inclusion of content pertaining to diversity, equity, and inclusion in future standards. Within the historical context of the WWC Pilot Standards for Single-Case Design (1.0), we suggest that Standards 4.1 may best serve as standards for meta-analyses of SCDs, but that such standards will need to make clear distinctions among the various types of SCD studies included in any research synthesis. In this regard, we argue that any meta-analysis emanating from the WWC should be transparent about which SCD studies meet design standards and which do not. The intent of these recommendations is to advance the science of SCD research, both in research synthesis and in promoting evidence-based practices.
Research indicating that many study results do not replicate has raised questions about the credibility of science and prompted concerns about a potential reproducibility crisis. Moreover, most published research is not freely accessible, which limits the potential impact of science. Open science, which aims to make the research process more open and reproducible, has been proposed as one approach to increase the credibility and impact of scientific research. Although relatively little attention has been paid to open science in relation to single-case design, we propose that open-science practices can be applied to enhance the credibility and impact of single-case design research. In this article, we discuss how open-science practices align with other recent developments in single-case design research, describe four prominent open-science practices (i.e., preregistration, registered reports, data and materials sharing, and open access), and discuss potential benefits and limitations of each practice for single-case design.
Despite the increasing recognition and use of restorative justice (RJ) in U.S. schools, limited research has evaluated its effectiveness for school violence prevention and response. To date, there is no standardized method for RJ implementation. This systematic literature review therefore investigates peer-reviewed studies on the application of RJ practices in K-12 school settings. Ten articles were included in the review. Results indicate a high degree of variability in how RJ practices are implemented and evaluated in schools. However, the majority of studies reported positive outcomes, including improved social relationships and reductions in office discipline referrals. The utility of RJ as a school violence prevention and intervention approach is discussed, along with future research directions.
To draw informed conclusions from research studies, research consumers need full and accurate descriptions of study methods and procedures. Preregistration has been proposed as a means to clarify reporting of research methods and procedures, with the goal of reducing bias in research. However, preregistration has been applied primarily to research studies utilizing group designs. In this article, we discuss general issues in preregistration and consider the use of preregistration in single-case design research, particularly as it relates to differing applications of this methodology. We then provide a rationale and make specific recommendations for preregistering single-case design research, including guidelines for preregistering basic descriptive information, research questions, participant characteristics, baseline conditions, independent and dependent variables, hypotheses, and phase-change decisions.
Reliable and valid data form the foundation for evidence-based practices, yet surprisingly few studies of school-based behavioral assessment have implemented one of the most fundamental approaches to construct validation: the multitrait-multimethod matrix (MTMM). To address this gap, the current study examined the reliability and validity of data derived from three commonly utilized school-based behavioral assessment methods (Direct Behavior Rating – Single Item Scales, systematic direct observations, and behavior rating scales) across three common constructs of interest: academically engaged, disruptive, and respectful behavior. Further, this study included data from multiple sources, including student self-report, teacher report, and external observers. A total of 831 students in grades 3–8 and 129 teachers served as participants. Data were analyzed using bivariate correlations within the MTMM, as well as single- and multi-level structural equation modeling. Results suggested the presence of strong method effects for all assessment methods utilized, as well as significant relations between the constructs of interest. Implications for practice and future research are discussed.
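For readers less familiar with the MTMM approach, the underlying logic is simple: each trait-method pairing becomes a variable, and the full correlation matrix among those pairings is inspected for convergent validity (same trait, different methods) and method effects (different traits, same method). The sketch below is purely illustrative and is not the study's analysis code; it uses simulated data, and all names and values are hypothetical.

```python
# A minimal sketch of assembling an MTMM correlation matrix.
# Rows/columns are trait-method combinations, so same-trait/
# different-method cells index convergent validity and
# different-trait/same-method cells index method effects.
# All data here are simulated for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 200  # hypothetical sample size

traits = ["engaged", "disruptive", "respectful"]
methods = ["DBR", "SDO", "BRS"]  # hypothetical method labels

# Simulated scores for each trait-method combination
data = pd.DataFrame(
    {f"{t}_{m}": rng.normal(size=n) for t in traits for m in methods}
)

# The full multitrait-multimethod matrix of bivariate correlations
mtmm = data.corr()

# Convergent validities: same trait measured by different methods
for t in traits:
    cols = [f"{t}_{m}" for m in methods]
    print(f"\n{t}:")
    print(mtmm.loc[cols, cols].round(2))
```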