As an Assistant Professor at the University of California, Riverside, I serve as the Program Director of our APA-accredited and NASP-approved School Psychology program. I received my PhD in Educational Psychology (with a concentration in School Psychology) from the University of Connecticut in 2014. Back in 2007, I graduated summa cum laude with my Bachelor’s in Psychology from the University of Arizona. After serving as a Postdoctoral Research Fellow and Project Manager with the IES-funded NEEDs2 project from 2014-2015, I joined the Graduate School of Education at the University of California, Riverside. At UCR, I teach undergraduate- and graduate-level courses in behavior assessment and intervention, and in research methodology. I’m a first-generation college graduate. I really like my job.
I’m an Associate Editor for the Journal of School Psychology, as well as a licensed psychologist in the state of California (CA #29540) and a Board Certified Behavior Analyst (Certification #1-15-18892).
You can request copies of articles through ResearchGate, view citations and such on Google Scholar, and download data and project materials from Open Science Framework.
PhD in Educational Psychology, 2014
University of Connecticut
BA in Psychology, Summa Cum Laude, 2007
University of Arizona
Because it matters!
I study school-based behavior support. Most of my work focuses on how to provide teachers and school psychologists with the tools they need to support kids with challenging behavior. I actively work in this area, and care a lot about making evidence-based products that are usable by educational professionals.
But, as I’ve spent more time in education research, and more time examining my own relationship to and role in racism, diversity, equity, and social justice in the United States, it’s become increasingly apparent that good tools for assessment and intervention are a necessary but insufficient condition for progress (to butcher a phrase from causal inference). The fact that it took me until fairly recently to get to that place is disappointing, but it’s also probably not uncommon among folks in my field when you consider that 87% of school psychologists in the U.S. identify as White (by comparison, 50% of kids in U.S. schools identified as White in 2013). So, at the risk of being overly explanatory, let’s lay out some major issues in student behavior support in U.S. schools.
Students who exhibit significant behavior problems experience some of the worst outcomes of any student group. 35% of all kids who are identified with Emotional Disturbance (ED), a disability category that often includes students whose disabilities are defined by their behavior problems, drop out of high school. Fewer than a third of kids with ED are employed post-school. These outcomes aren’t just economic or “behavioral”; they also affect the way kids feel, with behavior problems at age 10 predicting depression at age 21. There’s some evidence to suggest that those emotional problems are a function of the decreased academic achievement that may result from childhood behavior problems.
So, problem behavior matters. But what we perceive as problem behavior is not fixed: we’re adults making judgment calls about what we see as “normal” or “problematic”, and we don’t make those judgments equally for all kids. Students who have a different race or ethnicity than their teacher are significantly more likely to be identified as disruptive, as inattentive, or as rarely completing homework than students whose teacher shares their race/ethnicity. To quote Dee (2005):
“the odds of a student being seen as disruptive by a teacher are 1.36 times as large when the teacher does not share the student’s racial/ethnic designation”
That odds ratio increases to 1.51 when other teacher-level variables, like class size and experience level, are taken into account.
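If odds ratios don’t feel intuitive, here’s a quick illustrative conversion to probabilities; the 20% baseline rate below is a made-up number used only for the sake of the example, not a figure from Dee (2005).

```python
# Illustrative only: converting an odds ratio into probabilities.
# The 20% baseline rate is hypothetical, not a value reported by Dee (2005).

def apply_odds_ratio(baseline_prob, odds_ratio):
    """Return the probability implied by multiplying the baseline odds by an odds ratio."""
    baseline_odds = baseline_prob / (1 - baseline_prob)
    new_odds = baseline_odds * odds_ratio
    return new_odds / (1 + new_odds)

baseline = 0.20  # hypothetical: 20% of students rated disruptive by a same-race/ethnicity teacher
for odds_ratio in (1.36, 1.51):
    print(f"OR = {odds_ratio}: {apply_odds_ratio(baseline, odds_ratio):.1%} rated disruptive")
# OR = 1.36: 25.4% rated disruptive
# OR = 1.51: 27.4% rated disruptive
```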
When we observe what we perceive to be “problem behavior” in schools, our default methods are exclusionary: we send kids to the principal’s office, we give detention, we suspend, and we expel. Students who are African-American experience these outcomes at a rate that vastly outpaces their representation in schools, and this starts as early as preschool.
So, where are we? Well, we have some pretty strong evidence that race and ethnicity play a significant role in how student behavior is perceived, and we have large-scale data sets suggesting that the exclusionary practices that are our “default” for addressing student behavior in schools (e.g., suspension, expulsion) are much more likely to be applied to kids of color.
Organizations like Teaching Tolerance are doing amazing work in facing and addressing issues of social justice in schools, and many folks in school psychology are working to directly examine social justice, diversity, and equity in school psychology. With colleagues at the University of Denver and Howard University, members of my research team and I are asking explicit questions about supporting the mentorship, hiring, and retention of faculty of color in school psychology programs. And I’m working with students in my research lab to examine the extent to which racial/ethnic match or mismatch affects perceptions of student behavior.
In 1974, Rekers and Lovaas published an article in the Journal of Applied Behavior Analysis (JABA) entitled “Behavioral Treatment of Deviant Sex-Role Behaviors in a Male Child,” wherein the authors coached a gender-non-conforming child’s parents to ignore and physically abuse that child when he engaged in gender-non-conforming behaviors. In October 2020, the Society for the Experimental Analysis of Behavior (SEAB) and JABA’s editor-in-chief Dr. Linda LeBlanc published a Statement of Concern regarding Rekers and Lovaas (1974), which described some concerns regarding the paper and then provided justification for the journal’s decision not to retract it. In this response, I describe criticisms of JABA’s rationale for not retracting the paper. I note that the criteria used to determine retraction by SEAB and LeBlanc (2020) were not applied in the manner suggested by official retraction guidelines. I describe contemporaneous criticisms of the Rekers and Lovaas (1974) paper (i.e., Winkler, 1977; Nordyke et al., 1977), which were written by a set of authors that included Donald Baer, one of the foundational figures in applied behavior analysis (ABA). I describe the active discussion within the psychological sciences at the time of publication to depathologize homosexuality. I criticize the 2020 Statement of Concern’s focus on damage to the field of ABA as opposed to the harm done to Kirk, and question errors of commission and omission made by SEAB and LeBlanc in the 2020 Statement of Concern. I end with an argument that Rekers and Lovaas (1974) should be retracted from JABA.
To draw informed conclusions from research studies, research consumers need full and accurate descriptions of study methods and procedures. Preregistration has been proposed as a means to clarify reporting of research methods and procedures, with the goal of reducing bias in research. However, preregistration has been applied primarily to research studies utilizing group designs. In this article, we discuss general issues in preregistration and consider the use of preregistration in single-case design research, particularly as it relates to differing applications of this methodology. We then provide a rationale and make specific recommendations for preregistering single-case design research, including guidelines for preregistering basic descriptive information, research questions, participant characteristics, baseline conditions, independent and dependent variables, hypotheses, and phase-change decisions.
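As a rough illustration of the kind of information those recommendations cover, a preregistration entry for a single-case design study might record fields like the ones sketched below; the field names and example values are hypothetical, not a template taken from the article.

```python
# Hypothetical sketch of fields a single-case design preregistration might record.
# Field names and example values are illustrative, not taken from the article.
single_case_prereg = {
    "title": "Effects of a self-monitoring intervention on academic engagement",
    "design": "multiple baseline across participants",
    "research_questions": ["Does self-monitoring increase academically engaged behavior?"],
    "participants": {"n": 3, "grades": "3-5", "inclusion": "teacher-nominated for off-task behavior"},
    "baseline_condition": "business-as-usual instruction, no self-monitoring",
    "independent_variable": "self-monitoring with tactile prompt",
    "dependent_variables": ["academic engagement (momentary time sampling, 15 s intervals)"],
    "hypotheses": ["Engagement increases when, and only when, the intervention is introduced"],
    "phase_change_rule": "introduce intervention after at least 5 stable baseline sessions",
}

for field, value in single_case_prereg.items():
    print(f"{field}: {value}")
```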
Reliable and valid data form the foundation for evidence-based practices, yet surprisingly few studies of school-based behavioral assessment have implemented one of the most fundamental approaches to construct validation: the multitrait-multimethod matrix (MTMM). To address this gap, the current study examined the reliability and validity of data derived from three commonly used school-based behavioral assessment methods (Direct Behavior Rating – Single Item Scales, systematic direct observations, and behavior rating scales) across three common constructs of interest: academically engaged, disruptive, and respectful behavior. Further, this study included data from different sources, including student self-report, teacher report, and external observers. A total of 831 students in grades 3–8 and 129 teachers served as participants. Data were analyzed using bivariate correlations of the MTMM, as well as single- and multi-level structural equation modeling. Results suggested the presence of strong method effects for all the assessment methods utilized, as well as significant relations between constructs of interest. Implications for practice and future research are discussed.
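In case the MTMM logic is unfamiliar, the core of the approach is a correlation matrix organized by trait and method. The sketch below shows that idea on simulated data with hypothetical variable names; it is not the study’s analysis code, which also involved structural equation models.

```python
# Minimal MTMM-style correlation matrix on simulated data (illustrative only).
# Variable names follow a trait_method pattern; none of this is the study's data or code.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200
traits = ["engaged", "disruptive", "respectful"]
methods = ["dbr", "sdo", "brs"]  # Direct Behavior Rating, systematic direct observation, behavior rating scale

# Simulate a latent score for each trait, then add method-specific noise.
latent = {t: rng.normal(size=n) for t in traits}
data = {f"{t}_{m}": latent[t] + rng.normal(scale=0.8, size=n) for t in traits for m in methods}
df = pd.DataFrame(data)

mtmm = df.corr().round(2)
print(mtmm)
# Monotrait-heteromethod correlations (same trait, different method) should be
# noticeably larger than heterotrait correlations when method effects are modest.
```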
In this study, generalizability theory was used to examine the extent to which (a) time-sampling methodology, (b) number of simultaneous behavior targets, and (c) individual raters influenced variance in ratings of academic engagement for an elementary-aged student. Ten graduate-student raters, with an average of 7.20 hr of previous training in systematic direct observation and 58.20 hr of previous direct observation experience, scored 6 videos of student behavior using 12 different time-sampling protocols. Five videos were submitted for analysis, and results for observations using momentary time-sampling and whole-interval recording suggested that the majority of variance was attributable to the rating occasion, although results for partial-interval recording generally demonstrated large residual components comparable with those seen in prior research. Dependability coefficients were above .80 when averaging across 1 to 2 raters using momentary time-sampling, and 2 to 3 raters using whole-interval recording. Ratings derived from partial-interval recording needed to be averaged over 3 to 7 raters to demonstrate dependability coefficients above .80.
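For a rough sense of how dependability coefficients like those reported above are computed, here is a simplified sketch for a persons-by-raters design; the variance components are made-up numbers, not estimates from the study, and the full analysis also modeled occasions and time-sampling protocols.

```python
# Simplified, illustrative dependability (phi) coefficient for a persons-by-raters design.
# Variance components below are hypothetical, not estimates reported in the study.

def phi_coefficient(var_person, var_rater, var_residual, n_raters):
    """Phi = person variance / (person variance + absolute error variance averaged over raters)."""
    absolute_error = (var_rater + var_residual) / n_raters
    return var_person / (var_person + absolute_error)

# Hypothetical variance components for ratings of academic engagement
var_person, var_rater, var_residual = 45.0, 5.0, 25.0

for n_raters in (1, 2, 3):
    phi = phi_coefficient(var_person, var_rater, var_residual, n_raters)
    print(f"{n_raters} rater(s): phi = {phi:.2f}")
# With these made-up components, averaging across three raters pushes phi above .80,
# mirroring the general pattern described in the abstract.
```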
The implementation of multi-tiered systems in schools necessitates the use of screening assessments that produce valid and reliable data to identify students in need of tiered supports. Data derived from these screening assessments may be evaluated according to their classification accuracy, or the degree to which cut scores correctly identify individuals as at-risk or not at-risk. The current study examined the performance of mean scores derived from more than 1,700 students in Grades 1, 2, 4, 5, 7, and 8 using Direct Behavior Rating – Single Item Scales. Students were rated across three time points (Fall, Winter, Spring) by their teachers in three areas: (a) academically engaged behavior, (b) disruptive behavior, and (c) respectful behavior. Classification accuracy indices and comparisons among behaviors were derived using Receiver Operating Characteristic (ROC) curve analyses, partial area under the curve (pAUC) tests, and bootstrapping methods to evaluate the degree to which mean behavior ratings accurately identified students who demonstrated elevated behavioral symptomology on the Behavioral and Emotional Screening System. Results indicated that optimal cut scores for mean behavior ratings and a composite rating demonstrated high levels of specificity, sensitivity, and negative predictive value, with sensitivity point estimates for optimal cut scores exceeding .70 for individual behaviors and .75 for composite scores across grade groups and time points.
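To illustrate the general shape of this kind of classification-accuracy analysis, the sketch below runs an ROC analysis on simulated screening scores with scikit-learn; the data, cut score, and accuracy values are entirely illustrative and are not the study’s results (the study also used pAUC tests and bootstrapping).

```python
# Illustrative classification-accuracy analysis on simulated screening data.
# Scores and the at-risk criterion below are simulated; this is not the study's data or code.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
n = 500
at_risk = rng.binomial(1, 0.15, size=n)               # 1 = elevated on the criterion screener
ratings = rng.normal(loc=7 - 3 * at_risk, scale=1.5)  # lower behavior ratings for at-risk students

# Lower ratings indicate risk, so flip the sign of the score for the ROC analysis.
fpr, tpr, thresholds = roc_curve(at_risk, -ratings)
auc = roc_auc_score(at_risk, -ratings)

# Youden's J picks the cut score that maximizes sensitivity + specificity - 1.
j = tpr - fpr
best = j.argmax()
cut = -thresholds[best]
print(f"AUC = {auc:.2f}; optimal cut score = {cut:.2f}")
print(f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```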
Recent and Upcoming
In collaboration with colleagues across the country, we’re committed to: