To date, rater accuracy when using Direct Behavior Rating (DBR) has been evaluated by comparing DBR-derived data to scores yielded through systematic direct observation. The purpose of this study was to evaluate an alternative method for establishing the comparison scores against which rating accuracy is judged: expert-completed DBR combined with best practices in consensus-building exercises. Standard procedures for obtaining expert data were established and implemented across two sites, and agreement indices and comparison scores were derived. Findings indicate that the expert consensus-building sessions resulted in high agreement among expert raters, lending support to this alternative method for identifying comparison scores for behavioral data.