Schools and districts frequently implement and test new educational interventions with the goal of improving student learning and outcomes. Some interventions are classroom-based; others involve changes to teacher training, support, or compensation systems. High-quality program evaluations are essential to understanding which interventions work and how large their effects are. A matched-comparison group design is useful for studying an intervention's impact because it allows an evaluator to make causal claims without having to randomly assign participants: each program participant is matched to a similar non-participant, and outcomes for the two groups are then compared. This brief provides schools and districts with an overview of the matched-comparison group design and explains how they can use this methodology to answer causal questions about the impact of an educational program.