Research Status – Evidence for ESSA

Struggling Readers


In December 2015, Congress passed the Every Student Succeeds Act (ESSA). The law provides federal funding for programs with evidence of effectiveness. The Center for Research and Reform in Education (CRRE) at Johns Hopkins University has compiled a website evaluating the level of evidence-based research support for various programs.

This page contains links to research summaries for programs aimed at struggling readers. These may include studies of students whose reading difficulties stem from causes other than dyslexia, so inclusion on this list does not necessarily mean a program will meet the needs of dyslexic students.

The ratings of “strong”, “moderate”, and “promising” reflect the quality of the research methodology used, not the overall strength of the research evidence. For example, a program with only a single study of fewer than 100 students and a very small effect size may be rated as “strong” because of its rigorous study design, while another program with multiple studies and a larger reported effect size may be rated as only “promising” because of weaker methodology in those studies. For purposes of this page only, Cohen’s conventions have been used to characterize effect size (0.2 = small; 0.5 = medium; 0.8 = large).

A “Solid Outcomes” rating means that the evidence includes at least two studies with effect sizes of at least +0.20.
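For readers who want to see how such an effect size is actually calculated, the sketch below computes Cohen’s d (the difference in group means divided by the pooled standard deviation) and labels it using the 0.2/0.5/0.8 conventions cited above. The test scores in the example are made up for illustration only and do not come from any study listed on this page.

```python
import math

def cohens_d(mean_treat, sd_treat, n_treat, mean_ctrl, sd_ctrl, n_ctrl):
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_treat + n_ctrl - 2))
    return (mean_treat - mean_ctrl) / pooled_sd

def label(d):
    """Label an effect size using Cohen's conventions (0.2 small, 0.5 medium, 0.8 large)."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"

# Hypothetical reading-test scores: intervention group vs. comparison group.
d = cohens_d(mean_treat=105, sd_treat=15, n_treat=60,
             mean_ctrl=100, sd_ctrl=15, n_ctrl=60)
print(f"d = {d:.2f} ({label(d)})")  # d = 0.33 (small), so it would clear the +0.20 bar
```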

Programs with Strong Research Evidence

The programs listed below have at least one randomized, well-conducted study showing significant positive student outcomes, and no studies showing significant negative outcomes. Randomized means that assignment of students to either the study group or the comparison group was random.

Programs with Moderate Research Evidence

These programs have at least one quasi-experimental (i.e., matched), well-conducted study showing significant positive student outcomes, and no studies showing significant negative outcomes. These studies compare outcomes for groups of students receiving an intervention with outcomes for comparison groups of similar students, but assignment to the groups is not random. For example, a study might compare student outcomes in one classroom or school where the intervention has been introduced with outcomes of demographically similar students in nearby classrooms or schools.

Programs with Promising Research Evidence

These programs have at least one correlational, well-conducted study with statistical controls for selection bias showing significant positive student outcomes, and no studies showing significant negative outcomes. Correlational studies report outcomes based on observations and measurements of the study group, but do not have a control or comparison group.

Not Proven

These programs have qualifying studies that found no significant positive outcomes.

Programs with No Evidence

For the programs listed below, no studies met the inclusion requirements.

This page was most recently updated on November 8, 2023.
