
A school system hired Edstar to evaluate all of the dropout prevention programs in the district.  They wanted to move toward data-driven decisions rather than professional judgment.  This was a medium-to-large district with many high schools.  We got data on everyone who had been served by their dropout prevention programs over the previous few years.  We had them articulate for us how students were identified for these programs and what the programs' objectives were.


Students were referred to the programs by teachers and school counselors for being “at-risk” but “having potential.” However, they told us that all of the students were below grade level on standardized tests, had poor reading skills, and had behavior problems, whether suspensions or attendance issues.  They said their objectives were that these students would pass Algebra I and English I, both of which were required.

Beliefs and Skills of the Staff

Knowing What At-Risk Means: Although they had specific data profiles in mind for who the programs should serve, instead of pulling a list of kids who fit that profile, they asked for referrals of at-risk kids.  Staff who referred kids assumed low-income and minority kids were at-risk, and that the brightest of them “had potential,” which is why they referred so many academically successful kids.

How to Identify Kids to Align Services:  They did not know how to use a computer to get a roster of kids who needed help passing Algebra I or English I.  Many of the kids being served had already passed those courses.
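In practice, pulling that roster is a simple query against a student-data export. Here is a minimal sketch in Python with pandas, assuming a hypothetical file and column names (student_records.csv, passed_algebra1, math_percentile, and so on are illustrative, not the district's actual fields):

```python
# Sketch of the roster query the district needed: students who still
# must pass the required courses AND who fit the stated at-risk profile.
# All file and column names below are hypothetical.
import pandas as pd

students = pd.read_csv("student_records.csv")

# Students who have not yet passed both required courses
# (assumes passed_algebra1 / passed_english1 are boolean columns).
needs_courses = ~(students["passed_algebra1"] & students["passed_english1"])

# The district's stated profile: below grade level on standardized
# tests, with suspension or attendance problems.
below_grade_level = (
    (students["math_percentile"] < 50) | (students["reading_percentile"] < 50)
)
behavior_flags = (students["suspensions"] > 0) | (students["absence_rate"] > 0.10)

roster = students[needs_courses & below_grade_level & behavior_flags]
print(roster[["student_id", "school"]])
```

A query like this replaces the referral process entirely: instead of asking staff who they think is at-risk, the program list comes straight from the profile the district already defined.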

Outcome

We got the data and found that more than 80% of the students they had served in their dropout prevention programs had always scored at or above grade level in both math and reading, had never been suspended, and had no attendance problems.

 
We compared the students who did not fit the profile for the program to a matched comparison group.  The kids who should never have been in the program, but were, dropped out at a significantly higher rate than their control group; about a fourth of them dropped out. Being in a dropout prevention program they didn't need actually harmed these kids.
 
The kids who did fit the profile the programs were designed for dropped out at a significantly lower rate than their control group. This let us identify which of the programs were effective for the students they were intended to serve.
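The comparison itself can be as simple as a two-proportion test of dropout rates between a served group and its matched controls. Below is a sketch using statsmodels; the counts in the example are placeholders for illustration only, not the district's data:

```python
# Two-proportion z-test: does the served group's dropout rate differ
# significantly from its matched control group's rate?
from statsmodels.stats.proportion import proportions_ztest

def compare_dropout_rates(dropouts_served, n_served,
                          dropouts_control, n_control):
    """Return the z statistic and p-value comparing the two rates."""
    stat, p_value = proportions_ztest(
        count=[dropouts_served, dropouts_control],
        nobs=[n_served, n_control],
    )
    return stat, p_value

# Hypothetical counts, for illustration only.
z, p = compare_dropout_rates(dropouts_served=50, n_served=200,
                             dropouts_control=24, n_control=200)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Run once for the kids who fit the profile and once for those who didn't, and the pattern described here falls out: one group significantly below its controls, the other significantly above.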
 
Looking at the data as a whole, the programs looked ineffective, but they were effective when they served the right kids. We helped the district use data to identify kids in the intended target groups and showed them how asking for referrals produced mostly the wrong kids.  Once they started serving the intended students, their dropout rate significantly decreased.  Their achievement gap also closed, probably because they quit placing their brightest minority students into dropout prevention instead of the most rigorous courses.