This is part 6 of a series that tells a story intended to help people understand the cultural context of changes happening in education.
From “At Risk” to Using Data
Most federal education grants are multi-year grants. When these paradigm changes happened, we were in the middle of evaluating many federal 21st Century Community Learning Center (21st CCLC) grants, a state-agency-funded after-school program grant in every North Carolina county, and several federal Small Learning Community grants, among others. Program staff were now required to write program goals in terms of measurable academic data and to describe their target populations in terms of data. But these grants had been funded under the previous paradigm, when most programs served “at risk” students. The programs were already up and running, already serving students who had been referred without data because, in someone’s professional opinion, they were “at risk.”
Suddenly, program staff had to describe the students as, for example, “students who have scored below grade level on either math or reading standardized tests and have been suspended from school in the last two years” instead of “at risk.” We met with staff for all the programs and helped them describe their target populations in terms of this kind of data. Most program staffs seemed confident that they could describe the students they were already serving in terms of academic and behavior data, so we assumed that they had selected the students to serve from the low-income and minority students who actually met the program criteria. I can’t remember anyone hesitating to describe the students in terms of academic data. One large urban school district described two levels of services it was providing for struggling readers, one for students who were more than two years behind and one for students who were less than two years behind. They gave us the rosters with each student flagged by the program they were in. Staff who were conducting dropout prevention programs described students as being below grade level, having attendance issues, and having been suspended more than twice in their school careers.
How Do You Get and Handle Student Data?
We had to learn to obtain student-level data on attendance, suspensions, standardized test scores, and course grades. Surprisingly, the very people who told us that the students in their programs met very specific criteria on those variables did not know how to get that data, even though the federal law that protects student data (FERPA) allows program evaluators to obtain it and use it for federal grant accountability. The people who had told us they used that data to identify students had no idea where to find it; when they needed data, they would pull paper copies of student records one student at a time. We learned where the electronic data was and how to get it. The data would be siloed on different computers and owned by different departments. It would be coded or in scale scores, and we would need decoder rings to interpret it. Tracking this data down took huge amounts of time. This was 2006, but it continues to be the case in 2019. We have just become a lot more skilled at getting the data, and we have built a network of people who know how. It still takes forever to get what we need.
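To make this concrete, here is a minimal sketch, in Python with pandas, of what assembling that data looked like. Every file name, column name, and cut score here is a made-up assumption for illustration; the real exports varied by district and by department.

```python
import pandas as pd

# Each "silo" exported its own file, owned by a different department.
# These file and column names are hypothetical.
attendance = pd.read_csv("attendance_export.csv")    # attendance office
suspensions = pd.read_csv("discipline_export.csv")   # student services
scores = pd.read_csv("eog_scale_scores.csv")         # testing office

# The testing file held scale scores, not achievement levels. The
# "decoder ring" was a cut-score table mapping ranges of scale scores
# to levels. These cut scores are invented for the example.
def reading_level(scale_score):
    if scale_score < 340:
        return 1  # well below grade level
    if scale_score < 350:
        return 2  # below grade level
    if scale_score < 360:
        return 3  # at grade level
    return 4      # above grade level

scores["reading_level"] = scores["reading_scale_score"].apply(reading_level)

# Merge the silos on a shared student ID, assuming the IDs actually
# match across departments (they often did not, which was part of the work).
merged = (
    attendance.merge(suspensions, on="student_id", how="outer")
              .merge(scores, on="student_id", how="outer")
)
merged.to_csv("student_level_data.csv", index=False)
```

The hard part was never the merge itself; it was finding out which computer held each file, who owned it, and what the codes meant.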
How Did Program Staff Get and Handle Data?
We wondered how program staff could be so sure that the students being served actually fit the data descriptions when the data was never in a usable form and it could take months before anyone could even tell us where the data we needed was. We started interviewing staff and asking what data they used to confirm who was in the programs and where they got it. In nearly every case, they would eventually admit that they did not use actual academic data. They assumed that the students met specific academic criteria because the students had been referred to programs for at-risk students. They knew that the federal regulations had changed, from serving “at-risk” students to serving students who met specific academic and behavioral criteria. But they assumed that if they could get the data, it would simply verify that these students were being correctly served. They thought this was why these students were called “at-risk.” People started using the vocabulary of data without using actual data.
Hear Ye! Hear Ye! Everything is Changing!
Surely, if the nation were totally changing who would be served by these grant-funded programs, there would have been some kind of training or a big announcement. Right? Wrong. What we had to report totally changed, but the programs still operated largely the same way, serving students who were referred because someone saw them as “at risk.” Only the vocabulary changed.
The mismatch was huge. We saw dropout prevention programs where up to 85% of the students served did not meet any of the academic or behavior criteria that described who the program was for. We saw many remedial reading services where more than half of the students served were already proficient readers before being served. We also saw that this misalignment of services damaged the students who did not need them, in many ways. They would lose opportunities for rigorous pathways because they were labeled “at-risk” students. And some programs that were very effective for the students they were intended for looked ineffective when the outcome data was combined for all the students: students who never needed the service had little room to improve on the targeted measures, so their flat outcomes diluted the gains of the students who did.
We started helping schools use data to identify target populations. One school district had us check all students who were referred for a 21st CCLC program that could only serve students who scored below grade level academically. About 75-80% of the students referred were at grade level, a mismatch far larger than chance alone would produce. When we interviewed people to find out why, they often told us they were referring the “at-risk” students who appeared to have the most potential.
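Once a merged student-level file exists, the check that district asked for is only a few lines. This sketch reuses the hypothetical file and column names from the earlier example, with an invented rule that achievement levels 1 and 2 count as below grade level; the real eligibility criteria were set by the grant.

```python
import pandas as pd

# Hypothetical inputs: the program's referral roster and the merged
# student-level file built earlier.
referrals = pd.read_csv("21stcclc_referrals.csv")   # one student_id per referred student
students = pd.read_csv("student_level_data.csv")    # includes reading_level and math_level

roster = referrals.merge(students, on="student_id", how="left")

# Program criterion (illustrative): below grade level in reading OR math,
# where levels 1-2 are below grade level and 3-4 are at or above.
roster["meets_criteria"] = (roster["reading_level"] < 3) | (roster["math_level"] < 3)

mismatch_rate = 1 - roster["meets_criteria"].mean()
print(f"{mismatch_rate:.0%} of referred students were at or above grade level")
```

Note that this sketch counts students with missing test data as not meeting the criteria, which in practice would need its own follow-up.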