This is part 10 of a series that tells a story intended to help people understand the cultural context of changes happening in education.
Federal Grants Changing How to Target Kids
As federal grants shifted to a more data-driven model, educators who had worked with federal programs or federal grants in the past were used to the traditional way of doing things: identify a demographic group of kids and use the grant funds to provide them with services or resources. Vendors of educational resources marketed products as being for certain demographic groups, because that was how the money was spent. Vendors rarely claimed their products would lead to any specific outcome; they simply said the products were great for a particular demographic group. They weren't trying to change the kids so that they no longer needed the resource.
Changing Role of Record Keeping
Because there had traditionally been no requirement to measure changes in student outcomes, there had never been a need to keep records that could be used to summarize program data. Most programs kept paper files, with a page of information about each student. The purpose of these records was so program staff could look something up if they needed to, like who could pick a child up or whether a child was allergic to peanuts.
Services Were Supposed to Change
Products and services purchased with grant funds were rarely based on evidence of effectiveness. But our evaluation requirements had changed: No Child Left Behind and the new PART rating system required documentation of grant effectiveness. We were suddenly required to provide research linking the services to the intended outcomes, and we had to compare pre- and post-outcome data to quantify the effects of programs. Program staff rarely looked at our evaluation requirements. They understood that the requirements had changed, but almost no one realized that the change would affect anything beyond the evaluation reports, so they went about business as usual. As a result, we might be asked to write about how a field trip to a skating rink and a motivational speaker led to more students being proficient in math, only to find that most of the kids served by the program had been proficient in math before they were served. Nearly every program we evaluated from 2006 to 2010 was like this, and so were many outside that window.
Sacred Cows
When we started helping school districts use data to determine whether programs or resources were effective, people sometimes told us that they could not stop using a program because someone very important in the district had chosen it and they had to keep it. A high muckety-muck in the district held the expert opinion that the program was good, and that opinion had to be respected over what the data or research showed. One assistant superintendent told us that some programs are "sacred cows": they can't quit using them because of who selected them, and effectiveness does not matter. If they were now required to produce a report on the effectiveness of a program, many saw that as just something to check off the list. The results had no impact on whether they continued to use the program or resource.
Started Edstar Analytics, Inc.
Our business was evaluating programs, but we started a second company to teach educators how to work within this new paradigm, in which the students enrolled in a program needed to meet certain data criteria, the services needed to be linked by research to the intended outcome, and the evaluation would compare pre- and post-outcome data. Records would be kept so that these things could be known. Rather than simply keeping enrollment records on paper, we needed electronic attendance records and academic or behavior data.
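To make the paradigm concrete, here is a minimal sketch of the kind of pre/post comparison it calls for. The file name, the column names (student_id, pre_score, post_score, sessions_attended), and the attendance threshold are illustrative assumptions, not an actual district's records or our exact procedure.

```python
# Minimal sketch of a pre/post outcome comparison for a program's served students.
# File and column names below are hypothetical placeholders.
import pandas as pd

records = pd.read_csv("program_records.csv")

# Keep only students who actually received a meaningful dose of the service
# (threshold of 10 sessions is an assumption for illustration).
served = records[records["sessions_attended"] >= 10]

# Compare outcomes before and after the program for the served group.
summary = served[["pre_score", "post_score"]].describe()
gain = (served["post_score"] - served["pre_score"]).mean()

print(summary)
print(f"Average pre-to-post change for served students: {gain:.1f} points")
```

None of this is possible, of course, unless enrollment, attendance, and outcome data are kept electronically in the first place, which is why record keeping comes first.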
There were many initiatives to get educators to use this new data-driven paradigm. School Improvement Plans were supposed to be data-driven. One large school district called us in to work with its School Improvement Teams to help them finish School Improvement Plans they had already started. The teams had all had Poverty Training to learn how to empathize with poor kids, plus some training about how boys are not cut out for school. They had written their plans with no data, didn't know where to go from there, and so we were hired.
Examples of Worlds Colliding
One school had written a plan to get mentors for Black males in order to raise the percentage of students proficient in math. When we got the school's data, it showed that most students who were not proficient in math were white; the school had few Black males, and nearly all of them were proficient in math. Another school had taken away from its Poverty Training that poor kids are not successful in school because of their speech register, their spiritual lives, and things like having no light at home for doing homework. They wrote a plan for every staff member to secretly select a poor (or thought-to-be-poor) kid and try to change that kid to be more like a non-poor kid. They were also going to stop giving the poor kids homework, on the theory that they have no lightbulbs at home and that, at the laundromat, they have to watch their clothes so they won't be stolen and therefore can't do homework. When we pulled the data on who they were serving, most of the kids were not poor, and they were already successful at school.
How Do You Address This?
We developed training material to teach educators how to use data, and how to think differently about using data. We learned some very interesting things about how people think. We had School Improvement Team members describe the programs in their school, who they were for, and what their intended outcomes were. It turned out there was no common vision of what the programs were for. Three teachers in the same school who all referred kids to a program would each describe who and what it was for differently. They might agree that it was for poor kids, or for Black males, but when we asked them to describe, academically or behaviorally, who it was for and what it was trying to change, they usually disagreed.
Yikes
The way these programs were run was wrong on so many levels. Kids who were not poor were assumed to be poor, harmful stereotypes were assigned to them, and useless (sometimes harmful) services were rendered. A simple look at the data would have told staff which students actually needed academic or behavioral interventions, and research-based services could then have been provided to kids who could actually benefit from them.
In all of our work evaluating the effectiveness of education programs, we rarely, if ever, saw a program have a measurable impact. When we started helping schools use data to align services, use research, and keep records that let them reflect on what they were doing, we saw amazing results. One school we worked with looked at its data and found 50 low-income 8th graders who were more successful in math than the kids in the top math class, yet who were sitting in standard or even remedial math classes. The school enrolled them in 8th grade algebra. They were all successful, and the school's achievement gap in math shrank by double digits. The same outcome was repeated in several other schools.

We also helped an elementary school set up record keeping for Title 1 reading pull-out services. In doing so, the school found that some top-level readers were being pulled more than once a week, by two different people, for the same services. These students should not have been getting any of those services; they were missing core instruction to receive help they didn't need. The school had been using income to decide who received remedial reading services and kept no overall records for monitoring who got what service. By using reading levels instead of income to assign remedial reading services, and by keeping standard records to ensure that kids didn't get double service, the school's percentage of proficient readers increased by double digits in one year.
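The placement check in the first example is simple enough to sketch. This is only an illustration, assuming hypothetical column names (student_id, course_level, math_score, low_income) and using the top class's median score as the benchmark; the actual criteria a school uses would differ.

```python
# Rough sketch of the placement check described above: flag students outside
# the top math class whose scores match or beat that class's typical score.
# File and column names are hypothetical placeholders.
import pandas as pd

students = pd.read_csv("grade8_math.csv")

top_class = students[students["course_level"] == "Algebra I"]
benchmark = top_class["math_score"].median()

misplaced = students[
    (students["course_level"] != "Algebra I")
    & (students["math_score"] >= benchmark)
]

print(f"{len(misplaced)} students score at or above the typical Algebra I "
      "student but are not enrolled in Algebra I.")
print(misplaced[["student_id", "course_level", "math_score", "low_income"]])
```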
We thought people would be thrilled with these kinds of results. Were we ever wrong.