This is part 5 of a series that tells a story intended to help people understand the cultural context of changes happening in education.
I am calling the cross-over period the time when education is shifting from aligning services with demographic characteristics to aligning services and opportunities with academic data. There is a lot of confusion during this period. We are still in it, although we have moved further into the data era. It is still slow going.
Reverse Logic Confirms Perceptions
When students are viewed as at-risk in a vague, undefined way (e.g., they seem to be poor, so they probably struggle academically), and then served by grant-funded programs that offer remedial support or some snake oil, educators perceive them as different from other kids, and not in a good way. During the cross-over period, when federal grants suddenly required describing the kids served in terms of academic or behavior data, teachers and administrators routinely described the kids served by grant-funded programs as failing, below grade level, and having behavior problems. Sometimes they would be very specific about this data. We learned how common it was for people who had never seen the data to describe the kids served by a program this way. It did not take us long to realize that people’s vocabulary changed before anything else did. When we asked how they knew, we usually learned that the students were in a program that staff assumed was for kids who fit these criteria. The logic ran in reverse: the students must fit the criteria, because otherwise they would not be in these programs.
Cross-Over Confusion
We saw tremendous confusion during the initial cross-over period. We evaluated many Supplemental Educational Services (SES) programs during the NCLB era. These programs offered additional tutoring services paid for with federal Title I funds. Schools in which a subgroup did not make adequate progress for two consecutive years had to provide these services. One university had a federal grant to oversee these programs in many school districts, and we evaluated all of those. Other school districts hired us to evaluate their SES programs. The law for this sanction said schools had to serve all students who were failing or at risk of failing; it also said they had to serve all students who received free or reduced-price lunch, and only those students. (Read that last sentence again and think about it. It was telling to us that of all the school districts whose SES programs we evaluated, none were confused by that contradictory requirement.) The goal of the program was to bring the students served up to grade level.
We got standardized test scores for all the kids being served, and most of them were at or above grade level prior to service. When 25% of a subgroup reads below grade level, the other 75% does not need remedial services. (We applied for and received a federal Department of Education SBIR grant to study the confusion going on during the cross-over period. We learned a lot.)
The idea that students who fit the demographic “at-risk” profile but are academically successful do not need remedial academic services was new at this time. Those students in the other 75% were getting services, and all the services we saw in the SES programs were remedial. Schools could either hire outside providers or provide the services themselves. We saw one school district provide the services itself and do a very good job: it had book clubs and literacy lessons created by its own teachers. The other programs paid for snake oil, and the remedial snake oil was not even good. Snake-oil vendors were springing up all over the country because of this program.
Apples and Oranges
We started conducting professional development at this time to help school staff understand the difference between students’ demographic characteristics and their academic needs. Because the old paradigm had, for decades, treated these two data points as the same thing and let them drive how funds were spent and how many programs were designed, this transition was very difficult for many people. It was common to find two programs in one school, one operating in the old paradigm and meant to serve only poor kids, and another operating in the new paradigm and meant to serve only kids with low reading or math scores. We discovered that people commonly used enrollment in these programs as a proxy for actual data. For example, staff would tell us that the students being served by Program A were all reading below grade level. When we pulled the data and discovered that fewer than half of them read below grade level, we would ask why they had assumed the kids read below grade level. We got many answers, but a common one was that the students were enrolled in another program, or receiving another service, that staff assumed was for kids who read below grade level. There was a domino effect to these assumptions.
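To give a rough sense of the kind of cross-check described above, here is a minimal sketch of comparing a program roster against actual reading data. This is purely illustrative and not Edstar’s actual tooling; the file name, column names, and the below-grade-level flag are all invented for this example.

```python
# Hypothetical sketch: compare who is enrolled in "Program A"
# with who actually reads below grade level.
import pandas as pd

# Assumed columns: student_id, enrolled_program_a (bool), reads_below_grade_level (bool)
students = pd.read_csv("students.csv")

# Keep only the students served by the program in question
served = students[students["enrolled_program_a"]]

# Fraction of served students who actually read below grade level
share_below = served["reads_below_grade_level"].mean()

print(f"Students served by Program A: {len(served)}")
print(f"Share actually reading below grade level: {share_below:.0%}")
```

In the situations described above, a check like this would often show that far fewer than half of the students served actually read below grade level, contrary to what staff assumed.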
Bad Reactions
A few school districts reacted very badly to the information we gave them about academically successful kids being served by programs that were supposed to serve students who read below grade level, and about other mismatches. Although they had hired us to keep them in compliance with their federal grants, or to document whether programs were effective, a few educators seemed to think that if we went away everything would go back to normal and the “at-risk” kids would stay “at-risk.” They also seemed to think that Edstar had simply decided to start looking at reading scores for accountability because of our philosophy. They did not realize that this change came with the move into the 21st century and the era of data, and that it was being dictated by the Department of Education. The handful of educators who thought they could roll back time, and keep everything as it was if only Edstar and our idea of using data would go away, attacked our reputation by saying we used the wrong data or that we made people feel bad by reporting what the data showed. It was never really clear what they were trying to communicate, but it was telling that they thought the idea of using data came from Edstar.