
This is part 7 of a series that tells a story intended to help people understand the cultural context of changes happening in education.

Different Kinds of Data

During this time, when No Child Left Behind was in full swing and academic data was first required to be used to measure program success, program staff routinely told us that their programs served low-achieving students.  But when we obtained the actual academic and behavior data for the students being served, on average only about 15% of the students in any program actually met the criteria the program staff had used to describe the kids being served.  To their credit, these programs had begun under the old paradigm.  Staff now had to decide what “at risk” meant in terms of academic and behavior data, yet they had no way to look at a data profile of the students they were serving; assembling one was a Herculean task for my staff.  So they just made assumptions about what the data would probably be, and they were very confident.  For one program in which some kids were supposed to be more than two years below grade level in reading, and the others fewer than two years below, we found that the majority of the kids were reading above grade level.  We asked how they had identified the kids for the program.  They had to reflect and think, and then realized they had identified them by how poor they thought they were.  They did this by what bus the kids rode and the clothes they wore.  We often heard that staff would look at the mothers’ pocketbooks to determine which students were at risk.

The reactions to the information about which kids were actually in these programs have always interested me.  If I weren’t so busy, I would love to do a research study on those reactions.  In one district that had a Small Learning Community grant and was using it to fund Freshman Academies in its high schools, we pointed out that the selection criteria for the program didn’t actually match the data.  The program director and all but one high school project leader were shocked, and then changed their programs.  They had us help them use data to identify the kids who actually met the criteria they had described.  They helped the other kids transition out of the dropout-prevention Freshman Academies and started serving kids who were academically and behaviorally at risk.  The other school just refused.  Its site director explained that he did not like data, and that the research says minority students are at risk of dropping out.  So he had identified students with minority-sounding names and selected them at random for the program.  He did not want to use data.  At that time, this was his choice.

Cognitive Dissonance

This was a time of cognitive dissonance.  Many of the federal grants that provided services low-income families couldn’t afford began requiring that programs serve only students who scored below grade level on standardized tests.  For example, students who couldn’t read at grade level might be offered free vaccinations or after-school care. (I know this makes no sense.  They were confusing the inability to read well with income level, and requiring academic criteria where doing so made no sense.)  I could see from the folks I worked with daily in the schools that this paradigm shift was confusing.  Even the U.S. Department of Education (US DoE) seemed confused.  When it tried to make the transition to serving students who struggled academically, it instead created programs that provided after-school care (with or without remediation) to kids who scored below grade level.  Some parents told their children to fail standardized tests so they could keep the high-quality, free after-school care.  We learned this by interviewing program staff and parents.

When the US DoE threatened to cancel the 21st CCLC after-school funding grants, I wrote to them and provided years of well-documented findings on the value of after-school programs for kids.  The US DoE had hired a company to evaluate the program’s effectiveness in terms of raising the achievement of low-income students.  We put a lot of effort into helping state agencies understand the difference between programs that provide services low-income students need and their families cannot afford, and programs whose goal should be helping kids overcome a deficiency.  We did volunteer work.  Through the Grants Information Network, we were instrumental in helping both the agencies that give grants and those that apply for them understand what was happening with the paradigm change.  We saw that this was very difficult for many people.