
This is part 9 of a series that tells a story intended to help people understand the cultural context of changes happening in education.

US DOE and the PART System

As the federal grant world gradually changed, people who didn't work with federal education grants as a major part of their job were often unaware of how significant these changes were. No meetings or memos explained them. The new way of doing things was described but never contrasted with the old way. The US Department of Education started talking about data-driven models, but it didn't point out how different this was from the traditional model of delivering services. Everyone was left to make sense of the changes on their own.

Under President George W. Bush, the federal government created a method for assessing whether federal programs were effective, or whether anyone even knew what a program's intended purpose was. This method was the Program Assessment Rating Tool (PART). PART assigned scores to programs based on whether the goals were appropriate for the students being served, whether the services were related to those goals, and whether student success was measured against quality standards and assessments.

PART gave the rating of “Results Not Demonstrated” to programs that could not show whether they had been effective, usually because they lacked data or clear performance goals. Almost half of US Department of Education grant programs received this rating, and staff were instructed to modify those grants so their effectiveness could be determined. If the modifications were not made, funding for the grant ended. Some education grants were stopped because they couldn't make the modifications, which illustrates how difficult this transition to outcome-based accountability was.

Data-Driven Evaluation Without Data-Driven Programs

While PART was rating federal grants, we were hired by a state agency that oversaw a state-wide grant-funded program to evaluate all of its grants. We were brought in a year after the first grants were implemented, and the second round of grants had already been awarded. Most of the grants had been written under the traditional paradigm, and the people who read and scored the applications to decide which ones to fund were probably used to that paradigm as well, in which services are simply provided to “at-risk” kids. We were asked to document the effectiveness of the grants, much as the PART system was doing in Washington. Even when the grants themselves had not shifted to the new paradigm, accountability for them sometimes had. There was a big disconnect here.

Results Not Demonstrated

The first round of grants was winding down, and we found that most had kept no records that could be used to measure effectiveness. Most had a goal of providing some service, not of changing any outcome. Grant staff were serving kids based on demographic characteristics and knew nothing about their academic or behavior data or what their needs actually were. Most of the services being provided had nothing to do with succeeding in school or graduating, yet those were the overall goals for these funds and what we were supposed to measure to document effectiveness. All we could do for that first round of grants was count how many kids were served, and even that was hard because some programs kept poor records or none at all. Nearly all the programs would have been classified “Results Not Demonstrated.”

Focus on What Can Be Changed

Although the second round of grants had been awarded, those programs had not yet begun. We were allowed to work with the grantees to help them articulate which kids they intended to serve, what outcomes they hoped to achieve for those kids, and how that related to the overall goals of the grants. Helping the grantees describe what they hoped to do in terms of outcomes really opened our eyes. We worked with them one-on-one, held large professional development sessions, and provided technical support. We discovered how foreign this paradigm was to most of them.

We provided research summaries to help them select services aligned to the needs of the kids. Most of them had no idea where to get this kind of information; they were used to marketing literature. We had also analyzed state-wide longitudinal data for each school system to identify the top three changeable reasons kids drop out in each district. For example, if a top reason is that kids can't pass a required math class, that is changeable. If a top attribute is that they are poor, that is not changeable. However, if poor kids drop out because they can't afford math tutors, a grant could target poor kids who have failed a required math class and provide them with a tutor. The grant objective would be that targeted kids retake and pass that math class. We could measure this, and it relates directly to success in school and to being able to graduate.
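To make the targeting concrete, here is a minimal sketch of what that kind of analysis looks like in practice. It assumes a hypothetical student-level file with invented column names (student_id, low_income, failed_required_math, received_tutoring, passed_retake); it is an illustration of the approach, not the tool we actually used.

```python
import pandas as pd

# Hypothetical student-level data; the file and column names are invented for illustration.
students = pd.read_csv("district_students.csv")

# Target group: low-income students who have failed a required math class.
# Being poor is not changeable; failing a required class is.
targeted = students[
    (students["low_income"] == 1) & (students["failed_required_math"] == 1)
]
print(f"Students eligible for math tutoring: {len(targeted)}")

# Measurable objective: of the targeted students who received tutoring,
# what share retook and passed the required math class?
served = targeted[targeted["received_tutoring"] == 1]
pass_rate = served["passed_retake"].mean()
print(f"Share of served students who retook and passed: {pass_rate:.0%}")
```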

Trying to Write Measurable Objectives for Traditional Programs

We found that almost no one knew which data would be relevant for measuring which objectives. Many didn't know what data existed or how to interpret it. Both non-profits and school systems had these grants. Even after our help with writing measurable objectives and with research on what is effective for changing a given outcome, many still wrote objectives that made no sense in this new paradigm. We learned that there are many ways to write objectives that make no sense.

I remember one non-profit whose objective was that, after being taught how to be more organized, some percentage of the kids would take and pass an Advanced Placement class. I pointed out to this team that they were serving middle school students, and there are no middle school AP classes. Some schools wrote goals in terms of a standardized test scale score increasing from one year to the next; they didn't know that scale scores fall in different ranges for each grade and can't be compared outright.

One school system administrator got upset with us because they wanted to write their objective as having 3% of the kids who had failed algebra retake it and pass. We had them get a list of the kids who had failed, and 3% would have been one kid. They said their accountability department told them that 3% is usually a good number for an objective, and they insisted on keeping it. To them it was simply a statement, not a real metric. We pointed out that $175,000 is a lot of money to try to get one kid to pass a math class, and suggested they simply offer one kid $175,000 if he could pass it.

One group of grantees belonged to a non-profit that exists statewide. They told us their mission is to serve “at-risk” kids with certain services that are not research based, or even very easy to describe, so they wanted to be exempt from having measurable objectives. And they were exempt. In all, a third of the grantees either refused or were never able to write objectives for which effectiveness could be measured.

Some grantees wrote decent objectives but discovered at the end, when they compared pre to post outcome data, that many of the kids they served had already exceeded the objective before services began. They had done what we commonly saw: they assumed some demographic characteristic could be used as a proxy for academic data.
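That last pitfall can be caught before a single service is delivered by checking baseline outcome data for the proposed group. A minimal sketch, again with an invented file, column names, and passing threshold, might look like this:

```python
import pandas as pd

# Hypothetical roster of students proposed for services, with baseline scores.
# The file, column names, and passing threshold are invented for illustration.
roster = pd.read_csv("proposed_roster.csv")
PASSING_SCORE = 70

# Flag students who already meet the objective before any services are provided.
already_passing = roster["baseline_math_score"] >= PASSING_SCORE
print(f"{already_passing.mean():.0%} of the proposed group already meets the objective.")

# Restrict services to students who could actually benefit from them.
eligible = roster[~already_passing]
print(f"{len(eligible)} students remain eligible for services.")
```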

We were finally able to get many of the grantees to write meaningful objectives, provide appropriate services, and serve kids who could actually benefit from those services. For the agencies that met these three simple criteria, we were able to evaluate their programs and discover what worked.