Assumptions That Support Decisions and Processes
As we have evaluated grant-funded education programs, helped schools with School Improvement Plans, and helped educators move to data-driven decisions, we've seen practices that rarely, if ever, produce good academic outcomes. We've mapped out the Skills and Beliefs that make those practices seem sensible to the educators who design these programs. We think that reviewing what we have seen may help educators learn what they could be doing to get much better outcomes. A few examples follow, along with an explanation of the concept.
I was 19 years old, working at Kentucky Fried Chicken, when I learned that people can think in ways that seem logical to them, but that are wrong, or won’t lead to the best outcome. This aha moment (more on that later) served me well years later when I was teaching math.
When I taught math, I noticed some kids understood correctly what I was teaching, some didn't understand and knew they didn't understand, while a third group confidently acted as if they totally understood, yet got all the wrong answers. I thought kids in this third category must be working from an assumption that made their processes seem logical to them, even though they were wrong. I tried to figure out how they were thinking, and I saw trends in the wrong thinking. Often, kids would read a word problem, correctly deduce that the answer was going to be a larger number than any number in the problem, then choose to multiply or add, incorrectly thinking that those two operations always result in larger numbers. In elementary school, their world consisted only of whole, positive numbers, which did produce larger numbers when added or multiplied. So it wasn't crazy that they believed that. I got good at figuring out what assumptions people were basing their thinking on.
Skill: Knowing What Can Be Known
When I worked in the kitchen at Kentucky Fried Chicken (before it became "KFC," because "fried" wasn't a dirty word yet), my job was to pack boxes of chicken as they were ordered. One day, a Little League team called ahead to order a prepared box for each team member. Each box would contain two legs, a thigh, and a breast. The boss came in to help cook because of this large order. He cooked lots of chicken, but it was nowhere near what I needed to fill the order. The team came in and the order was not ready. He panicked. I did a little math and told him exactly what I needed. He argued that there was no way for me to know this, but because he was panicked and had no plan for fixing the mess, he cooked what I told him to. It turned out to be exactly right. He was stunned and thought I had some sixth sense that allowed me to know how much chicken to cook. When I asked him how he determined how much chicken was needed to fill a big order, he told me we needed to pack "a lotta boxes," so we needed to cook "a lotta chicken." This was when I became aware that some people don't know what can be computed exactly. Since then, I have noticed other people making decisions without knowing what could be known.
This is more common than you may think, especially as more things can be counted, computed, and known because of technology and computers. I call this the “lotta chicken” way of thinking. Not only do the people not know how to find the answer, they don’t know what can be known.
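The chicken order above is exactly the kind of thing that can be computed, not guessed. A minimal sketch follows; the box contents (two legs, a thigh, a breast) come from the story, while the 15-player roster size is an assumption for illustration:

```python
# Pieces per prepared box, from the story: two legs, a thigh, and a breast.
PER_BOX = {"leg": 2, "thigh": 1, "breast": 1}

def pieces_needed(boxes):
    """Exact piece counts required to fill an order of prepared boxes."""
    return {piece: count * boxes for piece, count in PER_BOX.items()}

# Assume a 15-player Little League roster (team size is hypothetical).
order = pieces_needed(15)
print(order)  # {'leg': 30, 'thigh': 15, 'breast': 15}
```

No sixth sense required: multiply the per-box counts by the number of boxes and the answer is exact.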
Earlier, when we didn't have as much data at our fingertips or the kind of computing power we have today, there were not as many opportunities for lotta-chicken moments. For example, a school would generally decide how many advanced middle school math classes to offer, and teachers would recommend enough students to fill them. No one had to figure out how many seats were needed, or how to identify kids who had demonstrated content mastery. Teachers would just select 25 kids they thought should take advanced math. Done.
Knowing what can be known is influenced by a person's experience and skill set. I was watching a murder trial recently, and the defendants didn't know there were cameras in the woods when they dumped the evidence, or that their phones recorded where they had been. They didn't even know they should have paid cash for their supplies; they had charged them on their own credit cards. Here are the three assumption states I find people in; recognizing them helps me understand why they think what they do:
- Data Savvy: Knows something can be known, and either knows how to know it or will find someone who knows.
- Aware: Doesn't know what can be known, but knows to ask someone whether it can be known.
- Thinking Traditionally: Thinks this cannot be known
Skills, experience, and underlying beliefs tend to drive how people make decisions and the processes they use. In our decades of evaluating grant-funded education programs, we have noticed trends. A few examples follow.
Belief: How to Identify Kids to Align Services
In the past, programs would be for "at-risk" kids, or "kids with high potential in something," and educators recommended the kids to participate based on their gut feelings. (This is what they told us they used.) When grant-funded programs and school policies started specifying which kids should be served in terms of academic data, we found educators would simply guess which kids might meet the criteria, and then check to see if they did. We call this the Guess and Check method. It eliminates false positives, but the false negative rate is very high (i.e., all the kids being served meet the criteria, but many kids who meet the criteria aren't being served). In contrast, they could simply run a report from the data and get a list of all the kids who meet the criteria.
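"Running a report" here just means filtering every student record against the stated criterion instead of guessing first. A minimal sketch, in which the student records and the cutoff score are invented for illustration:

```python
# Hypothetical student records; in practice this would come from the
# student information system.
students = [
    {"name": "Avery", "test_score": 96},
    {"name": "Blake", "test_score": 88},
    {"name": "Casey", "test_score": 92},
]

CUTOFF = 90  # the "specific score or higher" named in the criterion (assumed value)

# The report: every student meeting the criterion, with no guessing step.
qualifies = [s["name"] for s in students if s["test_score"] >= CUTOFF]
print(qualifies)  # ['Avery', 'Casey']
```

Because the filter touches every record, no qualifying kid can be missed, which is what eliminates the false negatives that Guess and Check produces.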
Guess and Check Example
A school system told us that every kid who scored at or above a specific cutoff on a test was in an enriched math class. When we pulled the data, ran a list, and compared it with the class rosters, more than half the kids at or above that cutoff were not in the enriched class. They were surprised they had missed kids. They explained that their method for identifying kids who met the criteria was to have teachers suggest who might fit, then pull the files for those kids and check. If a kid met the criteria, they enrolled them. They didn't realize this method would miss half the kids.
Guess and Not Check Example
Probably just as common as the Guess and Check method, if not more so, is Guess and Not Check. It's similar, except you end up with both false positives and false negatives, i.e., many kids receive services who shouldn't and many kids who should receive services don't. We have evaluated countless grant-funded programs where the students to be served were to fit very specific criteria, such as scoring below 22 on the math portion of the ACT and having a GPA below 2.5. Schools tell us these are the criteria they used for enrolling kids in the program. Then we pull the data files of ACT scores and GPAs, merge them, identify the kids who fit the criteria, and compare that list to the program rosters, and they don't match at all. In some cases, more than half the kids who met the criteria were not served, and half the kids served didn't meet the criteria. When we inquire further about how they actually identified kids for the program, they admit they believed that kids enrolled in a specific math class, or in a tutoring program, probably scored below 22 on the ACT and probably had GPAs under 2.5. They didn't need to check because it was so logical to them. They used the Guess and Not Check method.
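The evaluator's check described above can be sketched with set arithmetic. The criteria (ACT math below 22, GPA below 2.5) come from the text; the student IDs, scores, and roster are invented for illustration:

```python
# Hypothetical merged data files keyed by student ID.
act = {"s1": 18, "s2": 25, "s3": 20, "s4": 21}
gpa = {"s1": 2.1, "s2": 2.0, "s3": 3.1, "s4": 2.4}

# The program roster as reported by the school (assumed for illustration).
served = {"s2", "s4"}

# Kids who actually fit the stated criteria: ACT math < 22 AND GPA < 2.5.
meets = {s for s in act if act[s] < 22 and gpa[s] < 2.5}

false_negatives = meets - served  # met the criteria but were not served
false_positives = served - meets  # were served but didn't meet the criteria

print(sorted(meets), sorted(false_negatives), sorted(false_positives))
# ['s1', 's4'] ['s1'] ['s2']
```

With Guess and Not Check, both difference sets come back non-empty; running the merge and comparison is what exposes the mismatch.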
Here are the different assumption states:
- 21st Century Awareness: We can run (on a computer, from a file that contains the information) a list of all kids who meet specific criteria, or find someone who can run this list for us.
- Inefficient and Error Prone: We should think of who meets the criteria, then pull up student files one at a time to check these kids.
- Proxy Thinking: A proxy exists for this specific criterion, so using the proxy (e.g., the math class kids are enrolled in, the bus they ride, or the quality of their mother's handbag) is the same as using the criterion itself. (Don't laugh at the handbag example. One counselor actually told us a kid was in a math class for "at-risk" kids because his mother carried a "knock-off" purse.)
Belief: Cause and Effect
In the pre-NCLB (No Child Left Behind) days, a program would traditionally be designed to deliver some “innovative” service. “Innovative” was a buzzword that got grants approved. Credentialed experts could think of these innovative things. There was no need for a demonstrated link at all between the service and some intended outcome. As evaluators, we just had to document that they did what they said they would. Often, outcomes were survey results about whether people liked the program. Whether or not the program was effective for improving academics or reducing absenteeism (pick your desired outcome) was irrelevant.
One example of an innovative program we evaluated was a husband/wife team who got a dropout prevention grant. They thought it would be innovative to teach some kids how to make YouTube videos. So, they purchased a lot of camera equipment with the grant funds and paid themselves to teach some kids to make YouTube videos. They then judged the kids’ videos and gave them prizes of more time getting lessons from them, paid for by more grant funds. This was funded in the old era and we were hired to evaluate it when the new era had come in. There was no evidence that teaching random kids how to make YouTube videos would reduce the dropout rate.
As we tried to help the next round of grantees write proposals, we had to teach them what it means to provide a service with some demonstrated link between the service and the intended outcome. We found a lot of people did not know what this meant. Some, obviously, didn't want to understand because they profited from not understanding. Others genuinely did not understand. They wanted to do things like give at-risk kids iPads and a place to play basketball so they would be more likely to pass algebra. They believed at-risk kids couldn't pass algebra, and they thought giving them iPads and places to play basketball was innovative. This sort of thing had gotten funded before, so they thought we were the ones who didn't understand.
Many educators don't have access to the research journals, nor the time or expertise to read them. So it was somewhat unreasonable to expect everyone to make this leap on their own. We began summarizing research into "Nuggets" and providing them to educators as they plan programs to achieve certain outcomes. The What Works Clearinghouse is also a good source for best practices.
Here are the different assumption states for the cause/effect beliefs:
- I will use research on what services are associated with intended outcomes for specific target groups.
- We’ve always done innovative things for at-risk kids and gotten funded, and because the funding continued, it must be right. I don’t know what “links to outcomes” means. The at-risk kids must benefit from innovative things.
- We are taking advantage of the huge amounts of money available to provide some random service with no accountability for the kids we served or for the outcomes. We are profiting from poverty, and there is nothing you can do about it because this got funded and this is what we said we were going to do.