The Big Take Away:
Schools can save over 50% on textbook costs without negatively impacting student learning.
The Short Version: Simply substituting open textbooks for proprietary textbooks does not impact learning outcomes.
The Longer Version:
If there’s one thing this project has taught me about data, it’s that they’re messy. I knew this already, but I assumed that with our relatively small year one pilot group (n=7 teachers) things would be easier and cleaner. Instead, what we ended up with was the 2011 CRT (state standardized test) scores for each teacher (these scores are a percentage indicating what proportion of their students demonstrated proficiency on the exam as judged by the state), the 2010 scores for each teacher, and the 2009 scores for only four of the teachers. We had hoped for 2011 plus three years back for every teacher, but some of our teachers are new (no data beyond 2010), some have moved schools (“difficult” to get data beyond 2010), etc. Fortunately there doesn’t seem to be any hidden systematicity to our missing data.
So what did we find? Table 1 gives raw scores.
| Year | Teacher T | Teacher U | Teacher V | Teacher W | Teacher X | Teacher Y | Teacher Z |
|------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| 2009 | 64 | N/A | 54 | 59 | 100 | N/A | N/A |
| 2010 | 69 | 62 | 44 | 59 | 99 | 88 | 89 |
| 2011 | 61 | 61 | 58 | 82 | 100 | 83 | 85 |
There are two straightforward ways of asking these data whether substituting open textbooks for proprietary ones had an impact on student learning.
The first, noisier way is simply to subtract each teacher’s 2010 score from their 2011 score (remember, these scores are the percentage of students achieving proficiency). When we do that, we get a distribution that looks like this: -8% (i.e., 8% fewer students achieving proficiency), -5%, -4%, -1%, +1%, +14%, +23%. The mean of this distribution is +2.86% and the median is -1%. By either measure of central tendency, there is almost nothing happening in these data.
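For readers who want to check the arithmetic, here is a minimal sketch of this first method in Python. The scores are copied from Table 1; the dictionary layout and variable names are my own illustration, not the project’s actual analysis code.

```python
from statistics import mean, median

# Proficiency percentages from Table 1, keyed by teacher label.
scores_2010 = {"T": 69, "U": 62, "V": 44, "W": 59, "X": 99, "Y": 88, "Z": 89}
scores_2011 = {"T": 61, "U": 61, "V": 58, "W": 82, "X": 100, "Y": 83, "Z": 85}

# Method 1: simple year-over-year change in proficiency for each teacher.
diffs = [scores_2011[t] - scores_2010[t] for t in scores_2011]

print(sorted(diffs))           # [-8, -5, -4, -1, 1, 14, 23]
print(round(mean(diffs), 2))   # 2.86
print(median(diffs))           # -1
```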
The second, slightly more stable way of looking for an impact on student learning is to subtract from each teacher’s 2011 score either the average of their 2009 and 2010 scores (when both are available) or their 2010 score alone. This gives a slightly better picture of what each teacher’s “true” baseline was. This method provides the following distribution: (-5.5%, -5%, -4%, -1%, +0.5%, +9%, +23%), which has a mean of +2.43% and a median of -1%. Again, there is nothing to see here, folks. These aren’t the droids you’re looking for. You can go about your business. Move along.
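The same caveat applies to this sketch of the second method: the numbers come straight from Table 1, and the code is only illustrative.

```python
from statistics import mean, median

# Proficiency percentages from Table 1; 2009 is only available for four teachers.
scores_2009 = {"T": 64, "V": 54, "W": 59, "X": 100}
scores_2010 = {"T": 69, "U": 62, "V": 44, "W": 59, "X": 99, "Y": 88, "Z": 89}
scores_2011 = {"T": 61, "U": 61, "V": 58, "W": 82, "X": 100, "Y": 83, "Z": 85}

# Method 2: the baseline is the 2009/2010 average when a 2009 score exists,
# and the 2010 score alone otherwise.
diffs = []
for teacher, score_2011 in scores_2011.items():
    if teacher in scores_2009:
        baseline = (scores_2009[teacher] + scores_2010[teacher]) / 2
    else:
        baseline = scores_2010[teacher]
    diffs.append(score_2011 - baseline)

print(sorted(diffs))           # [-5.5, -5, -4, -1, 0.5, 9.0, 23.0]
print(round(mean(diffs), 2))   # 2.43
print(median(diffs))           # -1
```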
As a side note, I should point out that we’re keenly interested in what the teachers who saw 23% and 14% jumps did with their open textbooks last year. One of these teachers told me, “the better students write in their textbooks more.” If this casual observation turns out to be true, and this particular change in pedagogy can be propagated broadly, perhaps we can see wide increases in proficiency scores. We’re looking more closely at exactly what students are doing with their books this year (with 20+ teachers in this year’s group).
We’ll be running more sophisticated analyses next year with the larger data set, and we’ll keep collecting data for a few years to come to improve the stability of the findings, but for a first-year pilot this is a fabulous outcome. The implications for students, schools, and districts are “large” indeed. A more formal writeup to come.