Colleagues and I used US Census data to predict state test results in mathematics and language arts as part of various research projects we have been conducting over the last three years. Specifically, we predicted the percentage of students at the district and school levels who score proficient or above on their state’s mandated standardized tests, without using any school-specific information such as length of school day, teacher mobility, computer-to-student ratio, etc.
We use basic multiple linear regression models, along with factors in the US Census data that relate to community social capital and family human capital, to create predictive algorithms. The percentage of lone-parent households in a community and the percentage of people in a community with a high school diploma are two community social capital indicators that seem to be strong predictors of the percentage of students in a district or school who will score proficient or above. The percentage of families in a community with incomes under $25,000 a year is an example of a family human capital indicator with a lot of predictive power.
In all, our regression models begin with about 18-21 different indicators. We refine the models and usually end up with the 2-4 indicators that demonstrate the greatest predictive power. Then we enter those indicators into an algorithm that most fourth-graders, with an understanding of order of operations, could construct and calculate. Not complicated stuff.
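To make the approach concrete, here is a minimal sketch of that kind of model in Python. The indicator names, district data, and coefficients below are entirely hypothetical; they simply illustrate how a pruned multiple linear regression reduces to the weighted-sum-plus-intercept arithmetic described above.

```python
import numpy as np

# Hypothetical district-level Census indicators (one row per district):
# [% lone-parent households, % adults with HS diploma, % families < $25k/yr]
X = np.array([
    [12.0, 88.0, 10.0],
    [25.0, 75.0, 22.0],
    [ 8.0, 93.0,  6.0],
    [30.0, 70.0, 28.0],
    [18.0, 82.0, 15.0],
])
# Observed % of students scoring proficient or above in each district
y = np.array([78.0, 55.0, 86.0, 48.0, 68.0])

# Ordinary least squares fit: add an intercept column, then solve
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The fitted model is just simple arithmetic:
# predicted % proficient = b0 + b1*x1 + b2*x2 + b3*x3
new_district = np.array([15.0, 85.0, 12.0])
prediction = coef[0] + coef[1:] @ new_district
print(round(prediction, 1))
```

With only a handful of strong indicators, the final "algorithm" really is just multiplication and addition, which is the point the authors make about fourth-grade order of operations.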
Our initial work at the 3rd-8th and 11th grade levels in NJ, and grades 3-8 in CT and Iowa, has proven fairly accurate. Our prediction accuracy ranges from 62% to over 80% of districts in a state, depending on the grade level and subject tested.
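The text does not spell out what counts as an "accurate" district prediction. One plausible operationalization, sketched below with made-up numbers, is a tolerance band: a district is counted as accurately predicted when the predicted percentage proficient falls within some fixed number of percentage points of the actual result. The ±5-point tolerance here is an assumption for illustration, not the authors' published criterion.

```python
import numpy as np

# Hypothetical actual and predicted % proficient for five districts
actual    = np.array([62.0, 75.0, 48.0, 90.0, 55.0])
predicted = np.array([65.0, 70.0, 60.0, 88.0, 57.0])

# Assumed criterion: prediction within +/- 5 percentage points of actual
tolerance = 5.0
accurate = np.abs(actual - predicted) <= tolerance

# Fraction of districts predicted accurately
accuracy_rate = accurate.mean()
print(f"{accuracy_rate:.0%} of districts predicted accurately")
```

Under this definition, a statewide figure like "80% of districts" would mean the model landed inside the band for 80% of the districts in the sample.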
In one study, soon to be published in an education policy textbook co-edited with Carol Mullen, Education Policy Perils: Tackling the Tough Issues, I predicted the percentage of students in grade 5, at the district level, who scored proficient or above on New Jersey’s former standardized tests, the NJASK, in mathematics and language arts for the 2010, 2011, and 2012 school years, for the almost 400 school districts that met the sampling criteria for inclusion in the study.
For example, I accurately predicted the percentage of students at the district level who scored proficient or above on the 2011 grade 5 mathematics test in 76% of the 397 school districts, and predicted accurately in 80% of the districts for the 2012 language arts tests. The percentage of families in poverty and the percentage of lone-parent households in a community were the two strongest predictors in the six models I created for grade 5 for the years 2010-2012.
Colleagues and I predicted the percentages of students scoring proficient or above for grades 6, 7, and 8 during the 2009-2012 school years as well. For example, we predicted accurately for approximately 70% of the districts on the 2009 NJ mathematics and language arts tests. Recently, another colleague and I predicted the grade 8 NJ mathematics and language arts percentages proficient or above for over 85% of the almost 400 districts in our 2012 sample.
The results from Connecticut and Iowa are similar, with accurate predictions in CT on all tests grades 3-8 ranging from approximately 70% to over 80%. The Iowa predictions were accurate in approximately 70% of the districts.
Being a “rich” district or a “poor” district had no bearing on the results; we accurately predicted scores for “rich” and “poor” alike. The details will be published in upcoming books and journals, so stay tuned.
The findings from these and other studies raise serious questions about using results from state standardized tests to rank schools or to compare them with other schools in terms of standardized test performance. Our forthcoming series of school-level studies at the middle school level produced similar findings, raising further questions about the appropriateness of using state test results to rank or evaluate teachers, or to make any potentially life-impacting decisions about educators or children.