Saturday, January 24th, 2015

Some people have tried to categorize me as anti-standards and anti-testing. I am not “anti” standards and testing; I am pro-evidence. The empirical evidence simply does not support the use of one-size-fits-all curriculum standards and high stakes testing as effective tools to improve the education and life outcomes of over 56 million public school students in the third most populous country on the planet. The evidence does not support the notion that higher levels of creativity, innovation, entrepreneurship, and other skills that transcend time and subject matter can be achieved through standardization.

As Don Orlich and I explain in Chapter 9 of our book, The School Reform Landscape: Fraud, Myth, and Lies, standards and testing programs should be mainly locally developed, designed to meet the needs of the community’s students and informed by the best available evidence. However, state education bureaucrats in New Jersey and other states have put their faith in untested standardization programs. A centralized command-and-control structure pervades many of the standards and high stakes testing initiatives currently imposed on public school children, their parents, and educators.


The leading state education bureaucrats in New Jersey, and some of their supporters, do not seem to understand the complex nature of human development, classroom instruction, learning, or educating the whole child. In fact, most of their proposals and claims about standardization programs like Common Core and PARCC testing lack high quality research support.

In this writing I argue that the major claims being made to the public about the quality and usefulness of the PARCC tests as teaching and learning tools in New Jersey are not accurate.

Claim 1

One claim made often to support PARCC testing is that New Jersey has had curriculum standards and state testing for a long time, and that the idea behind PARCC testing is not new. Although this statement is somewhat true, the vendors of this claim never tell the public that the original New Jersey Core Curriculum Content Standards, adopted in 1997, and the corresponding state tests were not born out of some impressive collection of education research, nor developed voluntarily by educators aiming to provide a world-class education to New Jersey’s children.

The original set of state standards and state-mandated standardized tests was imposed by the Whitman Administration as part of a lawsuit, known as Abbott v. Burke, over New Jersey’s school funding formula. Her administration was charged with determining how much a “thorough and efficient” education would cost in New Jersey to meet the mandates in the Comprehensive Educational Improvement and Financing Act (known as CEIFA). To do that, she tasked the New Jersey Department of Education (NJDOE) with creating a set of curriculum standards to define the “thorough and efficient” mandate in the state constitution so her administration could put a price tag on education in New Jersey to support her suggested state funding levels. Thus, New Jersey’s first set of mandated curriculum standards and high stakes tests was created to satisfy legal and political mandates, not for educational reasons.

The original state-mandated standardized tests given in grades 4, 8, and 11 were designed to be monitoring devices, not diagnostic learning instruments. In fact, one of the assistant commissioners in charge of testing admitted in 2014 that the state-mandated tests most recently used, known as NJASK and HSPA, did not provide useful information to inform teaching or policymaking (see here), yet New Jersey has spent hundreds of millions of dollars since 1997 making children take them.

NJDOE officials also stated that the original New Jersey curriculum standards in language arts and math were supposedly of low quality, but that did not stop the department from mandating those standards from 1997 until 2010, when it adopted the Common Core State Standards in language arts and mathematics. Keep in mind that New Jersey signed onto the Common Core before the standards were even finished, adopted them about a month after the final set came out, and was an original supporter of the PARCC tests before the tests were even developed.

So, the original set of state standards and assessments launched in 1997, now used to legitimize the latest iteration of standardization and testing, was created to satisfy a court order to put a price tag on education, not to improve the education experiences of our children. Those original standards and tests were later deemed ineffective and of low quality by the NJDOE, the same organization that mandated them. Why should I feel better about PARCC and Common Core just because a state bureaucrat, or the leader of a taxpayer-funded school board or school administrator association, tells me we have had state-mandated standards and testing for a long time, when the original set of standards and tests was broken and built from an economic viewpoint, not an educational one?

Actually, this entire “new and improved” system of standardization and testing is reminiscent of something tried in this country over 100 years ago: the Lancasterian System. The Lancasterian System of education was based on the factory model, standardization, monitoring, and lowering the cost of education. The system failed miserably and is the butt of many jokes in the curriculum research community. Why we would want to return to that type of system is a mystery to me.

Claim 2

Another claim that lacks empirical support is that the results from the PARCC tests will be diagnostic and tell us important things about student learning and the quality of the teaching that our children receive.

This claim is not true. To diagnose a student’s achievement at the individual level in any one skill, the test results must have reliability figures of around .80 to .90. To attain that level of reliability there must be about 20-25 questions per skill (Frisbie, 1988; Tanner, 2001). Now keep in mind there are multiple skills embedded in each Common Core standard, so the PARCC tests would need hundreds of questions just to fully assess a few standards. The tests do not have enough questions to diagnose student achievement at the individual level in any of the skills or standards. Thus, any “diagnostic” decisions made from PARCC results about a student’s understanding of specific standards will be potentially flawed.
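The relationship between the number of questions and score reliability can be sketched with the standard Spearman-Brown prophecy formula from the measurement literature. This is only an illustration: the per-item reliability value of 0.17 below is a hypothetical assumption, not a figure from the PARCC tests or from Frisbie (1988).

```python
def spearman_brown(r1, k):
    """Spearman-Brown prophecy formula: reliability of a score built
    from k parallel items, given single-item reliability r1."""
    return k * r1 / (1 + (k - 1) * r1)

# With a hypothetical per-item reliability of 0.17, score reliability
# only reaches the .80-.90 range at roughly 20-25 items per skill.
for k in (5, 10, 20, 25):
    print(f"{k:>2} items -> reliability {spearman_brown(0.17, k):.2f}")
```

A test section covering one skill with only a handful of items, under this assumption, yields reliability far below the .80 threshold needed for individual-level diagnosis.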

Another issue is the time frame in which the “diagnostic” results will be received. NJDOE officials stated that results should be returned to districts by around August or September under the best circumstances. Let’s give them the benefit of the doubt and say July. So what? How is that helpful?

I think about it like this: When you go to the doctor with an ailment, you usually do not have to wait three, four, or even five months for a diagnosis. The doctor performs her formative assessment of you and issues a diagnosis, or at least a tentative one, on the spot, and provides an initial intervention with some follow-up testing as soon as possible. But the PARCC results from the March and April/May administrations won’t be returned to districts until September or October of the next school year.

In addition, the Common Core is set up so that it is very difficult for teachers to go back and reteach content from last year, because there is so much content to cover each year. Thus, getting results from the PARCC months after the administration of the test will do nothing to aid remediation or close achievement gaps. The standards are mastery standards. Students are supposed to master every standard every year. There is no room in the curriculum to go back and spend a lot of time on last year’s curriculum. To do that, students will have to miss some of the current year’s curriculum.

As I mentioned in my Halloween post (here), parents, teachers, and students will not be able to see every question from the PARCC tests. Officials from the NJDOE said it is cost prohibitive to release all the items. They will make “some” items available. But for a parent or teacher, “some” items will only tell “some” of the information needed to make a thorough determination about the strengths and challenges faced by individual students. It would be as if your child’s classroom teacher sent home only 10% or 20% of the questions from the weekly tests along with your child’s grade. How does that help inform instruction?

Regardless, top officials from the NJDOE already admitted publicly that the PARCC tests are not diagnostic; they are summative. Readers can watch an exchange in which an assistant commissioner explains that the test is not diagnostic. Follow the link and watch from the 1hr. 05 mins. time frame.

Although the results are not diagnostic, NJDOE officials are publicly telling parents and educators that teachers should teach to the test. That is a stunning statement because it is an admission that public school has been reduced to test preparation and that the bureaucrats in the NJDOE see test preparation as the hallmark of “thorough and efficient” education.
Watch here at 1hr. 16mins. 54 secs. as an NJDOE official tells the audience that teachers should teach to the test.

Claim 3

Sadly, results from standardized tests most often tell us more about the family and community economics in which a student lives than about how much a student knows or can do. MUCH OF A STUDENT’S STANDARDIZED TEST SCORE CAN BE EXPLAINED BY FACTORS OUTSIDE OF THE CONTROL OF SCHOOLS. Colleagues and I have been able to predict results from previous standardized tests in New Jersey, Connecticut, Michigan, and Iowa with a good deal of accuracy.

Through a series of cross-sectional and longitudinal studies completed in New Jersey and other states since 2011, my colleagues and I have begun the process of demonstrating the predictive accuracy of family and community demographic variables in Grades 3-8 and high school.

For example, in New Jersey our best models predicted the percentage of students scoring proficient or above on the former Grade 6 NJASK tests in 70% of the districts for the language arts portion of the test and in 67% of the districts for the math portion in our sample of 389 school districts. We accurately predicted the percentage of students scoring proficient or above on the Grade 7 language arts tests for 77% of the districts and 66% of the districts in math for our statewide sample of 388 school districts (Tienken, 2014). We have had similar results in grades 3-8 and 11 in NJ and other states.
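The general approach, fitting district-level proficiency rates against demographic variables with ordinary least squares, can be sketched as follows. This is a minimal illustration on synthetic data, not the models or dataset from Tienken (2014); the variable names (poverty rate, bachelor’s-degree attainment), coefficients, and the 5-point accuracy band are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic district-level data (hypothetical, for illustration only):
# percent of families in poverty and percent of adults with a bachelor's degree.
n = 389
poverty = rng.uniform(2, 40, n)
bachelors = rng.uniform(10, 70, n)
# Assume proficiency tracks demographics plus noise, as the studies report.
proficient = 80 - 0.9 * poverty + 0.3 * bachelors + rng.normal(0, 5, n)

# Ordinary least squares: proficiency ~ poverty + bachelors + intercept
X = np.column_stack([poverty, bachelors, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, proficient, rcond=None)
pred = X @ coef

# Share of districts whose proficiency rate is predicted within 5 points.
hits = np.mean(np.abs(pred - proficient) <= 5)
print(f"predicted within 5 points in {hits:.0%} of districts")
```

The point of the exercise is that when out-of-school demographic variables alone reproduce district results this well, the test scores add little new information about school quality.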

Claim 4

Yet another claim being made about PARCC is that the results will tell parents, students, teachers, and the public whether students in grades 3-8 and high school are college and career ready. That is an amazing claim, especially given that the SAT cannot predict very accurately which students will do well in their first year of college and beyond. In fact, a student’s high school GPA is generally a more accurate predictor of first-year college success and completion, yet bureaucrats and some school administrators claim PARCC will provide better information than the SAT or GPA for New Jersey’s public school children and parents.

If PARCC results can determine college readiness, then why don’t any four-year colleges accept the results for admissions purposes in lieu of SAT or ACT scores? See here at approximately 1 hr. 47 mins. 36 secs. as an NJDOE bureaucrat admits that no four-year college accepts PARCC scores for admissions purposes.

Should students who score proficient on the PARCC but are not ultimately accepted to the college of their choice be allowed to sue PARCC or the NJDOE for false advertising?

Claim 5

A final point worth making is that the PARCC tests are simply measuring 19th century skills with a 20th century tool (the computer). That is because the PARCC tests are aligned to the Common Core State Standards. Those standards mandate knowledge and skills that are not much different from what students have received for the last 150 years. However, some speak of the Common Core and the PARCC tests as if they are some revolutionary step forward in education.

A lot has been made of the Common Core being more rigorous, but those claims most often come from one privately funded report by a pro-Common Core think tank. Sure, the Common Core State Standards include verbs like “analyze” in some of the standards, but when one reviews the Standards closely, one notices that students are analyzing for one right answer – hardly divergent, creative, innovative, or open-ended thinking.

In fact, much of the Core Standards and ALL of the PARCC questions require students to find one best answer. The PARCC tests attempt to achieve the claim of increased rigor by inflating the complexity of the questions through the use of contrived directions and hard to follow tasks.

Can’t Take the Teacher out of Teaching

It seems as if some state education bureaucrats and so-called education reformers are trying to “teacher-proof” assessment through the use of standardized tests like PARCC. You can’t take the teacher out of the assessment equation. The teacher cannot be replaced by a machine or a “canned” assessment program. Over time, teacher assessments provide more detailed and actionable information than standardized tests. Teacher assessments result in less time spent on “test-prep” and more time spent on learning. Teacher assessments employ an approach known as “assessment for learning” whereas high stakes standardized tests rest on a mechanistic foundation of “assessment of learning” akin to weighing children instead of feeding them.

Large-scale projects like the New York Performance Assessment Consortium and the former Nebraska STARS statewide assessment program provide blueprints of how to balance accountability with authentic learning and assessment without inundating children and teachers with standardized tests.

We have the ultimate assessment system already working in our classrooms. It’s called the teacher. Let’s invest in developing teachers’ assessment skills instead of spending millions of dollars on tests that do not tell us anything new about our children.


Frisbie, D.A. (1988). Reliability of scores from teacher-made tests. Educational Measurement: Issues and Practice, 7(1), 25-35.

Tanner, D.E. (2001). Assessing academic achievement. Boston, MA: Allyn and Bacon.

Tienken, C.H. (2014). State test results are predictable. Kappa Delta Pi Record, 50(4), 154-156.

Christopher H. Tienken ©2012 Copyright. All Rights Reserved.