It’s Halloween, so that means it’s time for a spooky, scary message from the NJDOE about the evils of opting out of PARCC testing and how school superintendents need to put on their scary faces when parents bring up the subject. NJDOE Education Commissioner David Hespe distributed a memo on Oct. 31, 2014, “clarifying” some things about opting out of PARCC testing. The major themes were: don’t allow it, and scare parents with discipline and attendance policies (see here: http://bit.ly/1xI8cE5).
The Commissioner is clearly telling superintendents that they need not worry about student-centered responses to opting out. They don’t need to question the efficacy of this testing program. They don’t need to provide alternative settings for children. That leaves only “sit and stare” and discipline referrals as the de facto responses by those school administrators who choose to follow instead of lead. Will professional educators who lead school districts and their board of education officials really follow an educationally bankrupt suggestion? Will they not allow children to participate in learning activities such as sustained silent reading, writing, or other independent education opportunities in lieu of sitting and staring each day? We will see.
Will the professional organizations like the New Jersey Principals and Supervisors Association, the New Jersey Association of School Administrators, and the New Jersey School Boards Association go along with this mistreatment of students? Only time will tell.
The Commissioner made a very interesting claim: The PARCC will provide diagnostic information about individual students. In the last paragraph of the memo, the Commissioner wrote, “The PARCC assessments will, for the first time, provide detailed diagnostic information about each individual student’s performance that educators, parents and students can utilize to enhance foundational knowledge and student achievement.” The only problem with that statement is that it is not really true.
To diagnose a student’s achievement, at the individual level, on any one skill or standard, the test must have reliability figures around .85 to .95. To attain that level of reliability, there must be about 25 questions per skill or standard (Frisbie, 1988; Tanner, 2001). So, for example, to adequately and accurately diagnose a student’s attainment of CCSS mathematics standard 4.OA.A1, “Interpret a multiplication equation as a comparison, e.g., interpret 35 = 5 × 7 as a statement that 35 is 5 times as many as 7 and 7 times as many as 5. Represent verbal statements of multiplicative comparisons as multiplication equations,” there should be around 25 questions on the PARCC related to that standard. That simply will not happen. So any “diagnostic” decisions made about a student’s understanding of specific standards will be potentially flawed.
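The link between the number of items and reliability can be illustrated with the Spearman-Brown prophecy formula, a standard psychometric result (the ~25-item figure above comes from Frisbie and Tanner; the 3-item cluster below is a hypothetical for illustration, not PARCC’s actual test blueprint):

```python
def spearman_brown(reliability_full: float, length_factor: float) -> float:
    """Predict the reliability of a test whose length is scaled by
    length_factor, given the reliability of the full-length test
    (Spearman-Brown prophecy formula)."""
    return (length_factor * reliability_full) / (
        1 + (length_factor - 1) * reliability_full
    )

# Assumption (illustrative numbers): a 25-item subtest devoted to one
# standard achieves reliability 0.90. If a broad test can spare only
# 3 items for that standard, the length factor is 3/25.
r_full = 0.90
r_cluster = spearman_brown(r_full, 3 / 25)
print(f"reliability of a 3-item cluster: {r_cluster:.2f}")  # about 0.52
```

Under those assumed numbers, a 3-item cluster lands around .52 reliability, far below the .85 to .95 range needed for individual diagnosis.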
Another issue is the time frame in which the “diagnostic” results will be received. Think about it like this: When you go to the doctor with an ailment, you usually do not have to wait 3, 4, or even 5 months for a diagnosis. The doctor performs her formative assessment of you and issues a diagnosis, or at least a tentative diagnosis, on the spot and provides an initial intervention. But the PARCC results from the March and April/May administrations won’t be returned to districts until September or October of the next school year. On top of that, the Common Core is set up so that it is very difficult for teachers to go back and reteach content from the previous year, so getting results months after the administration of the test will do nothing to aid remediation or close achievement gaps.
Also, keep in mind that parents, teachers, and students will not be able to see every question from the test. The NJDOE said it is cost-prohibitive to release all the items. They will make “some” items available. But for a parent or teacher, “some” items will only tell “some” of the information needed to make a thorough determination about the strengths and challenges faced by individual students. It would be as if your child’s classroom teacher only sent home 10% or 20% of the questions from the weekly tests along with your child’s grade from the test. How does that help?
It goes without saying that there is over 100 years of evidence that demonstrates that commercially prepared standardized tests are influenced too much by out-of-school factors to provide important results. The results we receive tell us more about the child’s home life and neighborhood than what he or she is capable of as a human being. Colleagues and I have spent the last several years using US Census Data to PREDICT the test results on every NJ mathematics and language arts test in most grade levels administered since 2010. We just completed the same research in Connecticut. We have been able to predict the percentage of students scoring proficient or above in a majority of the school districts in NJ and CT using only community and family census data (Tienken, 2014).
Change the Paradigm
Too much emphasis is being placed on these commercially prepared standardized tests and too much taxpayer money is being spent. The teacher is still the best assessment tool because classroom assessments are formative (immediate) in nature, and over time they provide a cumulative, running record of achievement that is more reliable than any standardized test. Maybe that is why high school GPA is a better predictor of first-year college success and overall college persistence than the SAT when controlling for wealth characteristics of the students (Atkinson & Geiser, 2009).
Of course standardized tests can be used as part of a comprehensive assessment system, but they should not make up the majority of the system, nor should their results be used as the deciding factor in making important decisions about students and educators, as is being done in NJ. Assessment practices should help to paint a vibrant picture of the whole child, using a “scrapbook” approach. Quantitative and qualitative methods should be used. Results from local, teacher-made assessments should receive more emphasis than a commercially prepared test because, over time, their results are more reliable.
We don’t need PARCC or any other corporate test. We have the ultimate assessment tool in the classroom: The teacher. The NJDOE bureaucrats should trust them to do their jobs. I trust them with my children.
If the NJDOE bureaucrats would only listen to stakeholders, we could all develop a system that is diagnostic, student-centered, and includes evidence-based accountability principles.
Atkinson, R.C., & Geiser, S. (2009). Reflections on a century of college admissions tests. Educational Researcher, 38(9), 665-676.
Frisbie, D.A. (1988). Reliability of scores from teacher-made tests. Educational Measurement: Issues and Practice, 7(1), 25-35.
Tanner, D.E. (2001). Assessing academic achievement. Boston, MA: Allyn and Bacon.
Tienken, C.H. (2014). State test results are predictable. Kappa Delta Pi Record, 50(4), 154-156.