Questions about the Questions about the Questions: Wandering in the Dark


Especially in academically struggling districts, our students are now being directly prepared for state tests. What are the costs of almost purely test-focused preparation? What non-test content is being jettisoned to make this happen? In the past, teachers took time to explore personal passions with their students. They might spend a few extra days on the First World War or add a lesson on Japan's Tokugawa shogunate. Time spent off or adjacent to the curriculum nonetheless produced test scores suggesting that students learned as much as or more than they do now, despite teachers' deviations.

We don’t know how well teaching directly to the test works. We cannot measure the costs and benefits of a state-test-focused approach to education because we cannot know the results alternative strategies might have produced. We can’t even compare past tests with current tests: past teaching placed far less weight on the test, so testing conditions then and now are too different to allow valid comparisons. How would our students have tested if we had provided a more student-focused education for them today? Or a more test-based educational experience in the past? Teaching to the test removes the focus from students and puts that focus on an annual measuring instrument instead.

Once, educational leaders chose curricula based on what they believed students needed to know for their future. The annual spring test was intended to measure student progress, but its content did not define student progress. In some schools, students might study additional classical topics such as rhetoric, logic, and grammar, as well as Latin, Roman and Greek history, and classics of American literature that have now been replaced by easier reads. In Common Core states, those easier reads tend more often to be nonfiction than in the past.

For the prodigious amount of time and effort we are now putting into testing, we know little about how today’s students compare to students from forty or fifty years ago. Thanks to the Common Core Initiative, we may never be able to even approximate that data. The new PARCC and Smarter Balanced tests, along with other state tests that are being rewritten to match the Common Core, ensure that we cannot compare today’s apples to yesterday’s apples. Students are taking significantly different tests. They are also taking those tests differently, as computers replace paper and pencil. These changes in testing content and testing instruments effectively eliminate our ability to compare test results over time.


If a million students took a test in 1975, and a million students took the same or a truly similar test in 2005, we could comb our data (assuming we had saved enough of that data) to compare educational results for 1975 and 2005. We could say that Nebraska’s students had answered 67% of a section’s math questions correctly in 1975 and only 52% in 2005. (I made those numbers up for purposes of illustration.) When the same test is employed over time, results can be compared over time. Questions that were changed over time can be eliminated from the analysis, provided the remaining questions still constitute a sample large enough to support comparison.
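The shared-item comparison described above can be sketched in a few lines of code. Everything here is hypothetical: the question IDs, the correct/incorrect values, and the resulting percentages are stand-ins, just like the made-up Nebraska numbers in the paragraph.

```python
# Illustrative sketch only: compare two cohorts on the subset of questions
# that appeared on both test forms. All data below is invented.

def percent_correct(responses, shared_items):
    """Percent of the shared items answered correctly, on a 0-100 scale."""
    answered = [responses[q] for q in shared_items]
    return 100.0 * sum(answered) / len(answered)

# Hypothetical per-question results (True = answered correctly) for one
# representative student per cohort; a real analysis would aggregate
# responses over millions of students.
cohort_1975 = {"q1": True, "q2": True, "q3": False, "q4": True, "q5_old": True}
cohort_2005 = {"q1": True, "q2": False, "q3": False, "q4": True, "q5_new": False}

# Only questions present on both forms are comparable; changed questions
# (q5_old / q5_new here) are dropped from the analysis.
shared = sorted(cohort_1975.keys() & cohort_2005.keys())

print(shared)                                # ['q1', 'q2', 'q3', 'q4']
print(percent_correct(cohort_1975, shared))  # 75.0
print(percent_correct(cohort_2005, shared))  # 50.0
```

The point of the sketch is the filtering step: once the two forms share a large pool of identical items, the percentages are directly comparable; once they share almost nothing, as with the new Common Core tests, no such filtering can rescue the comparison.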

Once our students started taking the PARCC test instead of previous tests, our ability to compare student performance over time became immensely more complex. We are no longer comparing apples to slightly different apples; we are comparing apples to watermelons, or even shellfish. With the new emphasis on critical thinking and scenario-based problems, we may be testing different student attributes as well as different test content.

Eduhonesty: I can’t help but be struck by the extreme irony here. At the very moment education has become heavily about data and numerical results, governmental requirements and attempts to improve education are, whether by accident or design or both, destroying our ability to compare student learning data over time.