Test acronyms

What is MAP testing? MAP, or the “Measures of Academic Progress,” is a computerized, adaptive test. As students get answers right or wrong, the test moves up and down the learning curve: when the student answers correctly, the questions become more difficult; when the student makes a mistake, the questions become easier. Because the computer adjusts the questions as the test progresses, each student receives a test tailored to that student’s responses. MAP uses the RIT scale (Rasch unIT) to chart a child’s academic growth from year to year, and provides estimated RIT values for different grade levels.
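For the curious, here is a minimal sketch, in Python, of the up-and-down logic described above. It is only an illustration: the fifty-fifty answer simulation, the starting value, and the step size are invented for this example, and NWEA's actual item selection is far more sophisticated than this simple rule.

```python
# A minimal sketch (NOT NWEA's actual algorithm) of adaptive difficulty:
# step up after a correct answer, step down after a mistake, and report
# the level where the student settles as a rough score.

import random

def run_adaptive_test(num_questions=20, start_difficulty=200, step=10):
    """Simulate an adaptive test on a hypothetical RIT-like scale."""
    difficulty = start_difficulty
    for _ in range(num_questions):
        # Placeholder for the student's response; a real test would draw
        # an item of the current difficulty from a calibrated item bank.
        answered_correctly = random.random() < 0.5
        if answered_correctly:
            difficulty += step   # correct answer -> harder questions
        else:
            difficulty -= step   # mistake -> easier questions
    return difficulty            # the level the student settled at

print(run_adaptive_test())
```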

We are MAP testing three times this year, in sessions that assess language arts and mathematics. The tests are not specifically timed, but run around an hour in length. If students do not finish in a given session, the test can be paused so they can finish later. In general, students throughout the country repeat the tests three times in a given school year. MAP is a benchmark test, intended to assess student growth so that teachers can adapt instruction as needed.

I like the MAP test more than any of the other standardized tests that I am forced to give. I get nearly immediate feedback from this test, with useful breakdowns that theoretically would allow me to target instruction. My ability to take advantage of this information has been hindered this year by the communal lesson plan that I am required to follow, the tests that I must give at certain times and am not allowed to adapt (they plan adaptations next year, since identical tests for everyone worked about as well as forcing everyone to wear size-seven shoes), and the materials that I am expected to use because everyone else is using them. Still, MAP represents a real testing win for me because I learn where my students are deficient. I can at least try to catch them up during tutoring or in small groups.

(Once again, in defense of the district, students at the top of the MAP distribution have in many cases benefited significantly from our freight train of a communal lesson plan. The students at the bottom have not fared as well, though, and have had a scary, rough year.)

A Student Learning Objective (SLO) is another test, a content-specific exam aligned to curricular standards. As part of the SLO process, teachers give their classes tests covering material they intend to teach over a quarter or semester; most of that material has not yet been taught. Later, often at term’s end, the identical SLO test is administered a second time, and the improvement from test to test is factored into teachers’ final evaluations. SLOs came from a well-intentioned Illinois law. Unfortunately, as Thomas Edison once observed, “A good intention, with a bad approach, often leads to a poor result.” The SLO test will be my next post.

What is PARCC? PARCC stands for the Partnership for Assessment of Readiness for College and Careers. This test has replaced previous standardized tests in 13 states. As of 2010, 23 states and the District of Columbia were part of the PARCC consortium, but states have been backing out since that time. Illinois remains in the consortium, however, and the Illinois Standards Achievement Test (ISAT) is gone, replaced by PARCC. While paper versions of the PARCC test exist, these new tests are essentially computerized. PARCC tests are aligned to the new Common Core standards, which stress reading comprehension and critical thinking. The test does not yet have a “passing” score, which keeps the pressure off school districts this year; after enough student results have been analyzed, PARCC intends to develop a grading scale. Many districts and parents have been attempting to opt out of the PARCC test due to concerns about the appropriateness of various Common Core standards. Chicago tried to pull out, but changed its position because of the government $$$ the city stood to lose if the test was not administered.

AIMSWEB is another benchmark test, like MAP, although it tests basic skills rather than specific content areas. AIMSWEB checks reading and language skills, along with math computation, concepts, and applications. The test requires little student time to administer, but vastly more teacher time than MAP. MAP provides me with results; I have to grade AIMSWEB, and, frankly, this week AIMSWEB grading has become the albatross hanging around my neck. When I quit this post, I expect to spend the rest of the evening grading AIMSWEB papers. Pluses for AIMSWEB include the limited loss of student time and the paper-and-pencil administration. AIMSWEB provides fast, useful feedback once you get through all the cursed grading. I’m sure it makes lots of money for Pearson, too.

Eduhonesty: Let’s note that the test industry makes oceans of dollars for a number of participants. That may be one reason why testing has taken off recently. As the popular catchphrase goes, follow the money. An insane amount of money must be in play.