When my children were little, attending a conference or visiting colleges meant bringing something home for each of them. My daughter was generally happy with a stuffed animal from the college bookstore, whereas my son’s tastes were more eclectic. When I visited colleges in upstate New York, he asked me to look for a child-sized suit of armor. I was unsuccessful, but did find one a couple of years later, as part of an exhibit at the Art Institute of Chicago.


He also went through a phase where he collected College Board memorabilia.  That was cheaper and easier to find than a suit of armor, but I was nevertheless relieved when he grew out of that.


The best piece of College Board swag ever was the slide-rule-like score converter produced back in the 1990s when the SAT was re-centered. I thought about that score converter yesterday when the College Board finally released a concordance comparing scores on the new version of the SAT with those on the old version.


You read it here first, or could have. Back in December, when PSAT scores came out and the furor focused on the difficulty of getting scores, I pointed out that the larger issue no one had addressed was that scores for my students were significantly higher than on the old PSAT, by approximately 30 points per section at each grade level. Was that a sign that my students were particularly well suited to the new version of the test, or did it reflect a new, quiet effort at re-centering scores?


Except it’s not really re-centering. The re-centering back in the 1990s was a correction for a drifting distribution of scores. The 200-800 scale was designed for a normal distribution with 500 as the midpoint, and by the mid-1990s the national mean on the verbal section had dropped closer to 400 than 500. Re-centering was an attempt to restore the midpoint to 500.
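Mechanically, re-centering amounts to a linear rescaling of the score distribution. Here is a minimal sketch in Python, using illustrative numbers rather than the College Board’s actual 1995 equating parameters:

```python
# Minimal sketch of a linear re-centering: map scores from a drifted
# distribution back to one centered at 500 on the 200-800 scale.
# The mean and standard deviation below are illustrative guesses,
# not the College Board's actual equating parameters.

def recenter(score, observed_mean=430, observed_sd=110,
             target_mean=500, target_sd=110):
    """Return the re-centered equivalent of a scaled score."""
    z = (score - observed_mean) / observed_sd   # standardize
    new_score = target_mean + z * target_sd     # shift the midpoint
    new_score = round(new_score / 10) * 10      # scores come in 10s
    return max(200, min(800, new_score))        # clamp to 200-800

# A verbal score sitting at the drifted mean maps back to 500:
print(recenter(430))  # -> 500
```

The real equating was more sophisticated than a single linear map, but the effect was the same: the observed mean got pulled back to the middle of the scale.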


What we have here is not re-centering, but off-centering. The national mean for the old SAT was around 1010 (Critical Reading plus Math), whereas the concordance released yesterday suggests that the national mean for the new SAT will be closer to 1090, with scores running 20-40 points higher at each point on the scale except near the extremes. Jon Boeckenstedt’s blog has an infographic that shows this most clearly.
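To see what a concordance actually does, here is a rough sketch of the lookup. The anchor points are hypothetical, chosen only to mimic the mid-scale inflation described above; they are not the College Board’s published table, which has far more rows:

```python
# Sketch of a concordance lookup: translate a new-SAT total into an
# old-SAT (Critical Reading + Math) equivalent by interpolating between
# anchor points. Anchor values are hypothetical illustrations only.

import bisect

NEW_TOTALS = [400, 700, 1000, 1090, 1300, 1600]   # new-SAT scale
OLD_TOTALS = [400, 670, 950, 1010, 1260, 1600]    # old CR+M equivalents

def new_to_old(new_total):
    """Interpolate an old-scale equivalent for a new-SAT total."""
    i = bisect.bisect_left(NEW_TOTALS, new_total)
    if i == 0:
        return OLD_TOTALS[0]
    if i == len(NEW_TOTALS):
        return OLD_TOTALS[-1]
    if new_total == NEW_TOTALS[i]:                # exact anchor hit
        return OLD_TOTALS[i]
    x0, x1 = NEW_TOTALS[i - 1], NEW_TOTALS[i]
    y0, y1 = OLD_TOTALS[i - 1], OLD_TOTALS[i]
    return round(y0 + (new_total - x0) * (y1 - y0) / (x1 - x0))

print(new_to_old(1090))  # -> 1010: the new mean concords to the old mean
```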


So what’s going on here? Twenty years ago re-centering was an explicit goal of the changes to the test, but I don’t recall seeing any discussion of re-centering as a goal in the move to the new SAT. Was it a hidden but intentional aim, or an unintended consequence? Is the new SAT “easier”? More in tune with, and therefore a better measure of, what students are learning in school? Or, as Adam Ingersoll suggested in a post on the NACAC Exchange, is the increase in scores a consequence of eliminating the penalty for guessing?
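Ingersoll’s guessing hypothesis is easy to quantify with a back-of-the-envelope sketch. On the old SAT, a wrong answer on a five-choice question cost a quarter point, so a blind guess was worth zero on average; the new test has four choices and no deduction, so every blind guess is worth a quarter of a raw point:

```python
# Expected raw-score value of a blind guess under each scoring rule.
# Old SAT: five choices, minus a quarter point for a wrong answer.
# New SAT: four choices, no deduction for a wrong answer.

def expected_guess_value(choices, penalty):
    p_right = 1 / choices
    return p_right * 1 - (1 - p_right) * penalty

print(expected_guess_value(choices=5, penalty=0.25))  # old SAT: 0.0
print(expected_guess_value(choices=4, penalty=0.0))   # new SAT: 0.25
```

Whether that raw-score bump should survive the scaling process is, of course, exactly the question the concordance numbers raise.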


Also on the Exchange, Jennifer Karan of the College Board responded to questions about the scores by pointing out that the new SAT is a fundamentally different test from the old SAT, testing a different set of skills and content. She also noted that it is scored on a different scale, albeit one that uses the same starting and ending points.


She’s right that the test is different and not designed for score comparison, but the scale is the same: 200-800 per section. What is different is the distribution of scores along that scale.


If re-centering was a goal, is there anything wrong with that? The answer is no, as long as all of us (the College Board, colleges, secondary schools, the public) understand the change and don’t claim record score increases in our profiles next year without proper context. But if that is the goal, let’s be upfront about it, and maybe it’s time to move away from the 200-800 scale altogether to ease confusion.


Now that we have a way to compare scores on the new and old versions of the SAT, there are two other issues that need addressing. One is that the new score report contains subscores and cross-test scores, which might ultimately say more about a student’s strengths and weaknesses than the traditional section scores alone. The problem is that I haven’t seen any information to help understand what a 27 on the 10-40 scale in Analysis in Science or an 8 on the 1-15 scale in Problem Solving and Data Analysis really means. Can we get some meaning and context?


The other issue is figuring out a better way to distribute scores, at least to schools.  We still haven’t received our scores from January.  I understand that it’s a new system, and I understand that I must be the one at fault because the College Board never acknowledges responsibility or takes blame, but the score reporting fiasco makes the rollout of the Coalition look good by comparison.  That’s not a compliment.