We have been working on an advanced decision tool for the new site. (The tool launches early next year.) Key to this tool delivering useful results is the availability of data sets that cover vehicle ratings such as performance, safety, etc.
Superficially, these attributes seem like simple concepts, but a closer look shows this is not the case. Take safety. How do consumers intuitively define safety?
Probably in two ways: One could be the likelihood of surviving an accident. (Passive safety.) The other could be the likelihood of avoiding an accident. (Active safety.)
So the first complexity is that no single rating easily captures both concepts.
Focusing on passive safety, there are two widely used safety ratings: IIHS and NHTSA--both based on crash tests.
Our analysts looked at both, assuming that because the two use roughly the same methodology, the conclusions would be similar. In other words, that the results from one could predict the results of the other.
Turns out this is not so.
NHTSA and IIHS crash test scores only have about a 20% correlation--very low.
The big reason for this is that the NHTSA scores are all very close together. With little variation, there is not much to correlate against.
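To see why clustered scores undermine correlation, here is a minimal sketch using made-up star ratings (not real NHTSA or IIHS data): when one rating squeezes nearly every vehicle into a narrow band, it carries little information for the other rating to correlate against.

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient: covariance over the
    product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical clustered scores: almost every vehicle gets 4 or 5 stars.
clustered = [4, 4, 4, 4, 5, 5, 5, 5]
# Hypothetical spread-out scores for the same vehicles.
spread = [1, 2, 3, 4, 1, 2, 3, 4]

r = pearson(clustered, spread)
print(f"correlation: {r:.2f}")  # near zero: the clustered scale tells
                                # you almost nothing about the other
```

The division by the standard deviations is the key: as the variation in one rating shrinks toward zero, whatever small differences remain are mostly noise, so the coefficient collapses.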
NHTSA has been running crash tests for years. The car companies have had plenty of time to figure out which design features will result in higher scores. More importantly, vehicles in general have become much safer, producing uniformly high scores on the dated tests.
This lack of variation is probably one reason why NHTSA today announced new testing procedures and a new overall rating. (One disappointment is that they made no attempt to connect the new system with ratings for vehicles prior to 2011.)
New ratings with more variation should be a good thing, right?
Well, maybe. Let's think like a consumer for a minute. Consumers don't want to wade through mountains of data. This means an overall rating is a good idea.
They generally think about safety in one of two ways: The first is "How likely am I to experience an accident?" (Active safety.) The second is "How likely am I to survive an accident?" (Passive safety.)
NHTSA talks about active safety features in today's press release, but the ratings system is largely focused on crash tests. (Passive safety.)
To be fair, the overall ratings do have some minimum standards for active safety features--like stability control--but this actually highlights the problem. What should a consumer actually infer from the new ratings?
For example, the Toyota Camry gets an overall rating of 3 stars. The Hyundai Sonata gets an overall rating of 5 stars. Hyundai is probably shooting new commercials touting this achievement as you read this. But, again, what does it really mean?
Is the Camry 40% less safe than the Sonata? Specifically, can a consumer expect to be 40% less likely to survive an accident in a Camry?
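For what it's worth, the 40% figure comes from reading the stars as a linear safety scale--almost certainly not what NHTSA intends, but the arithmetic a consumer is invited to do:

```python
# Naive reading: treat star counts as a linear measure of safety.
camry_stars, sonata_stars = 3, 5
naive_gap = (sonata_stars - camry_stars) / sonata_stars
print(f"{naive_gap:.0%}")  # prints 40%
```

Nothing in the rating system says a star is worth a fixed slice of survival probability, which is exactly the ambiguity at issue.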
I looked at IIHS crash test data for 2011 as a comparison. The Sonata received their top score (Good). They haven’t rated the Camry, so I looked at the related Avalon--which also received a top score.
So this is the question: Does the new NHTSA rating system reflect meaningful real world differences in vehicle safety?
We are conducting some analysis to develop a definitive answer, but our initial assessment is no.
In this case it is likely that creating more variation for its own sake resulted in greater confusion, not clarity.
You can find NHTSA's press release here.