Yesterday I started out thinking that NHTSA's goal of creating more variation in its safety ratings was a good one. And I liked the idea of a simple overall rating.
In theory, I still think this is the case. But the more I look at the ratings themselves, the more disappointed I become.
At the core of my disappointment is that I apparently think about ratings very differently than NHTSA does. The promotional materials produced by the agency make it clear that the primary objective of the new system was to push car companies to make “safer” vehicles. My focus would be to develop ratings that are meaningful and useful to consumers. (This would indirectly pressure car companies to make safer vehicles, but it would be an outcome, not an objective.)
It is not just a question of semantics.
Before a ratings system can be designed, the designer needs to understand how consumers define the issue. This is true of any type of rating, but let’s use safety as an example.
The term “safety” is going to mean different things to different vehicle buyers. A useful ratings system would understand this and reflect these differences. It would then—and this is the important part—incorporate individual, sufficiently granular ratings that provide a way to evaluate meaningful variations in performance against the different definitions.
This is important because consumers naturally assume that a rating reflects their definition of what makes a vehicle safe.
For example, safety can be broadly grouped into two categories. The first is passive safety: the likelihood of avoiding injury in the event of an accident. The second is active safety: the likelihood of avoiding (or minimizing) an accident in the first place. Passive safety involves features such as air bags and crumple zones. Active safety involves features like ABS, traction control, four-wheel drive, and potentially even a heads-up display.
One example of how NHTSA hasn’t thought this through is that its new overall rating blends the crash test scores (passive safety) AND a sampling of active safety technology. (The exact formula behind this overall score seems to be a state secret.)
Any time a rating requires blending, weighting, and the like, it by definition reflects the values of the ratings designer. Those values will rarely match the values of the ratings user. This is why it is critical for ratings to cover only single factors.
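To make the point concrete, here is a small hypothetical sketch (the vehicle names, scores, and weights are all invented for illustration; NHTSA's actual formula is unpublished). Two vehicles are scored on passive and active safety, and two consumers with different priorities blend those scores with their own weights:

```python
# Hypothetical per-factor scores on a 1-5 scale (invented for illustration).
vehicles = {
    "Vehicle A": {"passive": 5, "active": 3},
    "Vehicle B": {"passive": 3, "active": 5},
}

def overall(scores, weights):
    """Blend per-factor scores into a single rating using the given weights."""
    return sum(scores[factor] * w for factor, w in weights.items())

# A highway commuter might prioritize crashworthiness (passive safety)...
commuter = {"passive": 0.8, "active": 0.2}
# ...while a driver on icy roads might prioritize accident avoidance (active safety).
winter_driver = {"passive": 0.2, "active": 0.8}

for name, scores in vehicles.items():
    print(name, overall(scores, commuter), overall(scores, winter_driver))
# Vehicle A rates higher under the commuter's weights (4.6 vs. 3.4),
# while Vehicle B rates higher under the winter driver's weights (4.6 vs. 3.4).
```

The ranking flips depending on whose weights are used. A single blended number necessarily bakes in one set of weights, which is exactly why it cannot serve both consumers at once.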
So it is clear that NHTSA has defined safety for us—but in ways that may or may not be relevant (aka useful) to individual consumers. Of course, consumers may not realize this and could make flawed decisions using the NHTSA data.
Exacerbating this risk is that the new ratings—by design—increase the variation in scores, but these variations do not correlate to variations in real-world risk. (As I questioned yesterday, is the Camry really 40% less safe than the Sonata?)
Bottom line: NHTSA has created a system that will probably succeed in its goal of pushing car companies to build more vehicles it rates as safer. But it is more a contest than a useful set of ratings. I would not recommend that consumers take them too seriously.
One final postscript: Almost every criticism I leveled here at NHTSA regarding safety ratings could also be made about the EPA and its proposed single-letter rating. With the additional note that it makes no sense to have one agency use stars for ratings and another use letters. Isn't this just needless additional confusion?