Product quality ratings: New research shows secret formulas yield questionable results

MIT Sloan Asst. Prof. Cynthia Rudin

Consumers rely on product quality rankings to make decisions about all sorts of things, from which graduate school to apply to, to which mutual funds to invest in, to which digital camera to buy. But in light of the fact that many independent rating companies score products using a formula kept strictly confidential, do these rankings provide an accurate measure of quality?

This question is at the heart of my latest research.* The vast majority of rating companies are unwilling to provide a clear and transparent explanation of how they grade products. Most use a formula to score products, but the exact formula is kept as a trade secret. Not only is the formula kept under lock and key, but the exact values of the factors that go into it are also not made public. This is, of course, the right of the rating company, but it’s problematic for consumers and companies alike. If the rating company won’t release the formula, the rating system may not truly be a good measure of quality.

The recent global financial meltdown perfectly illustrates the pitfalls of poorly constructed rating systems. Most financial analysts will tell you that overly rosy ratings from Moody’s and Standard & Poor’s helped fuel the crisis, causing billions of dollars in losses to investors. Of the crisis, economist and Nobel Laureate Joseph Stiglitz said: “I view the rating agencies as one of the key culprits… The banks could not have done what they did without the complicity of the rating agencies.”

But even when the stakes are not as great as the fate of the global economy, a non-transparent and potentially arbitrary rating scheme can cause difficulties. Take, for instance, the rating system in Hollywood. As discussed in the documentary This Film Is Not Yet Rated, there were vast disparities between how the Motion Picture Association of America (MPAA) claimed to the public that it rated movies and the way those ratings were actually constructed. The inconsistencies caused problems for moviemakers, whose profits may depend on getting an “R” rating rather than an “NC-17” rating. They also caused problems for moviegoers, who use the ratings to help decide whether to buy tickets.

There must be a better way. After all, if companies could recreate the formulas behind quality rating models, they would better understand the criteria by which their products are judged, which could help them manufacture better products. Consumers would also benefit: they, too, would have a clearer understanding of how quality is being defined, and could judge whether that definition aligns with their own.

With my colleagues Allison Chang, a graduate student at MIT’s Operations Research Center, and Michael Cavaretta, Robert Thomas, and Gloria Chou, executives at Ford Motor Company, I set out to reverse-engineer a secret formula for product quality rankings. We combined a decade’s worth of data from a nationally known rating company with information about the way these ranking models are commonly built, and designed a mathematical algorithm that reconstructed the formula.
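
Our actual algorithm is more involved, but the core idea can be sketched in a few lines. The snippet below is a toy illustration, not our code: all factor values and scores are made up, and it assumes the simplest possible case, where the hidden rating is a linear weighted sum of known factors and the published numeric scores are available. Under those assumptions, ordinary least squares recovers the hidden weights.

```python
import numpy as np

# Made-up factor values for five products across three factors
# the rating company says it considers.
X = np.array([
    [8.0, 6.5, 7.0],
    [5.5, 9.0, 6.0],
    [7.0, 7.5, 8.5],
    [6.0, 5.0, 9.0],
    [9.0, 8.0, 5.5],
])

# Made-up published overall scores for the same five products.
y = np.array([7.40, 6.35, 7.55, 6.70, 7.75])

# If the hidden formula is a linear weighted sum, least squares
# recovers the weights: minimize ||X w - y||^2.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated weights:", np.round(w, 3))
print("reconstructed scores:", np.round(X @ w, 2))
```

In practice the published information is often a rank ordering or a coarse score rather than exact numbers, which is why a more sophisticated optimization is needed; the sketch only shows why having enough rated products makes the weights recoverable at all.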

There are many things that could make reverse-engineering difficult or impossible. First, if the rating company doesn’t tell you what factors go into the model, it’s much harder, because then you have to guess. (The data we worked with did state which factors were considered, but not how those factors were weighted; that’s what we had to figure out.) Second, if many factors go into the model but only a few products have been rated, the model can’t be pinned down tightly enough to be sure of your answer; the sketch following this paragraph illustrates the problem numerically. Third, if the factors that go into the model are based on survey results, which are essentially people’s opinions, then you have to conduct your own survey or guess the survey results, which is tricky. Finally, if the rating company keeps changing its ratings, then the ratings are effectively arbitrary and reverse-engineering is impossible.
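
To make the second point concrete, here is a small numerical illustration (again with made-up data, not from our study): with more factors than rated products, two very different weight vectors can reproduce the published scores equally well, so the data alone cannot tell you which formula the rating company actually uses.

```python
import numpy as np

# Underdetermination: three rated products but five candidate factors.
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(3, 5))    # made-up factor values
w_true = np.array([0.4, 0.1, 0.2, 0.2, 0.1])
y = X @ w_true                              # "published" scores

# Least squares finds one weight vector that fits...
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# ...but any null-space direction of X can be added without changing
# the fit, so the weights are not pinned down.
_, _, Vt = np.linalg.svd(X)
w_alt = w_hat + 5.0 * Vt[-1]                # Vt[-1] lies in the null space

print(np.allclose(X @ w_hat, y))  # True
print(np.allclose(X @ w_alt, y))  # True: very different formula, same fit
```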

But if companies are working with generally solid data about quality rankings, it is possible to replicate our work. (We’ve made our algorithm publicly available.) The hard part is getting enough good data about products that have been rated in the past. If companies can get that data, and they care about a rating system similar to the one we worked with, our tools could help them improve their ratings.

Our work also opens some exciting avenues for further research. For instance, it would be great if the algorithm could not only reverse-engineer the model but also find errors in the data at the same time.

The bigger goal, of course, is to encourage rating companies to make their formulas more transparent, so that high rankings truly represent high quality; that alone would help provide better products to customers.

Cynthia Rudin is an assistant professor of statistics at MIT Sloan School of Management.

*“How to Reverse-Engineer Quality Rankings,” by Allison Chang, Cynthia Rudin, Michael Cavaretta, Robert Thomas, and Gloria Chou

Read more in Bloomberg Businessweek
