I received many emails about the "management" rankings. I have to admit that rankings, while personally pleasing, are a complex social phenomenon. Americans love to rank things, from the scariest movies to the greatest rock songs of all time to business schools and wines. Since I know a little about business schools, let me describe the impact rankings have had.
There are many who argue that the ranking of business schools has proven to be a significant problem, and others who argue that it has taken something that was in the shadows and cast sunshine on it. The critics charge that students are no longer being educated and teachers are no longer teaching; instead, they buy and sell rankings. A school ranked in the top five will have a higher yield rate than one ranked in the top ten. As a result, a great deal of time and energy is invested in making sure that the ratings' criteria are met.
What explains the prestige, power, and proliferation of business school rankings? Rarely are the rankings thought about in the abstract, as an idea, or in terms of how they shape business education.
Rankings are also attractive because they are simple and convey a sense of order. By virtue of quantification, rankings promise comparability and standardization, which are partially achieved by forcing those being ranked to become more judicious in standardizing their own internal processes. The rankings in outlets like U.S. News and World Report and BusinessWeek forced schools to standardize their measures and adopt particular definitions of key data. Consequently, most schools now share a common definition of starting salary, acceptance rate, and employment, thereby allowing for common metrics that transcend institutional differences.
By reducing what economists call “search costs,” rankings make accessing and evaluating important information easier. Like the imperative to maximize share price, rankings mechanize the decision-making process by, in part, marginalizing the elements that resist quantification while highlighting those that are particularly amenable to it.
Yet rankings provide the illusion of scientific rigor for a process that actually calls for careful judgment and nuanced interpretation. It is one thing to give a school like Wharton or Tuck a numerical score; it is quite another to capture the real qualitative differences among schools.
Consider, for purposes of illustration, another complex product: wine. Robert Parker has developed a popular 100-point scale for rating it. I don't know much about wine, but I buy it on occasion. When one walks into a wine store and looks at the ratings, it is not uncommon to ask oneself whether there is really a meaningful qualitative difference between a wine rated 90, on the one hand, and a wine rated 91 on the other. Perhaps not. My uncle, who owns a wine store, tells me that the ratings system helped expand the market for wine.
So this raises a broader question: what kind of knowledge does the ranking of management thinkers produce?
I have just come across your blog, and I like your comparison of rankings and wine scores. I work for an academic journal publisher in the UK, and we face an uphill battle on several fronts because of rankings: because we are not in the US where the 'main' business schools are, because of ISI, and because we publish 'international' journals. Rankings are a case of the tail wagging the dog; it is hard to get any sort of message across about quality outside of these rankings. There are moves afoot in Europe to put together its own ISI-type ranking, but this is another example of a metric defining something that simply may not be definable.
Anyway, it is good that someone at HBS is thinking about this sort of stuff, and I look forward to reading more of your thoughts now that you are on my blogroll.
PS Have just read the Houellebecq book myself, which is called 'Atomised' in the UK - loved it!
Posted by: ISI Mania | April 10, 2006 at 10:09 AM
Great post---and by the way, you were great on the Tom Ashbrook show! I can see how your concern about "winner-take-all" CEO markets could spill over to education rankings... I think Robert Frank compares these phenomena in The Winner-Take-All Society.
The nuance you are calling for here imposes some costs--namely, people can't easily peg a school to a rank. I wonder whether the "tiers" or "terracing" of rank you call for happens already among sophisticated observers. Of course, even if it does, the real audience for the rankings is the unsophisticated... so that's a worry.
I look at some of the perils of ranking by search engines here:
http://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=375637
and the citations to Stake in that piece should point in the direction of an Indiana L.J. symposium that raised very interesting questions about law school ranking.
Posted by: Frank | April 24, 2006 at 03:41 PM