Malcolm Gladwell does not like college rankings. In fact, he abhors them.
The bestselling author and staff writer turns his attention to the subject of rankings in the latest edition of The New Yorker. The result of his exploration of this highly controversial topic: a scathing six-page critique of the value and intellectual honesty of all ratings of universities. Gladwell, known for bringing subtlety and finesse to his subjects, takes a sledgehammer to U.S. News in particular and to rankings in general. “Who comes out on top, in any ranking system, is really about who is doing the ranking,” he concludes.
It’s something we at Poets&Quants have been saying ever since we launched six months ago (see “Turning the Tables: Ranking the Rankings”). And as far as B-school rankings go, U.S. News is far more transparent than most, and it measures factors more directly related to program quality than the rankings of either The Financial Times or The Economist. U.S. News, for example, uses average GMAT scores and GPAs to assess the quality of incoming students, and post-MBA compensation and employment data to assess the opportunities an MBA program affords its graduates.
Writing in the Feb. 14 & 21 issue of The New Yorker under the headline “The Order of Things: What College Rankings Really Tell Us,” Gladwell takes a deep dive into the college rankings published by U.S. News & World Report. He doesn’t specifically address business school rankings by U.S. News or any other media outlet. But his conclusions are just as applicable to the B-school ranking game as they are to U.S. News’ general college rankings.
His major conclusions:
Determinations of ‘quality’ turn on relatively arbitrary judgments about how much different variables should be weighted.
Asking deans and MBA program directors to rate other schools is less a measure of a school’s reputation than it is a collection of prejudices partly based on the self-fulfilling prophecy of U.S. News’ own rankings.
It’s extremely difficult to measure the variable you want to rank because of differences in how a specific metric can be compiled by different people at different schools in different countries.
Rankings turn out to be full of ‘implicit ideological choices’ by the editorial chef who cooks up the ranking’s methodology.
There is no right answer to how much weight a ranking system should give to any one variable.
“The first difficulty with rankings is that it can be surprisingly hard to measure the variable you want to rank,” writes Gladwell, “even in cases where that variable seems perfectly objective.”
The writer then trots out a ranking of suicides per hundred thousand people, by country, to prove his point. At the top of the list, with what would seem the highest suicide rate of any country, is Belarus, with a recorded rate of 35.1 suicides per 100,000 people. At the bottom of this top-ten list is Sri Lanka, at 21.6 suicides per 100,000 people.
“This list looks straightforward,” believes Gladwell. “Yet no self-respecting epidemiologist would look at it and conclude that Belarus has the worst suicide rate in the world. Measuring suicide is just too tricky. It requires someone to make a surmise about the intentions of the deceased at the time of death. In some cases, that’s easy. In most cases, there’s ambiguity, and different coroners and different cultures vary widely in the way they choose to interpret that ambiguity.”
‘U.S. NEWS RANKINGS SUFFER FROM A SERIOUS CASE OF THE SUICIDE PROBLEM.’
What does this have to do with college rankings? “The U.S. News rankings suffer from a serious case of the suicide problem,” believes Gladwell. “There’s no direct way to measure the quality of an institution—how well a college manages to inform, inspire, and challenge its students. So the U.S. News algorithm relies instead on proxies for quality—and the proxies for educational quality turn out to be flimsy at best.”
Consider the most important variable in the U.S. News methodology, accounting for 22.5% of the final score for a college: reputation. In U.S. News’ business school ranking, it’s the single most important metric as well, given a weight of 25% of the final ranking for a B-school (the next most important variable in the B-school survey is its highly flawed survey of corporate recruiters, which accounts for 15% of the final ranking). This so-called “peer assessment score” comes from a survey of business school deans and directors of accredited master’s programs in business. They are asked to rate programs on a scale from “marginal” (1) to “outstanding” (5). A school’s score is the average of the ratings from all the respondents who rate it. About 44% of the deans and directors surveyed by U.S. News responded in the fall of 2009.
Writes Gladwell about the “Best Colleges” version of the survey: “Every year, the magazine sends a survey to the country’s university and college presidents, provosts, and admissions deans asking them to grade all the schools in their category on a scale of one to five. Those at national universities, for example, are asked to rank all 261 other national universities.” The magazine’s rankings editor, Robert Morse, told Gladwell that the typical respondent assigns grades to roughly half of the schools in his or her category.
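For concreteness, here is a minimal sketch in Python of how such a peer-assessment score might be computed. The schools and ratings below are invented, and the handling of skipped ballots (simply dropping them before averaging) is an assumption, not a documented detail of U.S. News’ method:

# Each respondent rates a school from 1 ("marginal") to 5 ("outstanding"),
# or skips it (None) if unfamiliar with it.
ratings = {
    "School A": [5, 4, None, 5, 4],
    "School B": [3, None, None, 4, 3],
}

def peer_assessment_score(votes):
    """Average of the respondents who actually rated the school."""
    scored = [v for v in votes if v is not None]
    return sum(scored) / len(scored) if scored else None

for school, votes in ratings.items():
    print(school, round(peer_assessment_score(votes), 2))
# School A 4.5
# School B 3.33

Note that School B’s score here rests on just three ballots; the fewer the respondents who feel able to rate a school, the noisier its “reputation” becomes.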
‘THE U.S. NEWS RATINGS ARE A SELF-FULFILLING PROPHECY.’
“But it’s far from clear how any one individual could have insight into that many institutions,” concludes Gladwell. “Sound judgments of educational quality have to be based on specific, hard-to-observe features. But reputational ratings are simply inferences from broad, readily observable features of an institution’s identity, such as its history, its prominence in the media, or the elegance of its architecture. They are prejudices.”
As an example, Gladwell cites an analysis of another U.S. News ranking, its rating of the Best Hospitals, which also relies heavily on reputation scores generated by professional peers. “Why, after all, should a gastroenterologist at the Ochsner Medical Center, in New Orleans, have any specific insight into the performance of the gastroenterology department at Mass General, in Boston, or even, for that matter, have anything more than an anecdotal impression of the gastroenterology department down the road at some hospital in Baton Rouge?…
“When U.S. News asks a university president to perform the impossible task of assessing the relative merits of dozens of institutions he knows nothing about, he relies on the only source of detailed information at his disposal that assesses the relative merits of dozens of institutions he knows nothing about: U.S. News. A school like Penn State, then, can do little to improve its position. To go higher than 47th, it needs a better reputation score, and to get a better reputation score it needs to be higher than 47th. The U.S. News ratings are a self-fulfilling prophecy.”
Gladwell may not have known that the most egregious example of this problem plagues U.S. News’ so-called specialty business school rankings, because they are based solely on the survey of business school deans and directors, who are asked to nominate up to 10 programs for excellence in each of a dozen or so categories, from accounting to management. The schools that get the most votes make the ranking.
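The mechanics of such a vote-based ranking are simple enough to sketch in a few lines of Python; the ballots below are invented, and tallying with a simple counter is an assumption about how the nominations are aggregated:

from collections import Counter

# Each ballot nominates up to 10 programs for excellence in one category.
ballots = [
    ["School A", "School B", "School C"],
    ["School A", "School C"],
    ["School B", "School A"],
]

votes = Counter(name for ballot in ballots for name in ballot)
print(votes.most_common())
# [('School A', 3), ('School B', 2), ('School C', 2)]

A pure vote count like this rewards name recognition above all else, which is exactly the reputational feedback loop Gladwell describes.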
U.S. News’ Morse, director of data research, counters this criticism. In response to a reporter, Morse once said: “U.S. News is not expecting people to have knowledge or be able to rate each school in its category. It’s based on the premise that since we have a big enough respondent base, enough people have some knowledge of enough schools that we get a statistically significant number of respondents for each school. There are subjective parts of education, parts that can’t be measured by just quantitative data. The peer survey tries to capture that part of it.”
One of the techniques Gladwell deploys in the article is to reorder certain schools by changing some of the metrics used to rank them. He revises a top-ten list of schools by doing what U.S. News doesn’t: including the cost of tuition as a variable. U.S. News would include cost if value for the dollar were something it judged important. Simply by taking cost into account, seven of the top schools changed position, and three schools, Northwestern University, Columbia University, and Cornell University, dropped out of the top ten, replaced by the University of Alabama, the University of Texas, and the University of Virginia. Concludes Gladwell: “The U.S. News rankings turn out to be full of these kinds of implicit ideological choices.”
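To see how such a choice plays out mechanically, here is a minimal sketch of a weighted-sum ranking in Python. The schools, scores, and weights are all invented; this shows the general technique, not U.S. News’ actual formula. Each metric is assumed to be normalized to a 0-100 scale where higher is better, so “affordability” is high when tuition is low:

schools = {
    "School X": {"reputation": 95, "selectivity": 90, "affordability": 40},
    "School Y": {"reputation": 85, "selectivity": 88, "affordability": 90},
    "School Z": {"reputation": 90, "selectivity": 80, "affordability": 70},
}

def rank(weights):
    """Order schools by the weighted sum of their metric scores."""
    def score(metrics):
        return sum(w * metrics[k] for k, w in weights.items())
    return sorted(schools, key=lambda s: score(schools[s]), reverse=True)

# A ranking that ignores cost entirely...
print(rank({"reputation": 0.6, "selectivity": 0.4, "affordability": 0.0}))
# ['School X', 'School Y', 'School Z']

# ...versus one that gives affordability a modest weight.
print(rank({"reputation": 0.4, "selectivity": 0.3, "affordability": 0.3}))
# ['School Y', 'School Z', 'School X']

The underlying data never changes; the only thing that moves the schools is the editor’s judgment about what matters, which is precisely the “implicit ideological choice” Gladwell is pointing to.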