My inner media relations/listmaking nerd was intrigued by a recent Chronicle of Higher Education article (subscription required) about the Faculty Media Impact Project. The brainchild of anthropology professor Rob Borofsky, this project aims to rank how well university researchers share their research with the public via the news media.
It’s an interesting twist on an old problem in media relations: How do we quantify the impact of media relations?
Unfortunately, the Faculty Media Impact Project doesn’t answer that question. But it does offer yet another perspective on the issue of measuring media relations.
In one sense, Borofsky’s effort is narrow in scope. It examines the media mentions of social sciences faculty from 94 universities. Disciplines outside the social sciences — anthropology, economics, political science, psychology and sociology — were not part of this research. (Borofsky deliberately restricted his research to social science researchers because, as he told the Chronicle, “people dealing with social sciences should be dealing with social concerns.” True, but the same could be said of any academic discipline, I think.)
But even with this narrow focus, the research involved sifting through a lot of data. “To devise the rankings,” the Chronicle reports, “researchers ran searches of the Google News archive to find out how often more than 12,700 faculty members had appeared in 6,000 news sources from 2006 to 2011. The citations for the professors in each department were tallied, averaged on a per-faculty member basis, and then ranked relative to the federal funds their programs had received.” (The rankings are available from this website. Rice University tops the chart, with Southern Methodist and MIT rounding out the top three.)
The assumption behind Borofsky’s methodology is that those faculty who are cited most frequently in the news are somehow “reaching out” to the public to share their expertise. But as the Chronicle report points out, that assumption — and this ranking system, like all others — has its flaws.
For one, some researchers may benefit from whatever happens to be current in the news. For example, many political science professors in the study had high media citation scores, perhaps because their state’s governor was running for president during much of the time period covered by the study, the Chronicle article suggests. So universities with strong poli-sci programs may skew the results.
The study also lumps all institutions within a single university system together and doesn’t take into account the type of media market where a university is located. So the University of California’s 10 campuses are counted as one, even though professors at Berkeley probably get more media mentions than, say, those at Riverside.
I think it would be interesting to expand the study beyond traditional news media sources and consider non-traditional sources, such as blogs and social media mentions.
Also, a lack of coverage in the news media doesn’t necessarily mean faculty are not engaged in social concerns. The media don’t always devote the most space to social issues, anyway.
Whatever happens with this project, it’s likely to suffer the same fate as every ranking ever created: loved by those who fare well in the rankings, and hated by those who don’t. As Borofsky tells the Chronicle, “Everyone complains about rankings but uses them.”