We Have Met the Enemy, and It Is Us
Mark Johnston

MANY of us have long railed against the journal impact factor. We continue to rail with the publication of the San Francisco Declaration on Research Assessment1, signed by me and others, and endorsed by the Genetics Society of America (and other august groups).

We rail for good reason: a metric intended primarily to help librarians gauge the value of journals for their patrons has become a measure of individual scientists' productivity, something for which it is wholly unsuited. A journal's impact factor says nothing about any individual article published in that journal. This is most starkly illustrated by instances in which a single highly cited article is responsible for most of a journal's impact factor.2 How does the impact of that single article reflect on any other article published in the journal? Clearly, it does not.

Yet some scientists list in their CVs (and will tell you in conversation) the impact factor of each journal in which their articles were published. Surely they recognize the fallacy of that. Why do they do it?

Because they also recognize that hiring and promotion and grant evaluation committees put weight on the journal impact factor (claims to the contrary from people serving on those committees ring hollow). If you are an academic scientist, you undoubtedly have sat on such committees and heard your colleagues praise a candidate because of the several articles he or she has published in Cell (2011 impact factor = 32.403), Science (2011 impact factor = 31.201), or Nature (2011 impact factor = 36.280). We—the senior scientists who populate those committees that are so important to scientists' careers (and to science!)—are responsible! If we didn't place so much value on the high impact factor journals, our (mostly junior) colleagues would not pay so much attention to the impact factor. The enemy is us!

Why do the high impact factor journals command so much influence? It's simply because they are the most selective. If an article gets published in one of those journals, we think it must be important and significant. After all, it was one of very few chosen for publication. The articles were stringently vetted and came out on top of a very big heap. And everybody wins because the members of the hiring and promotion and grant evaluation committees don't necessarily have to invest the considerable time it would take to read (and understand) the candidate’s work. What’s the problem?

The problem is: who did the vetting? Yes, the articles underwent peer review, so it's likely that knowledgeable, well-regarded, practicing scientists who are experts in the field judged the work. But who decided that the article was worthy of being peer-reviewed in the first place (a benefit enjoyed by only a few of the manuscripts submitted to the high impact factor journals)? Who ensured that the peer reviewers were knowledgeable, well-regarded scientists with relevant expertise? And who synthesized the comments of the peer reviewers into a decision on the significance, impact, and value of the work that resulted in its selection for publication in a high impact factor journal? In many cases, it was someone with little experience as a practicing scientist (and often with no experience as an independent investigator).

Is that a problem? I believe it is. Science is best governed by scientists, as it largely was in the biomedical sciences before the rise of the high impact factor journals. I believe decisions as important as who gets hired, promoted, and funded should be influenced by editors who have trekked the same trail as the authors, who know from hard-won experience what it takes to tell a significant story. Of course, experience does not ensure wisdom, but recent, relevant experience breeds sound judgment. It seems obvious that the endorsement of peers should carry more weight than the assent of administrators, but the hegemony of the high impact factor journals results in the opposite. Why do we endow inexperienced scientists with so much influence over such important decisions?3

What’s needed to correct this situation is nothing less than a change in our culture. Those of us sitting on hiring and promotion and grant review committees must evaluate our colleagues' work for its content rather than its cloak. We must judge it ourselves and not cede that responsibility to others by being impressed simply because an article appeared in a highly selective journal. We—practicing scientists—must reclaim responsibility for setting the standards of our field.

And the junior scientists? Until the culture changes, they have little choice but to strive to get their work published in the high impact factor journals. Eventually those scientists will start to populate the hiring and promotion and grant review committees. I am hopeful they'll know what to do.

Only when we change the culture will the pernicious grip of the high impact factor journals (and the impact factor) be broken. But it won't happen unless each of us takes responsibility for setting the standards of our field—when we're serving on grant review panels, when we're sitting on hiring and promotion committees, when we're talking with our colleagues in the hallway, and when we're in our labs considering journals for our next submission. Only when we effect a change in our culture can we stop railing against the impact factor.

Footnotes

  • 1 http://am.ascb.org/dora/

  • 2 For example, the impact factor of Acta Crystallographica-Section A increased more than 20-fold in one year—to 49.926—because of the publication of one highly cited review article (Annals of Library and Information Studies, Vol. 58, March 2011, pp. 87).

  • 3 I realize that the professional editors of Science and Cell have access to editorial boards composed of well-regarded practicing scientists. And Science—a journal sponsored by a venerable society of scientists—has always had an accomplished scientist as its Editor-in-Chief. And it is unrealistic to think that magazines like Science and Nature can be run only with practicing scientists. And I assert that professional editors have much to offer in the presentation of science. I just don't think we should relinquish to them our responsibility to set the standards of our field.

  • Received May 9, 2013.
  • Accepted May 23, 2013.
