One of the best parts of being a journal editor is talking with authors, reviewers, and readers. Face-to-face discussions with early career scientists are especially interesting and enlightening.
In June we gave “Getting Published” presentations to two groups at the Genetics Society of America’s (GSA) International Caenorhabditis elegans Meeting at UCLA. The attendees were mainly graduate students, peppered with postdocs and a few principal investigators. A segment of our talk focused on the many ways of understanding the short- and long-term influence, impact, and value of their research. We also encouraged them to write as scientific communicators with a story to tell.
After each session, we sat down with several attendees who wanted to chat. One graduate student working in Asia explained that a requirement for earning her Ph.D. is to publish a first-author paper in a journal with a Journal Impact Factor (JIF) greater than five. We asked whether she would consider other journals that might be below that arbitrary bar. She shrugged. With funding so competitive, her advisor feels he has no choice but to think of the JIF. Another student, also working in Asia, said she could publish in journals with JIFs below five, but earning her Ph.D. would require at least two papers in journals whose JIFs add up to at least five.
Disconcerting, but hardly a surprise, particularly in Asia. Publishing in high-impact journals is one criterion the Center for World-Class Universities (CWCU) in China uses in its ranking of world universities.1 Some institutions even tie the JIF directly to monetary rewards — which can exceed $30,000 — for (typically first) authors.2,3,4
The simple fact is that statistics and impact factors can make or break scientists vying for grants, career promotions, and doctorate degrees, no matter where they are located.5
“What’s your impact factor?” is often the first question we’re asked at conferences. “Why are you asking?” is our sincere response. The conversations that follow, about the pressure young scientists experience, are both candid and disconcerting. Some scientists include the JIF for each publication listed in their CV. Many say they must target journals above a certain threshold. Others seem sheepish: “I have to make sure...” followed by words like “because career” or “I'm still a grad student.”
For those unfamiliar with a journal or trying to place a journal in the pecking order, the JIF is sometimes mistaken for a proxy for journal quality, journal prestige, article quality, and author prestige.
But as a measure of a journal’s quality, the JIF is limited. As a measure of a particular paper or author, it is meaningless. When it is used as a shortcut to determine whether an author will earn a Ph.D., be awarded a grant, or earn tenure, it's just plain ridiculous.
We realize that some authors submit papers to a journal regardless of its JIF, but most students – especially international ones – say they can’t ignore JIFs or their more euphemistic stand-ins, a journal’s “tier” or “prestige.” Unfortunately, this means that countless journals (including many well-regarded society-sponsored ones) are off the table.
Are stories like these just outliers, merely anecdotes? How important is the JIF to the genetics community?
A recent GSA survey of the community revealed that the Journal Impact Factor is the #2 reason authors choose a journal. “Fit” — the ability to reach the right audience — is #1. But when an author was up for promotion or tenure, the JIF became the #1 factor in the decision of where to submit. The Nature Publishing Group/Palgrave Macmillan Author Insights 2015 Survey6 reported that 90% of respondents rated the JIF “very important or quite important” when deciding where to submit a paper. Respondents from China rated journal reputation and impact factor higher than did respondents from the rest of the world.
In fact, the reputation of the journal was the most important consideration to 97% of respondents. The top-reported component of a journal’s reputation? Its impact factor!
Our angst is no secret. We lament the misuse of this journal metric as a proxy for article quality, importance, and influence. We worry because the JIF is frequently used as an indicator of the impact of individual scientists and their work. Scientists, editors, and publishers take to blogs,7 journals,8 social media, and interview forums9 to discuss the deficiencies, abuse, and dire implications of the impact factor. Many have explained in well-articulated, thoughtful detail the ways the metric has warped science, from driving the topics of research to picking journal club articles.10 We rail that things must change, that we should ignore the impact factor, that all metrics are reductive, that it's become survival of the fittest.11
We wonder why the JIF is reported to three decimal places, but then realize that besides making journals easier to rank, a number calculated to 1/1000 feels somehow more legitimate, more precise12 – more scientific – than a mere whole number. We are just plain sick of impact factors.13
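For readers who have never seen it spelled out, the JIF is a simple ratio. For a given year, the standard definition is:

```latex
\mathrm{JIF}_{2014} \;=\;
\frac{\text{citations in 2014 to items the journal published in 2012--2013}}
     {\text{citable items the journal published in 2012--2013}}
```

So a journal whose 2012–2013 articles drew 10,468 citations in 2014, across 2,000 citable items, would report a 2014 JIF of 10,468 / 2,000 = 5.234 (these numbers are hypothetical, chosen only to illustrate the three-decimal convention). Note that the result is an average over a skewed citation distribution: a handful of highly cited papers can carry a journal whose typical article is cited far less.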
Even Thomson Reuters, the company that calculates JIFs and publishes Journal Citation Reports®, has written cautionary notes explaining the JIF and its intended uses.14 Eugene Garfield, who in 1955 created the idea leading to the official impact factor a decade later, seems to realize his invention has gone rogue.15
OK, so we can’t blame the overuse, misuse, misinterpretation, and worship of the JIF on a lack of data, opinions, and analyses of the impact factor and its discontents. Then what’s keeping the JIF alive?
Yes, we have reasons to be optimistic. Many of us came together to sign the San Francisco Declaration on Research Assessment (DORA), which has significantly raised awareness of this issue. Alternative metrics (“altmetrics”) tools such as Mendeley, Altmetric, F1000, Impact Story, and Plum Analytics, along with new types of publishing platforms and others we have surely missed here, seek to provide new, immediate data on how research is accessed, discussed, and used.
Diverse groups are working toward widespread refinement, acceptance, and use of complementary assessment methods and metrics. NISO is developing new standards for using alternative metrics to assess research outcomes.* The Leiden Manifesto for Research Metrics sets out ten principles to guide research evaluation. In October, Bruce I. Hutchins and NIH colleagues posted on bioRxiv a white paper unveiling the Relative Citation Ratio (RCR), a promising new metric that examines article-level influence.16 A new format of the NIH biosketch17 directs the focus of reviewers toward researcher accomplishments. Editors propose practical reforms for curing Impact Factor Mania.18 Progress!
But the JIF continues to play an important — perhaps still the most important — role in authors’ choice of where to submit papers.
The 2014 JIFs were released in June. Our instincts tell us to ignore them. We want to ignore them. But many scientists worldwide don’t have the luxury of ignoring the JIF. Not yet, anyway.
This year, we could tell those graduate students we talked with at the C. elegans meeting that yes, they could now submit their best work to GENETICS: our JIF now exceeds that arbitrary threshold of five, so the journal is now in play for them. They were glad. We were glad. Then, we talked science.
Is GENETICS a “better” journal because its JIF crossed an arbitrary line in the sand? We don’t think so. But some people in a position to judge those students, and to judge others competing for grants and promotion — people who have great influence over the course of their careers — seem to think so.
GENETICS will stay true to its mission of offering a high-quality, selective, innovative platform for publishing our colleagues’ stories. We will continue to strive to be fair and prompt in our review process. We will keep helping authors increase the accessibility and intellectual impact of their papers over the short and long term. We see ourselves as author advocates, and as it turns out, so do many of our authors.
None of us can change the ecosystem alone, nor can it be changed quickly. None of us in science can, at this time, force hiring, promotion, and grant review committees, here or anywhere in the world, to jettison the JIF. But we can encourage and embolden change from within.
If nothing else we wrote resonated with you, please remember this: behind the JIF data and analyses, behind the promotion, tenure, and grant review committees, behind the numbers and the relentless pressure to publish in prestige journals — stand real scientists with real stories and struggles. They are trying in earnest to do good science and to progress in their careers. Why must they also cross an arbitrary number line in the sand?
↵*Disclosure: TAD is a member of NISO Altmetrics Working Group A.
- Copyright © 2015 by the Genetics Society of America
Available freely online.