WHEN you crash your bike and bang up your body, do you go to the experienced orthopedic surgeon or the one just out of residency? When you crash your car and the other driver sues for negligence, do you prefer the advice of a senior attorney who has won a raft of cases or the recent law school graduate? When your hard drive crashes, do you take it to the computer specialist who has run the shop for 20 years or the assistant he hired a month ago? In matters large and small—whether a life-threatening injury, a threatening plaintiff, or a threat to your iTunes library—we typically value experience and demonstrated skill in those we turn to for help. This trust in experience and established ability usually serves us well.
But we scientists often turn this prescription for success upside down. If you are a graduate student who has written up the story that will get you your degree, or a postdoc seeking the publication that may land you your first job, or a young investigator seeking to publish the paper that will win you that big grant, whom do you trust to decide whether the manuscript merits being sent out for review? Whom do you depend on to arbitrate the reviewers’ opinions and decide whether your manuscript warrants revision or acceptance for publication? Whom do you ask whether your response to the reviewers’ critiques makes the grade?
We often entrust those tasks to journal editors who have little experience and relatively scant achievement as scientists. We rely on these editors to recruit well-qualified, fair, responsible reviewers. We give them the responsibility for synthesizing reviewer opinions—which can sometimes be capricious or biased or overly demanding or uninformed—into a decision on whether the story meets the standards of the journal. We allow these editors to contribute to setting the standards of the field with each decision they make.
There is another option. The editors of many respected journals are accomplished scientists who are leaders in their fields and peers of the authors whose work they are appraising. In most cases they are chosen by their peers, often under the aegis of a scientific society that sponsors the journal. Well-regarded practicing scientists provide oversight of the journal’s editorial process and ensure that the editors are qualified and able to authoritatively synthesize and fairly adjudicate the reviewers’ opinions.
Rather than merely tallying reviewers’ “votes,” peer-editors consider the reviewers’ comments and recommendations and are qualified to decide, for example, whether a suggested experiment should be a requirement for resubmission or would place an unreasonable burden on the authors. And peer-editors seldom have to go back to the reviewers for a decision on whether the revisions have plugged the holes in the story; peer-editors can decide that themselves. For the past 6 years I have had the privilege of observing how the peer-editors of GENETICS carry out their responsibility to authors. I have watched them synthesize and adjudicate reviewers’ recommendations fairly and thoughtfully, providing authors with clear decisions and helpful feedback while working with them toward the goal of telling significant, well-crafted stories that will have maximum, long-lasting impact. (For a description of the principles and practices of peer-editing, see GENETICS 192: 761–762; PMID: 23135323.)
There is a glaring paradox here: editors with the least experience and little demonstrated ability as scientists preside over most of the top-tier journals popular with authors (and with many of the committees that make important decisions about authors’ careers). In contrast, most of the journals edited by experienced, accomplished scientists are second (maybe even third) tier, as measured by the widely discredited yet surprisingly influential Journal Impact Factor (JIF). Why?
It's because we send our best stories—the ones likely to have the highest impact on the field—to the top-tier journals. We send our best stories to those journals because they have high JIFs, and because we believe that those journals offer our stories the most visibility, and because we know they are the most selective (which they can be, since we send them our best stories). But perhaps most importantly, we want our stories to be published in those journals because we recognize that many members of faculty hiring-and-promotion committees and grant review panels instinctively regard papers published in them as important.
The extent to which the JIF is used (explicitly or implicitly) to judge journals’ appeal and scientists’ productivity underscores the need for a metric to evaluate a journal’s influence. Why not use a metric that reflects the scientific standing of the editors and thus comes closer to indicating what the journal’s imprimatur is really worth? We could use the average h-index[1] of a journal’s editors as one measure of their experience and scientific stature, and therefore of their authority to judge and validate authors’ findings. I’ll call it the Journal Authority Factor (JAF). Let's compare this metric with the JIF.
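As a minimal sketch of the arithmetic (the h-index values below are hypothetical, and a real implementation would pull each editor's h-index from a citation database):

```python
def journal_authority_factor(editor_h_indices):
    """Proposed JAF: the mean h-index of a journal's editorial board."""
    if not editor_h_indices:
        raise ValueError("a journal needs at least one editor")
    return sum(editor_h_indices) / len(editor_h_indices)

# Hypothetical board of five peer-editors with h-indices 62, 48, 55, 41, and 59:
print(journal_authority_factor([62, 48, 55, 41, 59]))  # 53.0
```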
If authors chose venues for publication of their best work based on journals’ JAFs, the validation conferred by surviving peer review would rest on the judgment of editors who are proven scientists, which should make it more meaningful. And peer-edited journals would move into the top tier.
Why do we send so many of our best stories to journals whose editors are not accomplished, experienced, practicing scientists? Why do we give professional editors of journals that are not directly responsible to our community the authority to set the standards of our fields by deciding what gets published in top-tier journals? Most importantly, why do so many people serving on faculty hiring-and-promotion committees and grant review panels give these editors so much influence over who gets hired, promoted, and funded? Wouldn't we, and science, be better served if we entrusted our best stories to journals with peer-editors whose authority is well founded, who have earned the respect of their peers, who are qualified to set the standards of the field? Let's use the JAF to help us evaluate journals’ importance and the significance of our colleagues’ stories. We owe it to ourselves.
Footnotes
Available freely online.
[1] Hirsch, J. E., 2005. An index to quantify an individual's scientific research output. Proc. Natl. Acad. Sci. USA 102(46): 16569–16572.
An h-index of n means that n of a person's papers have each been cited at least n times (and that n + 1 papers have not yet each been cited n + 1 times). For example, my h-index is 59 because I am listed as an author on 59 papers that have each been cited at least 59 times. When the 60th most cited paper on my list of publications, which has been cited 59 times, accrues one more citation, my h-index will be 60.
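A minimal sketch of that calculation, in Python, with a made-up citation record standing in for a real publication list:

```python
def h_index(citations):
    """Largest n such that n papers have each been cited at least n times."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# 59 highly cited papers plus a 60th paper currently at 59 citations:
record = [100] * 59 + [59]
print(h_index(record))   # 59
record[-1] += 1          # the 60th paper accrues one more citation
print(h_index(record))   # 60
```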
Copyright © 2015 by the Genetics Society of America