Article importance: Processes to evaluate the success of published articles are subjective
===========================================================================================

Vivian C. McAlister

Despite the development of protocols to improve medical research, the determination of a research report's importance still relies on intuition as much as on adherence to guidelines.1 It is a problem that affects articles both before and after publication. No editor wants to inappropriately reject a report, such as the one submitted to *Science* by Bruce Glick and Timothy Chang, postgraduate students at Ohio State University, regarding a mix-up in the laboratory. Chang had mistakenly used chickens whose bursae of Fabricius Glick had removed to teach students how to raise antibodies, and in doing so the pair discovered the role of the bursa in antibody production. The editor at *Science* did not fault the methodology but rejected the submission on the grounds that the findings would hold insufficient interest for the journal's readers. Foolishly, the authors heeded this poor advice and published in *Poultry Science*.2

Two decades ago, several top-tier journals developed policies for accelerated publication of works judged to be of special importance. Although authors and peer reviewers could suggest papers for accelerated publication, editors remained the effective arbiters of special importance. It was a skill and a responsibility begging to be tested. William Ghali from the University of Calgary and colleagues from around Canada and Switzerland took up the challenge.3 In a scrupulously designed experiment, they asked 42 experts from around the world to score, in five domains, 12 articles that had been judged to have special importance and 12 regularly published controls selected to match on journal, disease or procedure of focus, theme area and year of publication. Despite a mean score that was slightly, but statistically significantly, higher in the special-article group, control articles were preferred over case articles in five of the 12 pairs, leading the authors to conclude that the selection process was "inconsistent." An editor from one of the journals responded that, in the interval between acceptance of the paper by Ghali and colleagues and its publication, case articles had been cited twice as often as control articles, justifying their selection.4

Citation rates seem like an objective measure of an article's importance, and impact factor (IF) is considered the best measure of a journal's success. The IF, a proprietary calculation belonging to Clarivate Analytics, is the number of citations in a calendar year to articles published by the journal in the two previous years, divided by the number of citable articles published in those two years. The IF for *CJS* has been slowly increasing, from a low of 0.5 in 2006 to 1.9 in 2016.

The two-decade interval since publication of the articles in Ghali's experiment allows us to see how the editors' selected articles fared compared with the experts' matched controls (Table 1). The mean citation rate for selected papers is now three times that for controls. In only two instances did control papers outperform the selections they were matched to. The variation in citation rate between papers is huge, greater than the difference between matched pairs. No correlation between expert score and number of citations is apparent (Spearman rho = 0.36, *p* = 0.08). Signals that overshadow editorial judgment appear to emerge from the data.
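As an aside for readers who wish to reproduce these two calculations, the following is a minimal Python sketch. All figures in it are invented for illustration: the counts of 228 citations and 120 citable articles were chosen only because they yield the reported 2016 IF of 1.9, and the expert scores and citation counts are placeholders rather than the actual data behind Table 1.

```python
from scipy.stats import spearmanr

def impact_factor(citations: int, citable_items: int) -> float:
    """Two-year impact factor: citations received in year Y to articles
    published in years Y-1 and Y-2, divided by the number of citable
    articles the journal published in years Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical counts chosen only to reproduce CJS's reported 2016 IF of 1.9.
print(impact_factor(228, 120))  # 1.9

# Invented expert scores and citation counts for six articles, showing how a
# Spearman rank correlation like the one reported above would be computed.
expert_scores = [6.1, 5.4, 7.2, 4.8, 6.9, 5.0]
citation_counts = [120, 45, 300, 80, 95, 60]
rho, p = spearmanr(expert_scores, citation_counts)
print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")
```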
The case and control papers were matched for quality, which is reflected in the citation rate. The advantage enjoyed by the selected papers may justify editorial judgment, or it may reflect the priority they were given in publication. What is remarkable is the range of citation rates, which seems to reflect the newsworthiness of the content: the lowest-cited pair concerned an issue that affects the Third World, and the highest-cited article was about heart problems related to a diet pill.3

[Table 1](http://canjsurg.ca/content/61/2/76/T1): Expert evaluation versus citation rate for articles chosen by journal editors for expedited publication, compared with matched articles published in the regular stream

In 1955, the editor at *Science* did not think that chicken research would cut it with the journal's audience. He did not realize that it marked the beginning of a new age of immunology, the benefits of which we still feel today. Glick and Chang went their separate ways, both confirming their discovery in supplementary experiments. Although their paper remains the most cited paper in *Poultry Science*, their priority in discovering the basis of humoural immunity was never acknowledged. Glick died in 2009, remembered fondly by countless undergraduate students and 29 graduate students as an exemplary mentor. Although *Science* did accept, in 1969, one of the more than 200 scientific papers that Glick wrote, its rejection in 1955 cost him a Nobel prize.

André Picard of the *Globe and Mail* provided a medical journalist's perspective on the debate regarding editorial selection, claiming that the issue was "not strictly an academic one."5 Perhaps journalists will applaud academic journals' embrace of social media. Like most other journals, *CJS* uses social media platforms to promote authors' work, and altmetrics are a new tool to determine how successful the journal is in this endeavour. In his thoughtful critique of the processes that determine the importance of research, in both the mainstream press and the world of academic publishing, Picard acknowledged that all "media look for controversy."5 It is clear that controversy and newsworthiness played a substantial role in the number of citations generated by the papers tested two decades ago. We must be very careful to avoid compounding this bias when the full force of social media is applied.

## Footnotes

* The views expressed in this editorial are those of the author and do not necessarily reflect the position of the Canadian Medical Association or its subsidiaries.
* **Competing interests:** None declared.

## References

1. McAlister V. Toward a "New School" of surgical research. Can J Surg 2017;60:220.
2. Glick B, Chang TS, Jaap RG. The bursa of Fabricius and antibody production. Poult Sci 1956;35:224–5.
3. Ghali WA, Cornuz J, McAlister FA, et al. Accelerated publication versus usual publication in 2 leading medical journals. CMAJ 2002;166:1137–43.
4. Stanbrook MB. Not fast enough? CMAJ 2002;167:738.
5. Picard A. Getting on track: how scientific journals and mainstream journalists could do a better job of communicating with the public. CMAJ 2002;166:1153–4.