Letters to the Editor  |   April 2013
Impact Indices Shed False Light
Author Affiliations
  • Richard Guadalupe McDonald, PhD
    Global Health Science Institute, Springville, Utah
The Journal of the American Osteopathic Association, April 2013, Vol. 113, 268-270. doi:10.7556/jaoa.2013.113.4.268
To the Editor: 
The article on bibliometric measures published in the November 2012 issue of The Journal of the American Osteopathic Association1 brings up a point beyond what has long been known in the “publish or perish” academic world (ie, more publications can mean more funding from the National Institutes of Health): impact indices2 (also known as impact factors) supposedly determine the quality of publications. A misconception exists that the higher a publication's impact index, the higher that publication's perceived quality. Here is why.
Impact indices are like advertising awards. Advertising agencies find it difficult—if not impossible—to measure the effectiveness of their clients' advertising campaigns.3 An easy method was developed to measure the perceived financial success of an advertising campaign: advertising awards. The awards, however, do not reflect a measured financial success for the client, only a perceived success for agency boasting.4 As one marketing director put it, “Ignorance is bliss in the big advertising agencies. Showing off is confused with selling. The golden prize is an award, not a sale.”5 Advertising awards leave a glaring gap in advertising agencies' true financial impact for their clients. Like the advertising community, the medical community found it difficult—if not impossible—to measure the impact of a medical publication. To measure a publication's perceived impact, an algorithm was created to count how many times an article was referenced—the impact index.
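The citation-counting algorithm mentioned above is, at its core, a simple ratio: citations received in a given year to a journal's articles from the previous 2 years, divided by the number of citable items the journal published in those 2 years. The following sketch illustrates that arithmetic only; the journal figures are hypothetical, and the actual Thomson Reuters computation involves editorial judgments about what counts as a "citable item."

```python
def impact_factor(citations_by_year, citable_items_by_year, year):
    """Two-year impact factor for `year`.

    citations_by_year[y]: citations received in `year` to articles
        the journal published in year y.
    citable_items_by_year[y]: citable articles the journal published in y.
    """
    cited = citations_by_year.get(year - 1, 0) + citations_by_year.get(year - 2, 0)
    items = citable_items_by_year.get(year - 1, 0) + citable_items_by_year.get(year - 2, 0)
    return cited / items if items else 0.0

# Hypothetical journal: 210 citations in 2012 to 90 citable items from 2010-2011
citations = {2010: 90, 2011: 120}
items = {2010: 50, 2011: 40}
print(round(impact_factor(citations, items, 2012), 2))  # 2.33
```

Note that nothing in this ratio distinguishes a citation made because an article was read and relevant from one made for any of the reasons discussed below.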
Medical literature should provide the intellectual discourse that advances medicine. Physicians who practice medicine full time most likely do not look at impact indices to alter their practice habits. If anything, I believe editorials and similar content6,7 can provide more of an avenue for medical debates than can published articles. 
Impact indices leave a glaring gap in measuring the true impact of a medical publication on the real world. Impact indices can lead to referencing articles for reasons beyond article relevancy—referred to here as the referencing game. 
Several arguments can be made for why the referencing game is harmful to medicine. These arguments represent the human side of research that is never discussed. First, the referencing game creates an atmosphere in which researchers will never reference their key competitors' work in a publication. Second, a well-established researcher will never reference a novel article from a young researcher or one he or she perceives to be of no significance. No one will publicly admit to these behaviors, but both can be validated by tracking research on BioMedLib, which uses topic-specific algorithms (eg, techniques, procedures) to determine who published what first and who followed. BioMedLib, however, has a flaw: if parallel publications exist (ie, the same research by different groups, where neither group referenced the other), its algorithm cannot determine who was first. With experience in the field, one should be able to discern the original research group from the copying group that did not properly reference it—similar to a physician's judgment about when to use clinical experience vs evidence-based medicine (ie, the physician using evidence-based medicine is “following operating manuals containing preset guidelines, like factory blueprints…all necessarily reflect the values and preferences of the experts who write the recommendations”8) for a patient.
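The flaw described above—a citation-ordering algorithm that cannot resolve parallel publications—can be shown with a toy sketch. BioMedLib's actual algorithm is not public; the records and function below are hypothetical, constructed only to illustrate why date ordering plus citation links cannot establish precedence when neither group cites the other.

```python
from datetime import date

# Hypothetical records: id -> (publication date, set of ids it references)
papers = {
    "A": (date(2010, 3, 1), set()),   # original group
    "B": (date(2010, 9, 1), {"A"}),   # follower; cites A, so precedence is clear
    "C": (date(2010, 4, 1), set()),   # parallel work; cites no one
}

def precedence(paper_ids):
    """Order papers on a topic by date, and flag 'parallel' pairs:
    pairs in which neither paper references the other, so citation
    links alone cannot establish who was first."""
    ordered = sorted(paper_ids, key=lambda p: papers[p][0])
    parallel = [
        (p, q)
        for i, p in enumerate(ordered)
        for q in ordered[i + 1:]
        if p not in papers[q][1] and q not in papers[p][1]
    ]
    return ordered, parallel

order, unresolved = precedence(["A", "B", "C"])
print(order)       # ['A', 'C', 'B'] -- dates alone suggest A was first
print(unresolved)  # [('A', 'C'), ('C', 'B')] -- but these pairs are unconfirmed
```

For pair (A, B) the citation link settles precedence; for (A, C) the algorithm has only dates, which is exactly where the letter argues that field experience must take over.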
Third, in my experience articles often are referenced because of already established connections (ie, personal, scientific, or political) or because a researcher wants to establish a connection. Unfortunately, impact indices and BioMedLib cannot detect these activities. 
Fourth, this referencing practice leads to bandwagon referencing, in which referencing occurs because “everyone” is referencing the publication. Bandwagon referencing is nothing new. In a conversation I had with a prominent researcher many years ago, the researcher informed me that when he inquired about the specifics of a widely referenced biomedical study written in Russian, he could not find one researcher who could tell him what was in the article, even though they had referenced it in their own articles. He eventually found a graduate student in Russian literature with a science background to translate it for him to ensure that what he would be referencing was accurate. This bad habit continues today; in highly referenced articles, problems are still overlooked by many (eg, conclusions that were not supported or that were directly contradicted by the data9).
Fifth, the referencing game contributes to a lack of innovation because it distracts from it. In my experience, more researchers are focused on an agenda than on the medicine or science they are purportedly investigating.
Sixth, it creates a false reference library and contributes to medical noise, that is, research that leads one in the wrong direction because one is following a group or a funding source. This false reference library is similar to findings on authorship issues, in which a substantial percentage of the authors of a published article do not meet the journal's authorship guidelines.10
And seventh, the referencing game is especially harmful to student-researchers and emerging physician-researchers, who not uncommonly reference articles they have never read or used in their research. This practice is similar to “name dropping” to get a better table at a restaurant, except here the purpose is to “strengthen” the current research with articles the researchers most likely never read.
These 7 arguments against the referencing game are similar to arguments that could be leveled at a website editor who creates fake incoming links for the sole purpose of acquiring a better page ranking in Google, or at a writer, a writer's friends, or a business buying back copies of a book to inflate its sales and thus place it on the New York Times or other bestseller lists.11,12
Van Noorden13 described 3 different tools used to rank the top 10 Nature articles, and each of those tools (one of which was the number of citations collected by Web of Science, which compiles impact factors) came up with a different set of top 10 articles in 2012. Thus, I believe impact indices should be used only for bragging rights, like showing off an advertising award, and not for determining future funding or the value of a publication. BioMedLib should be used to identify innovators vs imitators. Advertising awards may be useless to the clients of advertising agencies, but impact indices are a bandwagon harmful to society.
Suminski RR, Hendrix D, May LE, Wasserman JA, Guillory VJ. Bibliometric measures and National Institutes of Health funding at colleges of osteopathic medicine, 2006-2010. J Am Osteopath Assoc. 2012;112(11):716-724. Accessed December 28, 2012.
The Thomson Reuters impact factor. Thomson Reuters website. Accessed February 15, 2013.
Rust RT, Ambler T, Carpenter GS, Kumar V, Srivastava RK. Measuring marketing productivity: current knowledge and future directions. J Marketing. 2004;68(4):76-89. [CrossRef]
Advertising agency awards. The Garrigan Lyman Group website. Accessed December 1, 2012.
Ah! the gentle swish of creative masturbation: but why should you care? Drayton Bird Blog. June 20, 2012. Accessed January 2, 2013.
Coyne J. Questioning whether psychotherapy and support groups extend the lives of cancer patients. Science-Based Medicine. August 31, 2012. Accessed November 20, 2012.
Catherine DeAngelis and JAMA: what is going on here? Science Blogs. March 24, 2009. Accessed November 20, 2012.
Hartzband P, Groopman J. The new language of medicine. N Engl J Med. 2011;365(15):1372-1373. [CrossRef] [PubMed]
Saunders T. Post publication peer review: blogs vs letters to the editor. Science of Blogging. July 25, 2011. Accessed December 28, 2012.
Acín F. De todos ellos… ¿quiénes son los autores?, ¿cuál fue su contribución? [Of all of them… who are the authors? What was their contribution?]. Angiología. 2007;59(4):285-288. Accessed October 15, 2012.
Spitznagel E. How to write a bestseller business book. Bloomberg Businessweek. July 19, 2012. Accessed December 31, 2012.
How to get your book to #1 on Amazon. Gentle Rain Marketing LLC website. January 2, 2013. Accessed January 2, 2013.
Van Noorden R. What were the top papers of 2012 on social media? Nature News Blog. December 21, 2012. Accessed January 6, 2013.