Numbers Chasing Sullies Science
It seems that the phenomenon of scholars whoring themselves for numbers is gaining more widespread attention.
My colleague, Dr. Matthew Nisbet, just forwarded an interesting article from the Wall Street Journal regarding the active manipulation of journal impact factors. Most people outside of academe are familiar with the "publish or perish" label. Although I find this characterization to be terribly misleading, the need to demonstrate an active research program does come with drawbacks.
As professors at research universities, we are expected to publish the results of our research in academic journals. To be published, one's work is "blind" reviewed by other "peer" researchers in the area. Since research specializations are often quite small, the degree to which these reviews are actually blind varies. That is, I know who does what in the study of emotion, attention, and media.
Some journals are better than others. However, calling one journal "better" than another is somewhat akin to calling chocolate ice cream better than vanilla ice cream. So we turn to quantitative indicators. One of these indicators is the impact factor. This statistic is kept by Thomson ISI, and it provides an index of how often a journal's articles are cited by other journal articles.
The thought is that if your work is important, then other people will cite it. The more that work in a particular journal is cited, the higher its impact factor climbs. Some journals are not tracked by ISI. Thus, they have no impact factors. They are the lepers of science.
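For readers who like to see the arithmetic, the standard two-year impact factor is just a ratio: citations received this year to a journal's articles from the previous two years, divided by the number of citable items it published in those two years. Here is a minimal sketch; the counts are invented purely for illustration.

# Minimal sketch of the standard two-year impact factor calculation.
# The counts below are invented for illustration only.

def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Citations received this year to articles published in the previous two
    years, divided by the number of citable items published in those years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 150 citations in 2006 to its 2004-2005 articles,
# of which there were 100 citable items.
print(impact_factor(150, 100))  # 1.5

Notice that nothing in the ratio measures quality directly; it only counts who is citing whom, which is precisely what makes it worth gaming.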
Keep in mind that I am a quantitative scientist. All of my research involves numbers. I love numbers. In this respect, I am like the Count from Sesame Street. "I love to count. Ah Ah Ah." However, when you introduce quantitative indicators to science, you immediately pervert the process.
The Wall Street Journal article (Begley, 2006) provides evidence that some journals actively try to manipulate their impact factors. The WSJ reports, for instance, that once an article has essentially been accepted, the American Journal of Respiratory and Critical Care Medicine asks its authors to cite more papers from that journal before publication.
This is as blatant a manipulation of the process as I can imagine. But impact factor perversion is just the tip of the iceberg. As I have written before, numbers chasing threatens science at every level.
Take the NFL sack record as an analogy. New York Giants defensive end Michael Strahan set the all-time single-season sack record in January 2002 when he took down Green Bay Packers quarterback Brett Favre in the fourth quarter. But the play looked suspicious. It looked like a "gimme." Everybody in that stadium knew Strahan needed one sack for the record, and the takedown looked as if it could have been completed by a punter. Favre denied handing Strahan the sack, but few who have seen the footage believe him. The record is tainted.
Allow me to give you a few more examples of how we are counting ourselves stupid in American science.
The very notion of impact factors has, I argue, a chilling effect. It is difficult to get truly "new" work published. For instance, I have two papers that won top paper awards from an academic society and are still awaiting journal homes. They are difficult to publish. They are new. They are not trendy. They do not fit high-impact journals. So they are published in a journal also known as my desk drawer.
So you run a risk as a young scholar. Ground-breaking work runs the risk of slow or no acceptance. Better to toe the line. Do menial work and cite the big names in the field. Imagine if all of science behaved this way.
A budding linguist named Noam Chomsky published Syntactic Structures in 1957. Chomsky has gone on to be one of the most cited scholars of his generation, and no one can deny the influence of the 1957 volume on modern linguistics. But Syntactic Structures was not published by the biggest or best publisher. Instead it was published by Mouton in The Hague, the Netherlands. Today, publishing in such an obscure outlet might cost someone tenure.
There is no denying the clear separation between the top and bottom journals in a field. But the finer gradations are far more subjective. Take, for instance, the emphasis on flagship journals by my soon-to-be-former employer, The Ohio State University.
The school's pattern of administration (available online) states, "Faculty of the School of Communication strive to become known for high quality research programs. Thus, tenure track faculty are expected to engage in a rigorous program of research that contributes to the advancement of the field of communication and to the prestige of the School." Later on the same page, it names three "flagship" journals in communication: Communication Research, Human Communication Research, and the Journal of Communication.
Herein lies the rub: OSU's School of Communication has approximately 27 tenured and tenure-track faculty members. Although expectations about numbers are very "hand wavy," to be considered successful one usually needs to publish two to three peer-reviewed journal articles per year. So the communication faculty at just one university may easily be flooding these three journals with more than 50 submissions per year -- perhaps more than 100.
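To put rough figures on that claim, here is a back-of-the-envelope sketch. The faculty count comes from the paragraph above and the publication rate is the midpoint of the stated expectation; the share of work aimed at the flagship journals and the resubmission factor are assumptions I have plugged in purely for illustration.

# Back-of-the-envelope estimate of flagship-journal submissions from one school.
# The faculty count and the 2-3 papers-per-year expectation come from the text;
# the flagship share and resubmission factor are illustrative assumptions.

faculty = 27                # tenured and tenure-track faculty members
papers_per_year = 2.5       # midpoint of the 2-3 articles-per-year expectation
flagship_share = 0.5        # assumed fraction of manuscripts aimed at the three flagship journals
resubmission_factor = 1.5   # assumed average submissions per manuscript, given rejections

manuscripts = faculty * papers_per_year * flagship_share
submissions = manuscripts * resubmission_factor
print(round(manuscripts), "manuscripts ->", round(submissions), "flagship submissions per year")
# roughly 34 manuscripts -> 51 submissions; nudge the assumptions upward and the total clears 100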
Although this may not directly inflate the impact factor, it does inflate another statistic: the rejection rate. The top journals all have high rejection rates. Like the best universities, the best journals are "hard to get into." And we are single-handedly reinforcing these journals' status as high-rent districts.
I do not point this out to fault OSU. To be clear, the system is driving this problem, not this individual school. However, just a few like-minded programs with large research faculties can unintentionally drive the field. Furthermore, we give three editors the power to decide what "matters" in communication.
If Chomsky had been held to this model (imagine him forced to publish in journals edited by behaviorists such as B. F. Skinner), stimulus-response models of cognition might still win the day.
This impact factor phenomenon also colors the process at the individual level. Just as with journals, it is popular to think that the more an individual is cited, the more important that individual's work is to the field. However, this assumes that no one is "working" the process.
It has been my observation -- and that of others, although I will not hold them accountable here -- that citation circles have developed within our field. That is, a group of 8-10 like-minded individuals can completely skew the process if they so desire.
It goes like this: These 8-10 individuals publish in a common area. So they cite each other ... a lot. And they co-author papers together, but not all at once. They submit their articles to the journals, and the editor is most often not an expert in that particular sub-field. So the editor looks at the citations and invites reviews from authors cited heavily within the paper.
But wait! That is within the circle. So there is no blind review. And even if the other circle members do not know the paper's authors with certainty, the paper is well within their scientific paradigm, and it cites them a lot. This means that if it gets published, it makes the reviewer look good. So Henry Ford is proud, and the assembly line is pumping.
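As a toy illustration of why such a circle is self-reinforcing, here is a small simulation. Every name and number in it is invented; the point is only that a modest group citing one another generously piles up citations quickly, and that an editor who draws reviewers from a submission's most-cited authors will land inside the circle.

# Toy simulation of a citation circle; all names and numbers are invented.
import random

random.seed(1)
circle = [f"author_{i}" for i in range(9)]   # 8-10 like-minded individuals
papers_per_author = 3
cites_to_circle_per_paper = 10               # generous citations to fellow members

# Tally the citations each member collects from the circle's own output.
counts = {a: 0 for a in circle}
reference_lists = []
for author in circle:
    others = [c for c in circle if c != author]
    for _ in range(papers_per_author):
        refs = [random.choice(others) for _ in range(cites_to_circle_per_paper)]
        reference_lists.append(refs)
        for r in refs:
            counts[r] += 1

print(counts)  # each member picks up roughly 30 citations from the circle alone

# An editor who invites reviewers from the most-cited names in a submission's
# reference list will, for these papers, choose circle members every time.
some_submission = reference_lists[0]
likely_reviewers = sorted(set(some_submission), key=some_submission.count, reverse=True)[:3]
print(likely_reviewers)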
The papers have all the trappings of science. They look like science. They "quack" like science, if you will. But they are nothing like programmatic science. They are simple regurgitations of a handful of meager ideas.
I'm not alleging any smoke-filled rooms or Roswell-esque conspiracies. Read the research on flattery: it is hard to be "mean" to people who are kissing your butt. Even if the reviewers are trying to be impartial, the social psychology literature suggests that they cannot be.
So, there go the numbers like a runaway train. If you confuse success with visibility, you will then seek to be visible. And if you narrowly quantify success and then do everything in your power to light up that scoreboard, then the numbers will follow. What happens to a baseball team when its players begin chasing individual stats?
Science will suffer. Sure, progress will be made. But it will be made in spite of most of the research being done, rather than on the backs of most of the research being done. It's sad, really. It's a sad day when a leading newspaper can publish an admission by a journal editor that they send out a boilerplate letter urging more citations, and it is not a national scandal.
But the fact that it was reported is a sign that the runaway train had better watch out ... there might just be light at the end of the tunnel.
Reference
Begley, S. (2006, June 5). Science journals artfully try to boost their rankings. The Wall Street Journal, p. B1.