The System Is Fundamentally Broken
Science proceeds through the peer-review process. One's work -- and one's success in the field -- depends on convincing blind reviewers of the merit of one's work. In principle, the system is sound. However, in practice, the system is fundamentally flawed.
Another set of reviews came back yesterday, this time for the conference of the American Academy of Advertising. My paper with colleague Johnny Sparks was rejected. It is not the rejection that bothers me, however. I am bothered by the evidence that the process is fundamentally flawed. You see, the reviews were not that bad. In fact, the first reviewer acknowledged interesting aspects of our data. We were nitpicked on the literature review. That is, the reviewers did not like the other papers we chose to cite. This is both fair and trivial at the same time. Admittedly, Johnny and I wrote the paper in a hurry, so the literature review was not comprehensive. But the complaint is trivial in that neither reviewer challenged our theorizing, our data, or our interpretations. This is the science. And the science was sound. To me this is a fundamental distinction, and it appears that few others care.
To me this reflects a failure on the part of the reviewers. This manuscript may be worthless, but it seems to me that the crucial aspects of a paper -- the ideas -- are rarely closely evaluated. Instead it is the simpler, superficial aspects of the paper that are scrutinized. The difference is key. If your science is weak, it cannot be saved by editing and review. However, if your science is strong, then suggestions by reviewers and editors can make the paper better. And clearly this paper fell into the latter category. Yet the reviewers failed to note this key distinction. This paper is just the latest in a long line of such instances.
The trouble is that if you crack open the average communication, journalism, or advertising journal, that journal likely will be filled with weak but well-polished science. The emperor, my friends, is naked. And no one seems to care.
The second issue worth considering is another fatal flaw in the review process. Reviewers provide feedback to the authors, and they provide confidential feedback to the editor/conference programmer. This provides two layers of blind review. The reviewers can say one thing to the authors and another to the editor. This forces the authors into uncertainty over why their paper was rejected. How is this second layer beneficial to science? The reviewers' names are withheld, too. Be honest. Say what you mean.
Allow me to apply this flaw to the current case. Our paper was rejected; however, the only reviewer to provide substantial feedback admitted the paper was "close" and that our discussion of results was "interesting." Now given those two things, how is it possible that this paper is not worthy of presentation at a conference? The only answer is that we do not know, because the paper was not rejected based upon these comments. Instead the paper was rejected based upon numerical ratings that were not provided to us. Blind comments to the program chair also might have played a role. Again, we cannot know. But if a paper is "close," then the science is sound, and we are arguing only about polish.
Science moves forward when scientists debate ideas. My ego is not fragile, and I can handle being told that someone does not like my ideas. This is the point of a conference. We are -- to quote my former mentor Tom Grimes -- in the business of ideas. The sad reality is that a narrowly applied system strangles ideas. Our journals and conferences overflow with these well-polished, mediocre ideas. We do not seek out and nurture good ideas. This is wrong, and it is choking the profession. It needs to change. If it does not, communication will remain a pseudo-science at the fringes, without the respect of our peers within the academy.