Wednesday, September 03, 2008

Self Publishing and the Marketplace of Ideas

Well, I'm getting called out for not blogging. Shame on me, I guess.

I'm still thinking about on-demand publishing.

I mentioned this last night to a well-published colleague, and he said, "The problem is that nobody reads it."

That's a match point, really. And it's eerily similar to my first thoughts about Weblogs. If everyone's speaking, who is listening?

Yet blogs are working. They are extremely influential. So much so that Procter & Gamble wanted to bring Mommy Bloggers to Cincinnati in order to influence these influencers.

So will a book that you write and publish yourself ever be read? For less than $700 (if I understand correctly), you can publish a book, register the copyright, and have it listed on online booksellers, such as Barnes & Noble.com.

Then it's up to the power of the Internet -- and if you're lucky, Oprah -- to get the book sold.

Unlikely, you say?

Well, on August 25, 2008, I wrote about "On Demand Publishing," and today there are 5 ads on my blog for various publishing services. The Internet works.

If you have something people want to hear, they will find your content.

And rather than vanity publishing, this, to me, is the marketplace of ideas at its best. You are not bound by an agent's or a bookseller's notions of profitability. You are bound only by your ideas and your ability to come up with $500 to $700 in upfront capital.

If your idea sells, you will make back the initial investment and then some. If not, you paid perhaps $700 for the privilege of getting it off your chest.

But the point is that the idea is out there. It is, quite literally, part of the marketplace of ideas. Some of the best ideas of all time were not popular at the time they were conceived.

Unpopular -- and even revolutionary -- ideas are just that: unpopular. If they're controversial, they are unlikely to sell many copies. Robert M. Pirsig's Zen and the Art of Motorcycle Maintenance is one of the great books of all time, sometimes referred to as "the most widely read philosophy book, ever," yet it was rejected by 121 publishers.

Pirsig persisted. How many others did not? Would the book have caught on if advances in digital presses had made on-demand publishing possible earlier? I do not know.

But I think humanity must be better served when the question begins with what people need to hear rather than with what idea can be sold for a profit.


Monday, August 25, 2008

On Demand Publishing: Vanity or the Future

Well, August 2008 will go down as one of my least prolific blog months ever. Too many things competing for too little time.

However, I just had a conversation with a colleague that has me once again thinking about alternative publishing.

Sadly, these syllabi will not wait. But I write this as an external Post-it note to remind me to revisit the issue soon.

In the meantime, check out Xlibris Book Publishing.


Tuesday, May 06, 2008

Perish the Published: Making Texts Affordable

From the New York Times:
EDITORIAL
That Book Costs How Much?
Published: April 25, 2008
Colleges and universities will need to embrace new methods of textbook development and distribution if they want to rein in runaway costs.


Monday, March 24, 2008

Great Advice on Publishing Academic Books

I love when I read clever things, and Rachel Toor's article on the relationship between dissertations and academic publishing was especially insightful. The piece was published in the Chronicle of Higher Education:
Last summer, I was asked to lunch by an acquaintance from another university, an assistant professor whose tenure clock was running down. She wanted some advice about publishing.

She explained that she had a year to get her dissertation turned into a book. Or else. Being an assistant professor had taken more time and energy than she had expected and now here she sat, with a year to get a book written, accepted, and into production at a good press.


Friday, February 15, 2008

Harvard Steps toward Open Publishing

Feb. 13 (Bloomberg) -- Harvard University professors may publish more research online, free to readers, after the school's arts and sciences faculty adopted a new policy that may be a blow to scholarly journal publishers.

The policy was approved in a voice vote yesterday, according to Robert Mitchell, director of communications for the 730-member arts and sciences faculty. The meeting was held at the university's Cambridge, Massachusetts, campus.
Read the entire story here.

Thanks to the AEJMC Web site for alerting me to this.

I have been arguing for such a model for a while now (for example, read here). More on this topic, soon.


Tuesday, January 08, 2008

Ex-Harvard Boss Eyes Publishing Changes, Too

Thanks to Wes Wise for pointing this out. From the New York Times (read entire story here).

Ex-Harvard President Meets a Former Student, and Intellectual Sparks Fly

Published: January 7, 2008

In June 2006, Peter Hopkins, a civic-minded and idealistic 2004 Harvard graduate, trekked up to his alma mater from New York for a meeting with Lawrence H. Summers, the economist and former Treasury secretary. Mr. Hopkins, who finagled the appointment through his friendship with Mr. Summers’s assistant, had a business idea: a Web site that could do for intellectuals what YouTube, the popular video-sharing site, did for bulldogs on skateboards.

The pitch — “a YouTube for ideas” — appealed to Mr. Summers. “Larry, to his credit, is open to new ideas,” Mr. Hopkins recalled recently. “He grilled me for two hours.” In the age of user-generated content, Mr. Summers did have one worry: “Let’s say someone puts up a porn video next to my macroeconomic speech?”

It took awhile, but a year after that meeting, Mr. Summers decided to invest (“a few tens of thousands of dollars,” he said, adding “not something I’m hoping to retire on”) in the site, called Big Think, which officially makes its debut today after being tested for several months.


Sunday, December 23, 2007

Some Thoughts on Mr. Gates

Updated 6:59 p.m.: Copy cleaned up.

Time marches onward, and ideas percolate in the brain.

I try to never pass up a chance to hear someone interesting, because one never knows when a seemingly tangential comment will become a seed crystal in the mind fostering many new and wonderful thoughts. [Thanks to Robert M. Pirsig for this analogy].

This happened a few weeks ago during a faculty development workshop on book publishing hosted by Don Jugenheimer, Ph.D., chairman of the advertising department at Tech. During that workshop, we started talking about self-publishing books, specifically print-on-demand.

This got me thinking about gate keeping, the mass communications tenet that suggests that a piece of information must pass through several gates before it is ultimately published or broadcast.

I used to be a gate keeper when I worked in journalism. As sports editor, I made the decisions about which stories made it into the paper and which did not. As an education and health care reporter for the Las Cruces Sun-News, I decided what got covered on my beat.

To be sure, my city editor had a hand in this -- especially when it came to big stories. But in the day-to-day operation of the beat, I decided what was a story and what was not.

In one of the landmark academic studies on gatekeeping published in Journalism Quarterly in 1949, David Manning White outlined "The 'Gate Keeper': A case study in the selection of news."
White writes, "It is a well known fact in individual psychology that people tend to perceive as true only those happenings which fit into their own beliefs concerning what is likely to happen. It begins to appear (if Mr. Gates is a fair representative of his class) that in his position as 'gate keeper' the newspaper sees to it (even though he may never be consciously aware of it) that the community shall hear as a fact only those events which the newsman, as the representative of his culture, believes to be true" (p. 390).
The problem is that I don't like -- and don't trust -- gates. In this space, I have complained about the peer-review system, which is the most substantial gate in academic thought (read here, here, here, here, and here). And, you see, I am just totally over it.

In part, the problem stems from being productive. We crank out a lot of research, so we get a lot of reviews back. And with few exceptions, they tend to contain the same unhelpful comments. You can see the gate, and it is a largely irrelevant gate, yet you reshape your work to fit through it nonetheless.

And I'm over it.

But I'm not even tenured yet. And the problem is not that I don't have a lot of things to say -- I do -- but I am tired of reshaping my thoughts in ways that are not helpful.

In my career, I have worked most often with a particular journal editor. This editor is very helpful, and s/he represents the way that the gate was intended to function. But journal articles have to be spread around, so we don't get the good fortune of right-minded editing very often. Choosing reviewers is, in my opinion, the most important thing that an editor does. And it is a rare skill.

With new media -- for lack of a better term -- things are changing. But slowly. I could, for example, make a PDF of our research and publish the results here exclusively. Google is probably a better search engine than the academic searches that I use anyway, so that might even be an improvement.

The problem is credibility. The name atop a publication lends some credibility to the information contained therein. And that is important. If everyone starts publishing, then it becomes a caveat emptor world of information to the highest degree. I keep my job because of my publishing record. The chief academic officer at Tech need not know communications research to evaluate my tenure case, as he (in this case) can look at objective rankings of the journals in which I publish.

For all the trials and tribulations of peer review, it does have its benefits. Our 2004 publication in the Journal of Consumer Psychology is a prime example. Then-editor Robert Wyer helped a lot in making our ideas worth reading. Although it was painful at the time, I am very thankful for his help in advancing our ideas.

But that represents the minority of cases. I cannot point you to a piece that represents the opposite because unhelpful reviews do not lead to publication. Often they lead you to revise an idea so greatly that it no longer resembles itself, and you are forced to scrap the revisions and revert to the original because the manuscript has become alien to its former self. This was the case with our recent publication in the Journal of Advertising [PDF not ready yet; in December 2007 issue]. All told, it took the data more than 3 years to see the light of day.

So we are left with a system that marginally works some of the time. And it works slowly, such that ideas often take at least 18 months to make it into print. And that is wrong, and it must change.

Books -- the particular form of mass communications that got me thinking of this topic in the first place -- are no better.

My recently oft-quoted Zen and the Art of Motorcycle Maintenance has sold millions of copies and is touted as the best selling book on philosophy of all time. A huge success, right? Consider the following quotation from Pirsig:
Back then, after 121 others had turned this book down, one lone editor offered a standard $3,000 advance. He said the book forced him to decide what he was in publishing for, and added that although this was almost certainly the last payment, I shouldn't be discouraged.
Here we see the case where 121 gate keepers failed, and the 122nd grossly underestimated the situation. How many other great ideas in history remained locked in the desk drawer while other drivel saw the bookshelf simply because it was likely to make a profit?

We must do better.

As I see it right now, the following dual-route system of publishing is a step in the right direction. First, we need a rapid dispatch system of publishing. We need to get ideas out there. Therefore, I think that scholars should publish on Web sites. And they need to do it quickly.

The simplest way to do this would be to make PDFs of accepted conference papers. We submitted our American Academy of Advertising papers on Oct. 1. All three were accepted. I have known that for more than a week. By now, I could have incorporated suggested revisions and published the papers. So if you were working on a similar idea, you could learn from our mistakes ... or hopefully our good ideas.

This leaves some gate keeper in place. There is some check on credibility, and you still have to present your ideas in a public forum.

Now, what we currently do is try to revise that conference paper into a journal article. Often my lab combines conference papers into multi-experiment journal articles. This slows down the dissemination of information greatly. Think glacial pace.

Instead, I propose allowing the conference papers to stand for themselves. Leave the PDFs on your Web site, and allow the search engines to index them. All of this information would then be free to libraries. Taxpayers pay for this research, and now it would be accessible to them. If academic conferences agreed on any standard form of publishing their programs (think table of contents), the search engines could double-check that the papers were actually presented.
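As a sketch of how that double-check might work, consider a hypothetical machine-readable program file. The JSON layout and the was_presented helper below are my own inventions for illustration, not any existing standard:

    import json

    # Hypothetical program file: the standard "table of contents" a
    # conference might publish. This format is invented for illustration.
    program_json = """
    {
      "conference": "American Academy of Advertising, 2007",
      "papers": [
        {"title": "In search of Lovemarks: The semantic structure of brands",
         "authors": ["Bradley", "Maxian", "Laubacher", "Baker"]}
      ]
    }
    """

    def was_presented(title, program):
        """Return True if a paper with this title appears in the program."""
        wanted = title.strip().lower()
        return any(p["title"].strip().lower() == wanted
                   for p in program["papers"])

    program = json.loads(program_json)
    print(was_presented("In search of Lovemarks: The semantic structure of brands",
                        program))                      # True
    print(was_presented("A paper never presented", program))  # False

A search engine could run exactly this sort of check before flagging a self-posted PDF as conference-verified.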

For the second step, I suggest moving toward print-on-demand technology. Write a book. Synthesize. I think this is what scholarship is about. If you've been programmatic about your work, then you will have something to say in a book. If you have done nothing but an unconnected series of studies, then you have nothing to say.

And a synthesized book would provide a great introduction to a subject. Such books could serve as textbooks and introductions, and the format would also allow a researcher to go back, correct things, and update information. I remember, early in my career while studying with Annie Lang at Indiana, writing an entire paper predicated on one of her earlier papers.

When I gave it to her to review, she wrote in the margins that they had never replicated that individual finding. Oops. My bad. But the traditional publishing model does not allow for this kind of correction. A journal article is fixed in time even as theories march on.

I suggest print-on-demand books because I don't want simply to relocate the gate keeper. Moreover, traditional publishing is about profit, and scholarship should not be about profit. In the case of book writing, we allow for a marketplace of ideas in the truest sense of the term.

This system creates a problem of separating the wheat from the chaff, and I admit it is a real one. In terms of sheer volume, we would have a genuine problem for the young scholar. There might be too many voices. How do you find out what is good?

I see one immediate way to help rectify this problem. Instead of publishing conference papers, we could create a system of peer reviewers within academic societies. In this case, an administrator (rather than an editor) could recruit reviewers of "raw" papers. These reviews would provide feedback to the author(s) but would not serve as a gate. If there is a fundamental disagreement between author and reviewer, the reviewer would have the chance to publish a counterpoint.

This would be akin to a dissent in a judicial opinion. Or the journal Behavioral and Brain Sciences, which invites open commentary. If we largely eschew the journal process, scholars will have time to do such reviews (instead of journal reviews), which are more meaningful since you would have the chance to comment back. Instead of trying to persuade an author in a double-blind way to make changes, you could add an open commentary that says, "The authors used solid methodology, and there is much to like in this study. However, I believe that these results are more parsimoniously explained by Theory X."

And again the marketplace of ideas will be at work. Time may well show that one of the peer commentaries will become the more influential piece of scholarship.

This would solve the problem of information credibility for the first step, but we still have the problem of books. A primary problem here would be copy editing, and this expense would have to come out of the author's (or authors') pocket. However, I spend a couple of hundred dollars each year on journal subscriptions, and I could easily transfer this to editing.

Thus, although the books are printed on demand, I believe that the front matter of the book should have a certificate of copy editing. Technical writing companies could provide this service. In any case, the author could opt out, but you, the reader, would know that it had not been edited if there was no certificate.

For a second step, I would invite peer commentary. You could invite experts in the field, and perhaps there could be some sort of forum to post notice of publication. Would-be reviewers would have the chance for open commentary. Some sort of arbitration committee would be needed in the case that an open commentary were completely hostile. Again, academic societies could step in here.

Give these ideas a thought, and please post a comment. I would love to know what you think. I know that we need to change. And the ideas that I have expressed here seem -- in some sense -- to be a step in the right direction.


Sunday, February 11, 2007

If You're in Vermont, Stop By, See Research

Texas Tech University
College of Mass Communications
Accepted Presentations
American Academy of Advertising
Annual Meeting
Burlington, Vermont
April 12-15, 2007

Bradley, S. D. (2007, April). The roles and misdeeds of peer review in advertising. Special topics session to be conducted at the meeting of the American Academy of Advertising, Burlington, VT.

Bradley, S. D., Maxian, W., Laubacher, T. C., & Baker, M. (2007, April). In search of Lovemarks: The semantic structure of brands. Paper to be presented at the meeting of the American Academy of Advertising, Burlington, VT.

Callison, C., & Mohammed-Basin, S. (2007, April). Hey ya-shake it like a Polaroid picture: Product mention in popular music genres. Paper to be presented at the meeting of the American Academy of Advertising, Burlington, VT.

Daugherty, T., Gangadharbatla, H., Kim, Y. J., & Logan, K. (2007, April). Assessing the value of product placement from the consumer’s perspective. Paper to be presented at the meeting of the American Academy of Advertising, Burlington, VT.

Gangadharbatla, H. (2007, April). Active versus passive gamers: A comparison of recall, attitudes and purchase intentions of brands placed in video games. Paper to be presented at the meeting of the American Academy of Advertising, Burlington, VT.

Gangadharbatla, H., & Smith, J. (2007, April). eWOM: The effect of individual level factors on viral consumers' email pass along behavior. Paper to be presented at the meeting of the American Academy of Advertising, Burlington, VT.

To give credit where credit is due, I borrowed this idea from colleague Robert F. Potter, Ph.D., in the department of telecommunications at Indiana University.


Sunday, January 28, 2007

Science Publishing: Ideas or Dollars?

Myriad problems plague the academic publishing model. Quantifying success with numerical indicators encourages scholars to publish for publishing's sake.

Likewise, publishing in academic journals is prized far more than books or book chapters. In no small irony for faculty researchers, publishers make money on journals. Researchers can only make money from books and book chapters, which are less valued.

Reading the Chronicle of Higher Education, I ran across this article.

Publishers' Group Reportedly Hires P.R. Firm to Counter Push for Free Access to Research Results
By SUSAN BROWN
The Association of American Publishers has hired a public-relations firm with a hard-hitting reputation to counter the open-access publishing movement, which campaigns for scientific results to be made freely available to the public, the journal Nature reported on Wednesday.

The firm, Dezenhall Resources, designs aggressive public-relations campaigns to counter activist groups, according to the Center for Media and Democracy, a nonprofit organization that monitors the public-relations business.

That's right. Hire an attack dog to tackle those radicals suggesting that science -- of all things -- should be about ideas rather than profits.

A few minutes later, I saw that my former colleague Matt Nisbet also wrote about high-priced journal subscriptions today. Nisbet references an excellent article in the Washington Post titled "Publishing Group Hires 'Pit Bull of PR.'"


Sunday, August 20, 2006

More Thoughts on Peer Review

Updated 9:22 p.m. August 20, 2006

It seems that academic peer review is my favorite rant on this Weblog.


  • On June 10, 2006, I said that Numbers Chasing Sullies Science in reference to many insecure scholars in my discipline who prefer to publish several heaps of garbage rather than a single original idea.
  • On May 7, 2006, I said that the Academic Publishing Model Is Broken in reference to how libraries were being pillaged by journals.
  • On December 3, 2005, I said that the System Is Fundamentally Broken after I had a good paper rejected because too many good papers were submitted.
  • And on November 24, 2005, I said that Some Days You Just Want to Know because I was spending more time anticipating reviewer complaints than concentrating on good science.

My favorite pet peeve made it back to the forefront Saturday when a review came back negative. I can handle negative reviews; they're part of the process. But this one was extremely wrong-minded.

In any given field, there are only a handful of journals that matter. And when a poorly trained editor rotates into the top spot, it is a painful few years.

Poor scientific training shows itself in an editor. You can mask poor training as a researcher. But as an editor, you are exposed. If you miss the point of being a scientist, you cannot be an editor. You will make bad decisions. You will let in finely polished bad ideas and fail to recognize good ideas when they fail to conform to some standard.

My career is far from perfect, but the best decision I ever made was to head to Bloomington, Indiana. Thanks to Annie Lang, I feel that I was well trained as a scientist. I care passionately about what it means to do science. I care about the process. And I respect the process.

And I hate ... HATE ... when the trappings of science get confused with science. Are the data good? Is the idea meaningful? This is the key. Yet too many poorly trained scientists -- and editor X -- seem to not understand. They appear to have gotten their Ph.D.s at the "walks like a duck" university.

Well, science that walks like a duck can still be -- and often is -- bad science. Polish is confused for quality, and good ideas can be lost if, like square pegs, they do not fit the round holes reviewers expect. It takes good training to know the difference ... or to care enough to look for the difference.

When I received my first journal article to review, I asked Annie for advice. As always, she came through in the clutch. She said to separate the data from the idea. Are they good? Is it good science no matter how poorly written? Can the science be saved? And ever since, I have reviewed every article with this metric. Science deserves no less.

I wish that my coauthor and I had received the same treatment. Instead I spent today using the handful of good ideas in the review to improve the paper (there's some merit to the system), which will go out to a better journal tomorrow. And rest assured, this piece will find a home. The idea is good, even if the polish was a bit smudged.

In the meantime, we wait for another right-minded editor to rotate onto another journal.


Saturday, June 10, 2006

Numbers Chasing Sullies Science

It seems that the phenomenon of scholars whoring themselves for numbers is gaining more widespread attention.

My colleague, Dr. Matthew Nisbet, just forwarded an interesting article from the Wall Street Journal regarding the active manipulation of journal impact factors. Most people outside of academe are familiar with the "publish or perish" label. Although I find this characterization to be terribly misleading, the need to demonstrate an active research program does come with drawbacks.

As professors at research universities, we are expected to publish the results of our research in academic journals. In order to have one's work published, it is "blind" reviewed by other "peer" researchers in the area. Since research specializations are often quite small, the degree to which these reviews are actually blind varies. That is, I know who does what in studying emotion, attention, and media.

Some journals are better than others. However, calling one journal "better" than another is somewhat akin to calling chocolate ice cream better than vanilla ice cream. So there are quantitative indicators. One of these indicators is the impact factor. This statistic is kept by ISI Thomson, and it provides an index of how often a journal's articles are cited by other journal articles.

The thought is that if your work is important, then other people will cite it. The more that work in a particular journal is cited, the higher its impact factor climbs. Some journals are not tracked by ISI. Thus, they have no impact factors. They are the lepers of science.
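The arithmetic behind the statistic is simple enough to sketch in a few lines. This is the standard two-year calculation; the figures below are invented for illustration:

    def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
        """Two-year impact factor: citations received this year to articles
        published in the previous two years, divided by the number of
        citable items published in those same two years."""
        return citations_to_prior_two_years / citable_items_prior_two_years

    # Illustrative numbers only, not from any real journal: 210 citations
    # in 2006 to articles published in 2004-2005, which totaled 140 items.
    print(round(impact_factor(210, 140), 2))   # 1.5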

Keep in mind that I am a quantitative scientist. All of my research involves numbers. I love numbers. In this respect, I am like the Count from Sesame Street. "I love to count. Ah Ah Ah." However, when you introduce quantitative indicators to science, you immediately pervert the process.

The Wall Street Journal article (Begley, 2006) provides evidence that some journals are actively trying to manipulate their impact factors. That is, the WSJ reports that after an article is basically accepted, the American Journal of Respiratory and Critical Care Medicine asks every author to cite more papers from that journal before publication.

This is as blatant a manipulation of the process as I can imagine. But impact factor perversion is just the tip of the iceberg. As I have written before, numbers chasing threatens science at every level.

Take the NFL sack record as an analogy. New York Giants defensive end Michael Strahan set the all-time single season sack record in January 2002 when he took down Green Bay Packers quarterback Brett Favre in the fourth quarter. But the play looked suspicious. It looked like a "gimme." Everybody in that stadium knew Strahan needed one sack for the record, and the takedown looked as if it could have been completed by a punter. Favre denied handing Strahan the sack, but few people who see the footage believe it. The record is tainted.

Allow me to give you a few more examples of how we are counting ourselves stupid in American science.

The very notion of impact factors has, I argue, a chilling effect. It is difficult to get truly "new" work published. For instance, I have two papers that won top paper awards from an academic society and are still awaiting journal homes. They are difficult to publish. They are new. They are not trendy. They do not fit high-impact journals. So they are published in a journal also known as my desk drawer.

So you run a risk as a young scholar. Ground-breaking work runs the risk of slow or no acceptance. Better to toe the line. Do menial work and cite the big names in the field. Imagine if all of science behaved this way.

Budding linguist Noam Chomsky published Syntactic Structures in 1957. Chomsky has gone on to be one of the most cited scholars of his generation, and no one can deny the influence of the 1957 volume on modern linguistics. But Syntactic Structures was not published by the biggest or best publisher. Instead it was published by Mouton in The Hague, Netherlands. Today, publishing in such an obscure outlet might cost someone tenure.

There is no denying the clear separation between the top and bottom journals in a field. But the finer gradations are far more subjective. Take, for instance, the emphasis on flagship journals by my outgoing employer, The Ohio State University.

The school's pattern of administration (available online) states, "Faculty of the School of Communication strive to become known for high quality research programs. Thus, tenure track faculty are expected to engage in a rigorous program of research that contributes to the advancement of the field of communication and to the prestige of the School." Later on the same page, it defines three "flagship" journals in communication: Communication Research, Human Communication Research, and the Journal of Communication.

Herein lies the rub: OSU's School of Communication has approximately 27 tenured and tenure-track faculty members. Although numbers expectations are very "hand wavy," to be considered successful one usually publishes 2-3 peer-reviewed journal articles per year. And the communication faculty at just one university may easily be flooding these three journals with more than 50 submissions per year -- perhaps more than 100.

Although this may not directly inflate the impact factor, it does inflate another statistic, the rejection rate. The top journals all have high rejection rates. Like the best universities, the best journals are "hard to get into." And we are single-handedly increasing these journals' positions as high rent districts.
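Here is a back-of-the-envelope sketch of the arithmetic in the last two paragraphs. The faculty count comes from above, but the acceptance figure and submission totals are assumptions of mine, chosen purely to show the mechanics:

    faculty = 27                # tenured/tenure-track members at one school
    papers_low = faculty * 2    # 54 papers per year at 2 articles each
    papers_high = faculty * 3   # 81 papers per year at 3 articles each
    print(f"One school's annual output: {papers_low} to {papers_high} papers")

    # If most papers go first to a flagship journal, and rejected ones are
    # resubmitted to another flagship, submissions can exceed output --
    # hence 50 to 100+ flagship submissions from a single school.

    # The mechanical effect on rejection rates, assuming the three journals
    # together accept roughly 60 articles per year (an invented figure):
    accepted = 60
    for submissions in (300, 400, 500):
        print(f"{submissions} submissions -> {1 - accepted / submissions:.0%} rejected")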

I do not point this out to fault OSU. To be clear, the system is driving this problem, not this individual school. However, just a few like-minded programs with large research faculties can unintentionally drive the field. Furthermore, we give three editors the power to decide what "matters" in communication.

If Chomsky had been held to this model (imagine him forced to publish in journals edited by behaviorists such as B. F. Skinner), stimulus-response models of cognition might still win the day.

This impact factor phenomenon also colors the process at the individual level. Just as with journals, it is popular to think that the more an individual is cited, the more important that individual's work is to the field. However, this assumes that no one is "working" the process.

It has been my observation -- and that of others, although I will not hold them accountable here -- that citation circles have developed within our field. That is, a group of 8-10 like-minded individuals have the capability to completely skew the process if they so desire.

It goes like this: These 8-10 individuals publish in a common area. So they cite each other ... a lot. And they co-author papers together, but not all at once. So they submit their articles to the journals, and the editor is most often not an expert in that particular sub-field. So the editor looks at the citations and invites reviews from authors cited heavily within the paper.

But wait! That is within the circle. So there is no blind review. And even if the other circle members do not know the paper's authors with certainty, the paper is well within their scientific paradigm, and it cites them a lot. This means that if it gets published, it makes the reviewer look good. So Henry Ford is proud, and the assembly line is pumping.
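To make the dynamic concrete, here is a toy sketch of how such a circle would surface in raw citation counts. The authors and numbers are entirely invented:

    # (citing author, cited author) -> citation count; "A", "B", "C" form
    # the hypothetical circle, while "X", "Y", "Z" are the rest of the field.
    citations = {
        ("A", "B"): 14, ("B", "A"): 11, ("A", "C"): 9, ("C", "A"): 12,
        ("B", "C"): 10, ("C", "B"): 13,               # inside the circle
        ("A", "X"): 1, ("B", "Y"): 2, ("C", "Z"): 1,  # outside the circle
    }

    def within_group_share(group, cites):
        """Fraction of the group's outgoing citations that stay in-group."""
        outgoing = sum(n for (src, dst), n in cites.items() if src in group)
        internal = sum(n for (src, dst), n in cites.items()
                       if src in group and dst in group)
        return internal / outgoing

    share = within_group_share({"A", "B", "C"}, citations)
    print(f"{share:.0%} of the circle's citations stay inside the circle")

An editor glancing only at a reference list would never see this concentration, which is precisely why the circle works.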

The papers have all the trappings of science. They look like science. They "quack" like science, if you will. But they are nothing like programmatic science. They are simple regurgitations of a handful of meager ideas.

I'm not alleging any smoke filled rooms or Roswell-esque conspiracies. Read about flattery. It's hard to be "mean" to people who are kissing your butt. Even if the reviewers are trying to be impartial, the social psychology literature suggests that they cannot.

So, there go the numbers like a runaway train. If you confuse success with visibility, you will then seek to be visible. And if you narrowly quantify success and then do everything in your power to light up that scoreboard, then the numbers will follow. What happens to a baseball team when its players begin chasing individual stats?

Science will suffer. Sure, progress will be made. But it will be made in spite of most of the research being done, rather than on the backs of most of the research being done. It's sad, really. It's a sad day when a leading newspaper can publish an admission by a journal editor that they send out a boilerplate letter urging more citations, and it is not a national scandal.

But the fact that it was reported is a sign that the runaway train had better watch out ... there might just be light at the end of the tunnel.

Reference

Begley, S. (2006, June 5). Science journals artfully try to boost their rankings. Wall Street Journal, p. B1.
