
 

Experiment and Replication in the Humanities

Katherine Rowe, Department of English, Bryn Mawr College, Bryn Mawr, PA

 

postmedieval’s crowd review is one of a number of recent experiments by journals in the humanities that test different models of scholarly review and seek to understand the core values they instantiate. In the US in particular, this willingness to engage in such deliberate, well-defined and open experiments has fostered an increasingly lively conversation across fields about the forms, efficacy, consistency and sustainability of peer review in its traditional and future modes. Authors and editors willing to open their review processes to public analysis in this way do a tremendous service to the humanities now, at a moment of rapid change in our means of scholarly communication. Humanists tend to be acutely uncomfortable with the risks involved in such experimentation and with the imperfect outcomes of failed processes. Yet these are essential components of genuinely exploratory work in any field. Humanists are going to need to become more comfortable with both if we are to address the challenges facing scholarly publishing in a sustainable and rigorous way.

With these challenges in mind, perhaps the most valuable–and, to the humanities, least familiar–outcomes of open review experiments are the reviewing websites themselves. Such archives of what Kathleen Fitzpatrick has called “peer-to-peer reviewing” make it possible to validate or challenge–via replication–earlier outcomes (Fitzpatrick, 2011, 32). Humanists rarely invent new protocols and even more rarely view our scholarly processes as artifacts to be tested and validated. We have generally ceded this mode of collaboration and assessment to the sciences. Yet unless we approach open review experiments in this way, making them available for replication and analysis, we cannot ensure the rigor required during periods of rapid change. Without a larger body of evidence to study, we will tend uncritically to repeat a few small and well-known examples.1 With such a body of evidence, a shared vocabulary can develop, allowing for systems-level conversations about the future of peer review (Fitzpatrick and Rowe, 2010). Such evidence also makes possible the kind of systematic analysis being pursued by MediaCommons together with NYU Press: a year-long study to assess the strengths and limitations of peer-to-peer review for the evaluation of scholarship in different disciplines (Howard, 2010).

Looking more closely at the comments archived in postmedieval’s crowd review (in which I participated as a reviewer), I find that they illuminate a number of questions we regularly ask about public, peer-to-peer evaluation. My comments conclude with questions that the experiment did not take up but that strike me as equally pressing.

Will reviewers be frankly critical in a venue that is not anonymous? The comments archived here support the notion that some reviewers can find ways to be respectfully but pointedly critical in public venues, consistent with the outcomes of the Shakespeare Quarterly reviews in 2010 and 2011. At one level this should not be surprising, since respectful dissent is the currency of scholarly authority. We place a high premium on productive disagreement. Since guest-editing the first Shakespeare Quarterly (SQ) open review, I have signed my reviews (when journals permit it) in an effort to test the viability of named reviewing for myself. I have found myself working especially hard to justify negative publication recommendations in a way that is substantive and respectful to authors. This is a labor-intensive process and a high standard to hold to–but a good one for our fields to aspire to. It should be acknowledged, however, that in this crowd review the commenter who self-described most explicitly as resistant, “rdc009,” was also the one with the least identifying handle. Moreover, publication decisions were not at stake in this experiment. Indeed, this experiment was an exemplary case of “developmental review” of the kind one might find at the best conferences or working groups. What made it successful in that regard also makes it less effective as a test of scholars’ willingness to deliver negative publication decisions in public.

Will an opt-in process be as thorough as a traditional review process, in terms of the quality and quantity of commenting? The evidence here suggests yes. The quantity of commentary on individual essays was certainly greater than in a traditional process (again consistent with the results of SQ’s experiments). As a non-medievalist, I am not well placed to judge assessment at the macro level–that is, of an essay’s overall contribution to the field–but I observed a satisfying balance of holistic and granular (paragraph-level) commentary. As one might expect of scholars aware of performing in public and aware that their comments would be archived, responses were substantive and thoughtful. For a non-specialist they were also quite illuminating. This positive effect of reading “over the shoulder” of an expert reviewer from a different field hints at a potential shift in the value of reviewing for future, post-publication contexts. Reviewing of this kind can illuminate an argument’s impact in the field and in the process provide evidence of critical reception (Fitzpatrick, 2011, 23 ff.).

When reviewer responses do not contribute to publication decisions, what incentives do reviewers have to participate? What different dynamics might develop? A prime incentive to participate as a commenter in this review seems to have been an intrinsic interest in, or resistance to, an argument or topic in the special issue. Other factors that motivated reviewing included curiosity about the experiment itself and a commitment to the journal or its editors. All these strike me as sustainable reasons for reviewers to commit labor from time to time, but perhaps not on a monthly basis. Most interestingly, this experiment hints at the possibility that reviewing itself might become assessable and countable. Substantively critical reviews are crucial in establishing the trustworthiness of the process as a means to improve the quality of scholarship field-wide. This fact underscores the value and authority of reviewers, such as rdc009, who can fulfill this role gracefully and well in public. In a public reviewing economy, those reviewers would be sought after for their combination of judiciousness, incisiveness, respectfulness and frankness. As Fitzpatrick observes, if we could find ways to credit the impact of such contributions–perhaps formalizing them as a function of editorial boards, or submitting them as a body of work for reappointment review as we might for book reviews–we would have a real solution to the problem of invisible labor in reviewing (Fitzpatrick, 2011, 23 ff.).

Do different online platforms foster different kinds of commentary? This crowd review set out explicitly to find a “webby” format (Boyle and Foys, 2011). Its results suggest we are not as far along as we need to be in understanding what such positive “webbiness” might mean for scholars. Though the blog platform permitted threading, only two threads actually developed, both with single responses rather than conversations. Reviewers tended towards standalone comments, referencing each other but not engaging in a sustained way. Only two authors posted replies to comments. Perhaps we were imitating a traditional review format? Perhaps if authors and reviewers were to become accustomed to the format, different behaviors would emerge? postmedieval decided to eschew paragraph-level commentary of the kind fostered at MediaCommons–a format intentionally designed by the Institute for the Future of the Book to subvert the usual “top-down” text/comments hierarchy of a blog, though one that can tilt reviewers towards granular rather than holistic commentary (Vershbow, 2006, cited in Fitzpatrick, 2011, 110). Interestingly, early in this experiment commenters requested reference points in the essays themselves, so paragraphs were numbered. As a result, many of us ended up writing paragraph-level commentary anyhow. Yet, because these comments sat apart from the essay itself, the result was a line of responses referring back to paragraphs above, rather than conversational clusters. I had initially not been a fan of paragraph-level commenting, but to my surprise I found myself wishing for that option as a better way to thread comments into the essay while leaving its flow intact. That desire intensified when I later scanned the comments as a set for this reflection. Trying to assess the nature and kinds of commentary archived here without the benefit of crowded and clustered marginalia, I found it harder to analyze how reviewers affiliated and diverged and to identify the key points of contact in each essay. As a non-specialist, I found myself unable to track macro conversations that might have developed around or across the essays. These variations in platform and behavior raise a larger question: do we see reviewing as solo or collaborative thinking? To the extent that we value the latter, we need to keep exploring the formats that best foster it.

Is peer-to-peer reviewing sustainable? Several hopes lie behind the idea that a crowd (“our crowd” of interested experts) might gravitate to an open review. Chief among them is the hope that we may be able to address the challenge of sustainability in traditional peer review by expanding our pool of reviewers to include graduate students, alt-academics, scholars in other fields whose specialties overlap, amateur experts, and so on. postmedieval’s call-to-review brought many more participants than a conventional review process would have. Yet it seems to have only modestly expanded the demographics of potential commentators. The majority of participants were advanced-career academics in the fields of medieval/early modern studies. A few PhD students participated. Only a few academics whose primary scholarly interests fall in media studies took part. No non-academics appear to have participated. How we might successfully expand such a pool, if we wished to, remains an open question.

A number of questions outside the scope of this crowd review remain to be explored as part of a larger, systems-level conversation about peer-to-peer reviewing. Can we issue negative publication decisions in public in a way that will feel respectful and professional to most participants? How does the editor’s role change if a publication decision is opened to a crowd, rather than, as traditionally, determined as a matter of “editor’s choice” (and will authors want to be reviewed that way)? When reviewers of equal authority publicly and strongly disagree about a submission, how will editors address that? I look forward with excitement and gratitude to the experiments that will help us answer these questions.

 

Notes

1 For an uncritical rehearsal of this kind, see Brown (2011). For accounts of open reviewing that assess the available data more seriously and critically, see Cebula (2010) and Wheeler (2011).

 

References

Boyle, J. and M. Foys. 2011. Vision Statement. postmedieval — crowd review, June. (http://postmedievalcrowdreview.wordpress.com/editors-vision-statement/), accessed on 29 November 2011.

Brown, M. F. 2011. Rethinking Peer Review. American Anthropology Association blog, 9 November. (http://blog.aaanet.org/2011/11/09/rethinking-peer-review/), accessed on 29 November 2011.

Cebula, L. 2010. Peer Review 2.0? Northwest History blog, 13 September. (http://northwesthistory.blogspot.com/2010/09/peer-review-20.html).

Fitzpatrick, K. and K. Rowe. 2010. Keywords for Open Peer Review. LOGOS: The Journal of the World Book Community 21(3-4): 249–257.

Fitzpatrick, K. 2011. Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: New York University Press.

Howard, J. 2010. Taking a Closer Look at Open Peer Review. The Chronicle of Higher Education Wired Campus, 11 April. (http://chronicle.com/blogs/wiredcampus/taking-a-closer-look-at-open-peer-review/30877?sid=wc&utm_source=wc&utm_medium=en), accessed on 29 November 2011.

Vershbow, B. 2006. GAM3R 7H30RY 1.1 is live! if:book, 22 May. (http://www.futureofthebook.org/blog/archives/2006/05/gam3r_7h30ry_will_go_live_toda.html), accessed on 29 November 2011.

Wheeler, B. 2011. The Ontology of the Scholarly Journal and the Place of Peer Review. Journal of Scholarly Publishing 42(3): 307–323.
