Not Far From, but Close to, the Madding Crowd Review
Eileen A. Joy, Department of English, Southern Illinois University-Edwardsville, Edwardsville, IL
I wandered lonely as a cloud
That floats on high o’er vales and hills,
When all at once I saw a crowd,
A host, of golden daffodils;
Crowds and mobs have a bad rap. It will be part of my aim here to rehabilitate the reputation of the crowd and also to argue that we are always already a crowd — we do not research or think or write in a vacuum, and we live with crowds of voices in our head every day: this is not schizophrenia but the reality of a world in which we have become much more aware of the ways in which our subjectivity is always inter-. As the neuroscientist Francisco Varela once wrote, the ‘mind is not in the head’: ‘the constitution of a mind is always concurrent with the extended presence of other minds in a network’ (Varela, 1998).1 We do not think and write in a vacuum, as I have just stated, so why do we want our work to be reviewed in one? Further, isn’t it about time we began to set aside and even refuse the image of the humanist as one who (heroically, or tragically) works alone at her desk with her books and austerely conceptualized interlocutors, with her career supposedly hinging on the praise of her ‘singular’ genius above her peers? We might recall that the term ‘genius,’ in the Latin, originally meant the ‘tutelary’ god or attendant spirit who ushered persons into the world, watched over them, and then conducted them into the afterworld. We might work harder to serve as each other’s tutelary and attendant spirits: genius, I truly believe, is a collective endeavor.
In all honesty (and I know I am not an impartial judge), I am more than happy with how the crowd review of postmedieval’s Becoming Media issue turned out. Although the crowd review technically ran from mid-July through mid-September in 2011, visitors continue to migrate to the crowd review site to, I imagine, glance here and there through the original essays, review comments, and the editors’ brief vision statement, and as of February 5, 2012, the site has had over 6,300 page views.2 The essays themselves received almost fifty sets of engaged comments and critique, totaling 24,000 words, which is almost half of the total word count of the issue’s six original essays. In my mind, this is a success, if a qualified one, but why, more particularly, do I think that? After all, sheer numbers alone do not really answer the question of why the results of our experiment in crowd review might make a compelling case for the value of more open, transparent, and collaborative forms of academic review, and so I will highlight here the more specific reasons why I think this crowd review was valuable and how it might point a way forward at a time when, as Kathleen Fitzpatrick has written, the production of knowledge is still very much ‘the academy’s very reason for being,’ yet at the same time, ‘if we cling to an outdated system for the establishment and measurement of authority at the very same time that the nature of authority is shifting around us, we run the risk of becoming increasingly irrelevant to the dominant ways of knowing of contemporary culture’ (Fitzpatrick, 2011b, 17).
First, we have to reflect that postmedieval’s crowd review benefited from a lot of stage management in advance — indeed, the amount of work that went into both setting up the crowd review and also reaching out to the specialized and more general academic public (through individual emails, weblog posts, Facebook and Twitter announcements, list-serv messages, and the like) was intensive and occasionally exhausting, but ultimately rewarding. I’m not entirely sure that it is an experiment that we could repeat with every single issue of postmedieval that we publish (simply because of the sheer labor required to make any crowd review substantively useful for the editors and authors involved), and this reminds me as well of something Fitzpatrick has been saying repeatedly: that in order for more open, collective (and even so-called post-publication) forms of review to really work, ‘we need to value work done on behalf of a community as much as we do work that serves ourselves,’ and we ‘have to stop displacing our judgment onto other entities, like journals and presses, and instead do the difficult work of evaluation ourselves’ (Fitzpatrick, 2011a). And what this also means, of course, is that we have to work collectively to build energetic and invested communities (see Fitzpatrick, 2011b, 41–43), involving peer-to-peer relationships both within and across disciplines, but also relationships that extend to institutional and non-institutional spaces beyond the university proper. Libidinal as well as gift economies are required, and like happiness, must be worked at, as an activity.
Second, the essays for postmedieval’s Becoming Media issue were solicited in advance by the issue’s two editors (Jen Boyle and Martin Foys) and they received some expert review by them before emerging into the crowd review context, and some of the essays may have received comments in other contexts prior to being received by Jen and Martin. I know, for example, that Whitney Trettien blogged and tweeted portions of her essay prior to the official crowd review period and she also maintains a public wiki where she keeps all of her notes, annotations, and bibliography relative to her various writing projects. I belabor this point because it was not the mission of this crowd review to ask potential reviewers to assess whether or not these essays were worth publishing. To a certain extent, that had already been decided by the issue’s editors, although, just as with an edited volume of essays, all of the authors involved understood that the crowd review process did serve as a form of ‘external’ review of their work for this special issue of the journal, and they all revised accordingly with Jen and Martin’s expert guidance — but also, one imagines, with their own sense of which comments best served the purposes of their separate essay projects: in other words, the authors still maintained sole control of the overall direction and content of their individual essays, and that is important to me as someone who would like to further enlarge the public domains within which scholarly projects (conceptualized, for me, as desiring-machines) can have larger and freer rein.
With Fitzpatrick, I envision a future in which more work is published without too many editorial restraints prior to publication, and post-publication review, where work rises and falls according to its merits (or lack thereof), as determined by a disciplinary community, or according to whoever might need it at any given time for whatever reasons, becomes the measure of ‘achievement’ (a term in sore need, I might add, of a more generously contoured definition and field of play).
Returning to the idea of peer-to-peer review as a gift economy, Bonnie Wheeler draws attention in her essay, ‘The Ontology of the Scholarly Journal and the Place of Peer Review,’ to what she sees as the admirable idea of ‘a kind of 1960s e-commune,’ as proposed by Fitzpatrick in her book Planned Obsolescence, in which everyone participates in the process of producing and evaluating scholarly work (Wheeler, 2011, 317). This idea, I might add, inspired postmedieval’s crowd review, especially Fitzpatrick’s expansive re-defining of what we mean by ‘peer,’ to include not just the specialized experts of one’s narrow sub-fields but also members of the broader intellectual community, both within and outside the university proper. (It must be admitted here, however, that the 50 or so persons who left comments on our crowd review website were all either professors or graduate students, although some of them work in fields unrelated to either medieval or early modern studies. We cannot say who may have looked in but never commented.) Our crowd review was also inspired by Fitzpatrick’s idea that more open processes of peer review might help us to better model Bill Readings’ University of Thought, where ‘Thought does not function as an answer but as a question’ (Readings, 1996, 159).
You can pile on the evidence in both directions (keep peer review in the humanities just the way it is or trade it in for something completely different) and what it adds up to for me is very simple: we need external review of our work (and isn’t this also like stating the obvious: we need readers?), but the processes and ambits of review can be improved and expanded. For me, the ultimate virtues of postmedieval’s crowd review experiment are thus the following:
- The process is not just open and transparent (thereby ensuring, I really believe, better behavior on the part of reviewers who are, in a sense, performing their function in a public space, or e-commons) but is, even more importantly, collaborative: authors and reviewers were able to converse back and forth on specific points raised in the review comments. One cannot stress enough that this is a mutually beneficial exchange, one in which both the author and the reviewer work toward enlarging each other’s domain of thought and expertise. (It must be noted that in our crowd review experiment, only two of the six authors spent time responding to posted comments, so this is something I would want to improve the next time around.)
- Instead of receiving just one or two sets of formal external review comments, which typically take two or more months for an author to receive, and with typically little bargaining room as regards accepting or rejecting these comments (and which, in traditional peer review, are literally the only mechanism by which anything gets published at all), the crowd review begins with the premise that the work being reviewed already has a legitimate place at the table of published-if-still-in-progress scholarship (the crowd review itself is a form of publication, and each of the essays in the Becoming Media issue had been solicited ahead of time by the issue’s editors, so public online comments were not directed at whether or not these essays should eventually receive publication in print, and this cannot be stressed enough). The author receives multiple sets of comments, often very quickly, which she can then sift through for the most meaningful and helpful criticisms and/or suggestions for further research and thought. The crowd review, in other words, models scholarship as a richly co-productive and inter-subjective process, not just an end product that supposedly leapt out of one person’s mind with the assistance of one or two secret ‘supervisors.’
- Because the crowd review throws its net as widely as possible, in terms of individual reviewers but also fields and disciplines, it can involve reviewers from outside one’s discipline, which actually helps all of us to be better communicators of our expertise and subject areas to a wider audience, both within and outside of the university proper. It helps us to make the case that what we do ‘in here’ (within the university, within academic journals, within scholarly books, within conferences, and the like) has something to say to the larger, public intellectual community, the members of which are always more varied and dispersed than we often imagine. I don’t think we need to spend time wringing our hands, either, over who is really reading us, or not: the important thing is to make our thought and work as widely available as possible.
- The ideal crowd review does not distinguish between nor hierarchize specialist versus non-specialist comments, faculty versus graduate student comments, and so on. The crowd review, therefore, models a learning process in which you never know where your best ideas (or advice for revision) might come from. Everyone has something to teach someone else. Yes, some ‘expert’ reviewers (and they were invited to participate, be assured) might have access to certain forms of knowledge that are hard-won and not readily known to everyone else (and potentially very helpful to the author who may be wishing to succeed within the ambit of a certain specialized audience), but the bottom line is that the crowd review models a domain of knowledge/learning in which ‘rank’ or ‘location’ within the academy is beside the point. Ultimately, in the ‘best case’ scenario, an author would receive advice from the narrowly-defined specialist most intimately familiar with the author’s subject matter and methodologies, as well as from other scholars who work within the same time period (but on different subjects), from graduate students who are typically knee-deep in all sorts of reading lists and developing specialized projects of their own, and even from scholars working in other time periods and fields who might share certain theoretical and methodological concerns with the author and can give more meta-theoretical advice. This sort of diversity is not possible in traditional peer review.
- The crowd review enables flexibility of reviewing, which leads to more people participating. Because of the ‘webby’ format (essentially, a weblog interface), commentators can decide to review an entire essay or they can just comment on a portion of an essay to which they feel they can add something of value, critical or otherwise. This is immensely helpful as regards getting people to contribute labor that is, as we all know, mainly uncompensated.
- The crowd review unfolds and proceeds in a non-traditional space, the interwebs, where those who are not participating as reviewers can at least look in and see how the process works and learn from that; in short, the crowd review offers a model of processual, collaborative scholarship which everyone, vocally or silently present, can gain something from, even if it is just to learn how to professionally critique someone else’s work or how scholars think out loud in dialogue with each other.
- The crowd review makes visible what has always been true about the intellectual and scholarly life, but which is often only quietly articulated in the notes of acknowledgment in articles and books: we think and work together; our brains are already crowd-sourced, so why not make that fact more tangible?
This last point actually brings me to what I think might be the most valuable aspect revealed in our crowd review experiment: Emergence. What I see as the real hope of this crowd review is that it will help all of us to see that our scholarship is emerging all the time (and not in a linear or chronological fashion, either): it can never really be only an end-product (the article or the book or database or edition or whatever).
We could never really locate the beginnings of our thought and intellectual projects, just as we could never locate their end(s). Instead of thinking of our c.v.’s as documents that reflect a list of publications that mark the discrete stages of our careers, we might reflect that the crowd review itself encourages us to remember that no idea is ever settled, and no article or piece of scholarly work is ever ‘finished’ (where did it even really begin?), and more importantly, we do not work alone in solitary cells (although sometimes our studies and offices feel that way), in opposition to each other (hoping to outdo each other, race each other to some finish line, trump each other’s arguments/reasoning, ‘scoop’ each other on new methods and texts, etc.), but rather collectively bring ideas to the surface of a shared ground and light. If we can embrace the idea that the university’s mission should be to seek to democratize and enlarge what it is possible to think (while, of course, also encouraging dissensus and critique), then I think we move closer to the heart of what we might call a heterotopic multiversity, one that might attend more deeply to the question of space, which is almost more critical than time when it comes to our personal and scholarly lives.
What room do we have to think — to live, also, to breathe, to move about — and how can we multiply and re-dimensionalize and extend the spaces within which it becomes, for the largest number of persons possible, more possible to think, and to work, and to meaningfully communicate one’s ideas, to be heard, and to hear in return? This is not only a question of personal freedom and creativity (and thus also of personal happiness, and personal thriving), but also of care, of how we might work harder to care, not just for our own work (and whether it might ‘succeed’ or ‘fail’ according to the traditional benchmarks for determining such matters), but for the work of others whose subject matter and methodologies might even be unattractive (at first glance) to us. A more open, crowd-sourced process of peer review, then, won’t be about what does or doesn’t ‘make the mark’ or about whatever does or does not get published in whatever journal or book (traditional print publication might almost be beside the point); rather, it would be an opportunity to build together a larger, more expansive, and more hospitable ‘commons’ for a more baroquely appointed house of thought, as well as a more open, thriving humanities. It won’t be about running away from the madding crowd, or, cadging from Emerson, keeping ‘with perfect sweetness the independence of solitude,’3 but rather, about cultivating better the larger company we have always kept. And who knows where that might lead?
1. See also Varela, Thompson, and Rosch (1991) and Varela (1999).
2. You can see the archived crowd review here: http://postmedievalcrowdreview.wordpress.com/. It is important to note here that this number is culled from WordPress.com’s statistics for the crowd review website, and that it represents the total number of page views, which might be one person re-visiting the same page or several pages several times during a day, a week, and so on, and may also represent the essay authors and issue editors themselves checking in to see how the crowd review was progressing, etc. Moreover, some of the page views may have come from spambots trawling the internet or from persons who landed there accidentally and then quickly moved on. This is all just to say that 6,300 page views does not equal 6,300 separate interested and invested persons visiting the crowd review site; nevertheless, even subtracting the probable repeat visits and accidental and random ‘hits,’ for a special issue of an academic journal devoted to medieval cultural studies, the overall number of page views feels significant, and given that over fifty persons left comments, many of them quite detailed and substantive, it would appear that postmedieval’s crowd review experiment generated a substantive and meaningful amount of ‘traffic.’
3. I must note here that, regardless of my enthusiasm for crowds, for the collective, and for collaborative forms of scholarship, thanks to Michael Cobb, I also appreciate the importance of thinking about ‘the isolated figures of the “single” who are misconstrued as lonely figures. They might not be lonely — they might just want to be antisocial’ (Cobb, 2007, 455). We have to make room, too, in the university, for those who don’t want to run with crowds.
Boyle, J. and M. Foys. 2011. Vision Statement. postmedieval — crowd review [website], http://postmedievalcrowdreview.wordpress.com/editors-vision-statement/.
Cobb, M. 2007. Lonely. In After Sex? On Writing Since Queer Theory, eds. Janet Halley and Andrew Parker [special journal issue]. South Atlantic Quarterly 106(3): 444–457.
Davidson, C. 2011. Does Digital Publishing Need Peer Review? [weblog post]. HASTAC.org, July 20, http://hastac.org/blogs/cathy-davidson/does-digital-publishing-need-peer-review.
Fitzpatrick, K. 2011a. Academic Publishing and Zombies. Inside Higher Ed, September 30, http://www.insidehighered.com/news/2011/09/30/planned_obsolescence_by_kathleen_fitzpatrick_proposes_alternatives_to_outmoded_academic_journals.
Fitzpatrick, K. 2011b. Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: New York University Press.
Readings, B. 1996. The University in Ruins. Cambridge, MA: Harvard University Press.
Varela, F., E. Thompson, and E. Rosch. 1991. The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press.
Varela, F. 1998. The Cosmos Letter: Why the Mind is Not in the Head [exposition address]. http://www.expo-cosmos.or.jp/letter/letter12e.html.
Varela, F. 1999. Ethical Know-How: Wisdom and Cognition. Stanford, CA: Stanford University Press.
Wheeler, B. 2011. The Ontology of the Scholarly Journal and the Place of Peer Review. Journal of Scholarly Publishing 42(3): 307–323.