Stand back, all! Something takes shape within the swirling mist!

Part of the cover of the album Ptoof! by the Deviants

The significant portion of the wraparound poster that formed the cover of the first album by the Deviants, Ptoof! (Underground Impresarios, 1968). I didn’t think of this post just so as to use this image, but I could have… And of course, for those who know, it’s a memorial of sorts to yet another dead rocker, the inimitable and scurrilous Mick Farren, who preceded Lemmy (and now David Bowie, it’s like some musical plague out there) to the great rock’n’roll swindle roundup by dying on stage a couple of years ago already. That’s gone fast…

So now, after that interlude, back to the second half of that post about journals and publishing, the part I originally wanted to get to. Geoffrey Tobin put his finger on the heart of the matter, as have so many, when he pointed out in a comment on the previous one that scholars don’t usually get paid for publishing. We do the research as part of our salaries, usually, or from whatever grant pays our salaries while we don’t do our jobs so as to get some research done; we have to publish the outcomes of it for professional recognition and advancement; we are what you’d call a captive market. At the other end, the publishers have to stay in business and ideally make a profit, and so they have an interest in capturing revenue that we don’t. But the messy bit is the middle ground, and most especially peer review, which has to be done by academics but, traditionally at least, is neither recompensed nor of much professional use to us. It’s good for institutional or departmental prestige if we can say that we act as referees for presses people have heard of, I imagine, but our employers would probably rather that we did it on our own time (in as much as academics can calculate such boundaries) or didn’t do it at all, so as to deliver the maximum for our institution. Nonetheless, academic publishing couldn’t go on in its current model without peer review, and we all want to get published, so we like to help publishers when they ask, and so it struggles on. The same kind of thing can be said about actually editing journals or book series and so forth; it’s vital work, but it’s not usually for our employers, so it largely goes unrewarded.

Well, in Australia at least people have started making a noise about this, demanding that review work be recognised in their national research assessment, as reported by Alice Meadows on the Wiley blog (them again) here. That would be one way, and a good one I think, though it will still surprise me if it’s adopted, and still more so here in England (unless it were review work for England-based journals; but almost all journal publishers are multinationals now…). But there has also lately emerged another development that might actually offer a way forward. I think it has come out of automated journal submission systems like ScholarOne or Open Journal Systems, but we now have two organisations that are trying to turn academic labour like this into a marketable service. The first is ORCID, a service offering something like a DOI for researchers, rather than for research, so that links to projects and manuscript submissions and so on can all be aggregated. They say:

“ORCID is an open, non-profit, community-driven effort to create and maintain a registry of unique researcher identifiers and a transparent method of linking research activities and outputs to these identifiers. ORCID is unique in its ability to reach across disciplines, research sectors and national boundaries. It is a hub that connects researchers and research through the embedding of ORCID identifiers in key workflows, such as research profile maintenance, manuscript submissions, grant applications, and patent applications.”

Well, I’m pretty sure our names already worked for this, but ORCID is interested in tracking things that our institutions have generally not been, and it is also tracking the work we do for the industry at large, not just for our institutions.
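By way of illustration, here is a minimal sketch of what hanging everything off one identifier looks like in practice. The v3.0 public API and the JSON layout here are my assumptions rather than anything quoted above, and the iD is ORCID’s own long-standing demo record for a fictitious researcher, not a real person:

```python
# A minimal sketch of ORCID-style aggregation, assuming the v3.0 public
# API at https://pub.orcid.org/ and its JSON layout (my assumptions).
# The iD below is ORCID's demo record (the fictitious Josiah Carberry).
import requests

orcid_id = "0000-0002-1825-0097"  # ORCID's example iD, not a real researcher

resp = requests.get(
    f"https://pub.orcid.org/v3.0/{orcid_id}/record",
    headers={"Accept": "application/json"},  # the API also serves XML
    timeout=30,
)
resp.raise_for_status()
record = resp.json()

# Everything hangs off the one identifier: publications, funding, and,
# crucially for this post, peer-review activity as a section of its own.
activities = record["activities-summary"]
for section in ("works", "peer-reviews", "fundings"):
    groups = (activities.get(section) or {}).get("group", [])
    print(f"{section}: {len(groups)} grouped item(s)")
```

Run against a real iD, the same call would pull in refereeing alongside publications, which is exactly the otherwise invisible service work at issue here.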

And then, more interestingly in some ways, there is also Rubriq, a portal that manages peer review of manuscripts by maintaining as large a database of potential reviewers as possible, thus exceeding the personal networks that usually limit the effective ‘blindness’ of peer review in the humanities, and by actually paying those reviewers for prompt review, even if not very much. This has caused some controversy, but apparently it does get the reviews in on time. It’s not an economically viable payment, really, for the work involved, less than we’d get for contract teaching, but it does at least signify that the work is worth something. Rubriq, in turn, then charges the journals it serves for access to its reviewing service.
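Why does the sheer size of the database matter for blindness? A hypothetical sketch (my invention, not Rubriq’s actual matching logic) of the kind of conflict-of-interest filtering a large reviewer pool makes possible, which a small personal network simply can’t sustain:

```python
# A hypothetical sketch of blind-review matching over a large reviewer
# pool: exclude anyone sharing an institution or a co-authorship with
# the author, then sample from whoever is left in the right field.
# (Not Rubriq's actual algorithm; purely illustrative.)
import random
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    institution: str
    coauthors: set[str]
    fields: set[str]

def pick_blind_reviewers(pool, author, author_institution, field, n=3):
    """Sample n reviewers in the right field with no obvious tie to the author."""
    eligible = [
        r for r in pool
        if field in r.fields
        and r.institution != author_institution
        and author not in r.coauthors
    ]
    return random.sample(eligible, min(n, len(eligible)))

pool = [
    Reviewer("Dr A", "University X", {"Dr Author"}, {"medieval history"}),
    Reviewer("Dr B", "University Y", set(), {"medieval history"}),
]
# Dr A is excluded (co-author and same institution checks); Dr B is picked.
print(pick_blind_reviewers(pool, "Dr Author", "University X", "medieval history", n=1))
```

The bigger the pool, the more candidates survive that filtering, which is the point: with only a personal network to draw on, the filter empties the list.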

Now this is an inversion of the usual revenue flow in academic publishing, which of course all goes to the publishers. Instead, here, while the publishers are still the point where money enters the system, there is a trickle-down to the academy. It’s tiny, of course, if ideologically significant, but together with ORCID it offers the possibility of an outside assessment of our service work, usually unrecognised, in terms of quality and value, which we might present to our employers, or through them to our funders, in England of course usually somehow the state. Of course the cynical maxim, “If you’re not paying for the product, you are the product,” applies to both these models. ORCID may be a non-profit but its operating revenues are still earned by the participation of recognition-hungry academics who don’t themselves expect to get paid, and it’s those academics who give ORCID anything to offer. Rubriq likewise only has something to offer if it genuinely has lots of people on board from all over the place, and while those people are getting paid, without them Rubriq has no product.

But still, maybe this works? If we wind up working on a commission basis for new third parties who enable peer review (which would become better and faster), whom the publishers pay in turn, and then subscribers continue to pay the publishers, that seems to me potentially to break the current squeeze, in which the only way we can meet expectations is to do ever more work, for free, that we don’t have time for. Our service work could be quantified, valued even, and counted into our assessments. It would be, after all, a form of outside consultancy. Meanwhile the publishers, whose costs would now be higher, would maybe make less per unit but might well have more units and could compete for quality in new ways. It still wouldn’t balance, but it would balance better. Only thing is, I’m still not sure how we pay for open access…

9 responses to “Stand back, all! Something takes shape within the swirling mist!”

  1. There is one aspect which I believe your two posts do not touch upon: the fact that literally no one reads the many articles in journals or book collections. Estimates are that only 20% of all scientifically published papers get read at all, let alone quoted. The good news, however, is that books are read; which means that writing books is what academics should do.
    The challenge here is of course that books do not count as much as research sliced out in paper-thin publications! At least, this is the case here in Denmark. Instead of experimenting with all sorts of systems of getting peer reviewing organised/paid for… In the old days (last millennium, when I was young) we lectured. Each lecture represented a paper, aka a chapter in a book; that was it…

    • Literally no-one? I answer this from the Institute of Historical Research, where I’m chasing down four different journal articles in two different London libraries that I can’t get any other way today (two non-electronic, of which one from a journal no longer publishing, and two still inside moving walls), so I might challenge that! An awful lot of my reading, and of what I set my students, is still journal articles. I agree that outside the Academy they basically go unread, because who can afford a book’s price for thirty pages? And even inside the Academy a great many individual articles may go unread, and it’s been a very long time since I worked through a journal issue cover to cover just to see what was going on, though that was a luxury I used to enjoy. But inside the Academy, where books are too long to read and journal articles go through much tougher peer review anyway, the article is still a primary means of scholarly communication, I think!

  2. When I try to find sources behind claims like “only (some small percentage) of scholarly articles are read/cited within n years,” they seem to be based on work in natural sciences which have transitioned to digital publication of short pieces. That makes it easy to get metrics by cribbing from Google Scholar and other citation-tracking software and waving one’s hands a lot, but it does not work in disciplines whose publications are not included in the databases which the software draws on, and where people publish longer things less frequently. In Classics and Assyriology it’s not unusual for something to be published on a topic every ten years, and that something may well be a paper book, so expecting a citation in an e-journal within a year or two is not reasonable. Not to say that asking which research other scholars engage with is not a good idea, just that the methods of answering the question which are often suggested do not seem appropriate for my field.

  3. Well, this is my favourite one of those studies, because it gives figures by discipline and links to sources. You’re right even here, though, that the figures are set within a five-year window, by which time the action is dying off in STEM and just picking up in the humanities, so it’s no wonder the figures differ so much. This difference has already been observed in a study on journal half-lives done for the British Academy that I’ve talked about here. It showed firstly that on average journals peak in citation after about twice as long in humanities and social sciences as they do in STEM subjects—so a ten-year window would probably give you a very different picture—and secondly that the humanities especially have a long tail of articles that go on being cited for years and years. Probably everybody who works in a humanities subject can think of ‘that’ article from the 1970s that you always have to cite because this bit of the field started there, and so on. (For me it was usually written by David Herlihy, Nicholas Brooks or Patrick Wormald.) That kind of staying power does exist in STEM but it’s much, much rarer. So even a twenty-five-year window wouldn’t change much more, though the difference would still be dropping. And of course books are a different thing again, much less well counted and very rare in STEM but ubiquitous in HSS. So I think there are figures on which to found such conclusions, but you’re right that they could certainly be an awful lot better.

    (Also it looks as if you were trying to include a link or something. If you let me know what that HTML was supposed to do I’ll fix it for you if you like.)
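    To put that window point in figures: a toy model, with made-up numbers rather than anything from that study, in which citations simply decay exponentially after publication (crudely ignoring the slower humanities ramp-up), shows how much a fixed assessment window can hide:

```python
# Purely illustrative numbers, not the British Academy study's: suppose
# citations to a journal decay exponentially after publication, with a
# half-life of 5 years in STEM and 10 in the humanities, and ask what
# share of all eventual citations a fixed assessment window captures.
import math

def share_within(window_years: float, half_life: float) -> float:
    """Fraction of lifetime citations falling inside the window,
    for exponential decay with the given half-life."""
    decay = math.log(2) / half_life
    return 1 - math.exp(-decay * window_years)

for window in (5, 10, 25):
    stem = share_within(window, half_life=5)
    hums = share_within(window, half_life=10)
    print(f"{window:>2}-year window: STEM {stem:.0%}, Humanities {hums:.0%}")

# 5-year window: STEM 50%, Humanities 29% -- the same journal looks far
# less cited in the humanities simply because the window closes too soon;
# at 25 years the gap has narrowed but not vanished.
```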

    • Bother, I should amend that: that British Academy study was not counting citations but downloads. We all know that downloading something isn’t reading it but it is still a marker of interest of some kind, even if not quite the one I implied. Whether the same logic would also apply to citations is another matter of course. Sorry to mislead.

    • The first article on bibliometrics which comes up in my Articles folder is:
      Paul Jarvey, Alex Usher and Lori McElroy, Making Research Count: Analyzing Canadian Academic Publishing Cultures (Higher Education Strategy Associates, June 2012).
      I remember it being slightly contrarian, arguing that the idea is not the anathema that many humanists consider it.

  4. Yes, in my institution ORCID has suddenly become a pseudo-compulsory thing, which initially many took as another cynical metrically-driven move by the faceless wo/men, but, actually, having signed up I would say it seems quite good. And it does let you enter things that then become ‘countable’ that other metric-based services don’t seem built to deal with (e.g. try getting Scopus to include book chapters and other principally HSS outputs…). I agree with your underlying point above, i.e. that unless we figure out ways of measuring what we actually do (ways that we’re comfortable with and which suit our disciplines, instead of allowing ourselves to be led along like sumpter beasts by STEM-based assumptions), all that work remains invisible, and therefore unrewarded, and remains part of the ‘black market economy’ of academia. [Much as feminists have long argued that women’s contribution in the home is ‘work’ despite not being undertaken in a formal context of employment, and must be considered in models of economic activity, etc., if those models are to be properly representative.] Conclusion? Metrics matter, as long as you don’t make them your principal motivating or policy-driving force.

    • Well, I haven’t touched ORCID since I first signed up to it, but there has lately been some chatter about it in my institution as well, so the insider perspective there is reassuring!
