Social software and academic reviews
I don’t really know why, but I seem to be spending a lot of time at the moment reviewing proposals and contributions for conferences and publications. And whilst there is much to be learned from all the ideas being put forward, it is time-consuming and sometimes feels like a very isolated and perhaps archaic process.
I find it difficult to decide on the standards or criteria I am reviewing against. How important is clarity of thinking, originality, creativity? How important is it that the author includes copious references to previous work? Are we looking for depth or breadth? How important is the standard of English, particularly for those writing in a second or third language?
In this world of social software the whole review process seems somewhat archaic. It relies very much on individuals, all working in isolation. People write an abstract according to a call for proposals (and I am well aware of how difficult it is to write such calls – unless of course it is one of those multi-track conferences which just include everything!). The proposals are then allocated to a series of individuals for blind review. They do their work in isolation and then, according to often subjective criteria, the proposal is accepted or rejected.
OK, sometimes there is the opportunity to make a conditional acceptance based on changes to the proposal. And of course, you are encouraged to provide feedback to the author. But all too often feedback is limited, and pressure of time prevents organisers from allowing a conditional acceptance.
How could social software help with this? As usual, I think it is a socio-technical solution we need to look for, rather than an adoption of technologies per se. Most conferences have adopted software to help with the conference organising and review procedures, but, as happens all too often, that software has been developed to manage existing processes more efficiently, with no thought given to how we could transform practices.
One big issue is the anonymity of the review procedure. I can see many reasons to support this, but it is a big barrier to providing support in improving submissions. If we moved to non-blind reviewing, then we could develop systems to support a discourse between submitters and reviewers, where both become part of the knowledge creation process. An added benefit of such a discourse could be to clarify and make transparent the criteria being used for reviews. Reviewers would have more of a role as mentors rather than assessors or gatekeepers.
This would not really require sophisticated technological development. It would really just need a simple booking system to arrange a review and feedback session, together with video, audio or text conferencing functionality (the sketch at the end of this post gives an idea of how little that involves). More importantly, perhaps, it might help us in rethinking the role of individual and collective work in the academic and scholarly forms of publishing and knowledge development.

I suspect a considerable barrier is the idea of the ‘Doctor Father’ – that such a process would challenge the authority of professors and doctorate supervisors. My experience, based on talking to many PhD students, is that the supervisory role does not work particularly well. It was developed when the principal role of universities was research, and was designed to induct students into a community of practice as researchers. With the changing role of universities, plus the fact that many students are no longer committed to a long-term career in academia (even if they could get a job), such processes have become less than functional. Better, I think, to develop processes of support based on wider communities than the narrow confines of a single university department.
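To show how little machinery this would actually need, here is a rough sketch of such a booking system in TypeScript. It is purely illustrative: all the names (ReviewSession, SessionBook, schedule and so on) are my own inventions, not the API of any existing conference system, and a real implementation would of course need persistence, authentication and integration with a conferencing tool.

```typescript
// A hypothetical, minimal model of a review-and-feedback booking system.
// All names and structures are illustrative assumptions, not a real API.

type Channel = "video" | "audio" | "text";

interface ReviewSession {
  submissionId: string;
  reviewerId: string; // non-blind: the reviewer is known to the author
  start: Date;
  channel: Channel;
  notes: string[]; // a running, shared record of the reviewer-author discourse
}

class SessionBook {
  private sessions: ReviewSession[] = [];

  // Book a session unless the reviewer already has one within the same hour.
  schedule(
    submissionId: string,
    reviewerId: string,
    start: Date,
    channel: Channel
  ): ReviewSession | undefined {
    const clash = this.sessions.some(
      (s) =>
        s.reviewerId === reviewerId &&
        Math.abs(s.start.getTime() - start.getTime()) < 60 * 60 * 1000
    );
    if (clash) return undefined;
    const session: ReviewSession = { submissionId, reviewerId, start, channel, notes: [] };
    this.sessions.push(session);
    return session;
  }

  // Record feedback so the criteria in use stay visible to both sides.
  addNote(session: ReviewSession, note: string): void {
    session.notes.push(note);
  }
}

// Usage: book a video session and record a first piece of feedback.
const book = new SessionBook();
const session = book.schedule(
  "paper-42",
  "reviewer-7",
  new Date("2025-06-01T10:00:00Z"),
  "video"
);
if (session) {
  book.addNote(session, "Criteria: originality weighted over breadth of references.");
}
```

The point of the notes field is that feedback becomes part of a shared, ongoing record rather than a one-off verdict – which is exactly the shift from gatekeeping to mentoring I am arguing for.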