Please add comments and discuss this paper – the liveliness of the discussion will help us decide the most suitable papers to be presented at Alt-HCI in September.
Abstract: Peer reviewing of papers is the mainstay of modern academic publishing, but it has well-known problems. In this paper, we take a statistical modelling view to expose a particular problem with using selectivity measures as indicators of conference quality. One key problem with conference reviewing is the absence of a useful feedback loop between the referees' assessments of accepted papers and those papers' actual importance, acceptance and relevance to the audience. In addition, we make some new criticisms of selectivity as a measure of quality.
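To see informally why selectivity can be a weak quality signal, consider a toy simulation (this is an illustrative sketch, not the authors' model): if referee scores are true paper quality plus substantial reviewer noise, then even a highly selective conference accepting the top-scoring 20% will admit many papers outside the true top 20%. All quantities below (quality distribution, noise level, acceptance rate) are assumptions chosen for illustration.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

N = 1000                # number of submissions (assumed)
NOISE_SD = 2.0          # reviewer noise, larger than the quality spread (assumed)
ACCEPT_N = N // 5       # a "selective" 20% acceptance rate (assumed)

# True paper quality, standard normal by assumption.
quality = [random.gauss(0.0, 1.0) for _ in range(N)]

# Referee score = true quality + independent reviewer noise.
scores = [q + random.gauss(0.0, NOISE_SD) for q in quality]

# The conference accepts the top-scoring 20% of submissions.
accepted = set(sorted(range(N), key=lambda i: scores[i], reverse=True)[:ACCEPT_N])

# Which papers are genuinely in the top 20% by true quality?
truly_top = set(sorted(range(N), key=lambda i: quality[i], reverse=True)[:ACCEPT_N])

# Fraction of accepted papers that really belong in the top 20%.
overlap = len(accepted & truly_top) / ACCEPT_N
print(f"Fraction of accepted papers in the true top 20%: {overlap:.2f}")
```

Under these assumptions the overlap is well below 1: the acceptance decision is only loosely coupled to true quality, so a low acceptance rate by itself says little about the quality of what was accepted.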
This paper is literally a work in progress, because the 2012 BCS HCI conference itself will be used to close the feedback loop by connecting the reviews of the papers with your (the audience's) perceptions of them. At the conference, participants will generate the results of this work.