How good is this conference? Evaluating conference reviewing and selectivity

Paper #154 — Harold Thimbleby and Paul Cairns

download full paper

Please add comments and discuss this paper – the liveliness of the discussion will help us decide the most suitable papers to be presented at Alt-HCI in September.

Abstract: Peer reviewing of papers is the mainstay of modern academic publishing, but it has well-known problems. In this paper, we take a statistical modelling view to expose a particular problem in the use of selectivity measures as indicators of conference quality. One key problem with the conference reviewing process is the lack of a useful feedback loop between referees’ judgements of the accepted papers and those papers’ importance, acceptance and relevance to the audience. In addition, we make some new criticisms of selectivity as a measure of quality.

This paper is literally a work in progress, because the 2012 BCS HCI conference itself will be used to close the feedback loop by making the connection between the reviews provided on papers and your (the audience’s) perceptions of those papers. At the conference, participants will generate the results of this work.
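
To make the statistical point above concrete, here is a minimal simulation sketch. It is not the model from the paper (read the full paper for the authors’ actual analysis), and the submission count, panel size, acceptance rate and noise levels below are all arbitrary assumptions. It shows how, as review scores get noisier, a highly selective programme contains fewer and fewer of the genuinely best papers:

```python
# A minimal sketch (not the authors' model): simulate noisy peer review to
# show why a low acceptance rate need not imply a high-quality programme.
# All parameters here (paper count, noise level, panel size) are assumptions.
import random

random.seed(1)

N_PAPERS = 400        # submissions
N_REVIEWERS = 3       # reviews per paper
ACCEPT_RATE = 0.25    # the conference's "selectivity"

def simulate(noise_sd):
    """Return the overlap between the accepted set and the truly-best set."""
    true_quality = [random.gauss(0, 1) for _ in range(N_PAPERS)]
    # Each paper's score is the mean of a few noisy reviews of its true quality.
    scores = [
        sum(q + random.gauss(0, noise_sd) for _ in range(N_REVIEWERS)) / N_REVIEWERS
        for q in true_quality
    ]
    k = int(N_PAPERS * ACCEPT_RATE)
    by_score = sorted(range(N_PAPERS), key=lambda i: scores[i], reverse=True)
    by_quality = sorted(range(N_PAPERS), key=lambda i: true_quality[i], reverse=True)
    accepted, best = set(by_score[:k]), set(by_quality[:k])
    return len(accepted & best) / k

for sd in (0.0, 0.5, 1.0, 2.0):
    print(f"review noise sd={sd}: {simulate(sd):.0%} of accepted papers are in the true top 25%")
```

Under these assumptions, acceptance rate stays fixed at 25% throughout, yet the programme drifts away from the best work as reviewer noise grows; selectivity alone cannot distinguish the two cases.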

Discussion

7 thoughts on “How good is this conference? Evaluating conference reviewing and selectivity”

  1. Good to see someone addressing these issues, but I was surprised the REF did not get in there. Quality of outputs is being judged by people who are not expert in the field. REF panelists will argue, however, that quality can be judged at a meta level, as it were – and since quality is rated only on a 5-point scale, there will be quite a lot of agreement across the panelists. Are we too hung up about referencing related work as part of the quality judgement?

    Posted by david benyon | July 17, 2012, 11:41 am
  2. My tuppence worth…

    Well, I’m doing HCI to try and make the world a better place (and to know how to do that, and know how to communicate the successful ideas to others, so they can build on and do a better job than I have done, etc); that isn’t what REF is doing. (And REF is a peculiarly UK thing; HCI is international.)

    I hope our paper will help people think about ways to improve HCI conferences and refereeing; this is a job we hope the HCI community can engage in. While I can criticise the REF endlessly, I’ve no idea how to actually improve the REF given that it’s a straitjacketed behemoth that doesn’t really want to evaluate HCI per se anyway, nor change its ways.

    But, David, what did you think of the paper, rather than the REF?

    Posted by Harold Thimbleby | July 30, 2012, 9:54 pm
  3. I don’t disagree with the main claim of this paper, i.e. that selectivity is an imperfect measure of the quality of a conference because of the factors that go into assembling its programme. However, while I suspect that the use of “quality” as a peer-review criterion is a significant factor, I’m sure it’s not the only one.

    What I’d like to see in the paper and/or resulting discussion in the workshop are more ideas for other measures that might be useful in addition to “quality”.

    To seed some creative thinking around these, it’s worth reminding ourselves that conferences bring the community together, so we can show off the work we’ve been doing. By presenting our work, we get feedback that pushes our research forward.
    If we accept that conferences act as crucibles for advancing the state of the art, what other factors can we think of besides “quality” or “novelty” for determining whether to provide a forum for a piece of work or not?
    I can think of several already, and I imagine other people can too. Maybe the authors can collect some of these during the conference?

    Posted by failys | July 31, 2012, 11:45 am
  4. Although set in the context of departmental seminars, this brings to mind another Alt-HCI submission: “Hackinars: tinkering with academic practice”. https://althci2012.wordpress.com/papers/hackinars-tinkering-with-academic-practice/

    Also the Tiree Tech Wave that I’ve been running and various ‘unconferences’ – different formats that are less about presenting finished work, and more about the generation of new ideas and getting things done.

    Posted by alandix | July 31, 2012, 12:32 pm
  5. I think this paper creates an opportunity to talk about what it means for a paper and a conference to be ‘good’.
    There may be many kinds of goodness, so it helps to clarify which we are talking about at any one time.
    Then, how do you measure it?
    And if it is too hard or too expensive, which proxy do you use,
    and how well does the proxy correlate with the real thing?
    Finally, how do you handle the risk that the proxy stops being thought of as a proxy and starts being treated as an actual measure of what you want to measure?
    I don’t know if we can give the authors access to the data for this conference that they need. Can we?
    Also, as a suggestion for the questionnaire: there are different kinds of goodness in a conference paper.
    I may like a paper that I think is mad because it provokes a really interesting discussion at the conference.
    I may like it more than one that says something I agree with.
    Can both be “good”, even if I think one is wrong?

    Posted by Michael Bernard Twidale | August 1, 2012, 10:35 pm
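
As a small aside on the proxy point in the comment above: the toy sketch below is an illustration only – nothing in it comes from the paper, and the sample size and noise level are invented. It shows that even a proxy that correlates respectably with the “real thing” in aggregate can still misrank many individual papers:

```python
# A toy illustration of the proxy problem raised in the comment above
# (not from the paper): a proxy can correlate well with true goodness
# overall while still misranking many individual papers.
import random
from statistics import correlation  # Python 3.10+

random.seed(2)
goodness = [random.gauss(0, 1) for _ in range(200)]   # the "real thing"
proxy = [g + random.gauss(0, 1) for g in goodness]    # a noisy proxy for it

print(f"proxy/goodness correlation: {correlation(goodness, proxy):.2f}")

# How many of the top 20% by proxy are really in the top 20% by goodness?
k = len(goodness) // 5
top_true = set(sorted(range(len(goodness)), key=goodness.__getitem__)[-k:])
top_proxy = set(sorted(range(len(proxy)), key=proxy.__getitem__)[-k:])
print(f"overlap of top-20% sets: {len(top_true & top_proxy)}/{k}")
```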
