[Tccc] Requesting open feedback to my work (Re: Promoting open on-line research)

Joe Touch
Thu Nov 3 12:38:39 EDT 2011



On 11/3/2011 9:13 AM, Mukul Goyal wrote:
> Pars
>
> We had a discussion last year or so regarding the merits of the open
> review process used in IETF. I guess you could find that discussion in
> TCCC archives (I hope they exist!). I totally support the idea. People
> do have objections to such a review process. Joe mentioned the problems
> faced when a conference (Global Internet or Globcom?) used such a system
> some years ago.

It was Global Internet.

The result was "sunnier" reviews that were more pleasant to read. The 
authors didn't find the reviews more useful, though - they seemed to 
feel that reviewers held back useful info to seem more pleasant.

> I still think it is worth exploring this idea further.
> We should start an (online?) conference that uses an open review
> process.

Three suggestions:

1) *create* a conference and try this
	yes, it's a lot of work to create a conference, but
	don't "play" with the reputation of a conference series
	in which others have invested many years

	it took GI many years to recover from this experiment

2) do a real experiment
	include a control at the same conference, and with the
	same papers: assign 4 reviews per paper, and RANDOMLY ask
	reviewers to write open reviews for some papers and blind
	reviews for others (see the sketch after these suggestions)

	have a group of people, including the authors, assess
	the reviews

> I think we would have lots of teething problems but ultimately
> the system can be improved to resolve these problems.

3) clearly identify the problem you are trying to solve (this is part of 
doing a real experiment)
	i.e., is the problem "I want open reviews", or some other
	goal?

	I've seen a lot of experiments in conference operations
	that were intended to resolve a specific problem, but
	could trivially be shown to be unrelated to that problem.
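
To make the randomization in (2) concrete, here is a minimal sketch in
Python, under assumed conventions (four review slots per paper and a
fixed 2/2 split between open and blind conditions; the paper IDs are
made up). It is only an illustration of the idea, not any PC tool:

import random

def assign_review_conditions(papers, seed=None):
    # For each paper, shuffle a fixed mix of conditions so every paper
    # gets both open and blind reviews from the same reviewer pool,
    # which provides the within-paper control the experiment needs.
    rng = random.Random(seed)
    assignments = {}
    for paper in papers:
        conditions = ["open", "open", "blind", "blind"]
        rng.shuffle(conditions)
        assignments[paper] = conditions  # one condition per review slot
    return assignments

# Example with made-up paper IDs:
print(assign_review_conditions(["paper-01", "paper-02"], seed=1))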

Case in point:
reviewer collusion (quid pro quo, where reviewers trade positive reviews 
with each other, or "give" high reviews without merit to their friends)

Mechanism tried:
	- double-blind reviews
		the reviewers just exchange paper titles with
		each other

Mechanisms that worked (speaking from experience):
	- partially random assignment of reviews
		helps recognize that a good set of reviews includes
		a measure of the benefit of a paper to an arbitrary
		attendee, not just a closed group that favors a
		concept (whether as friends, or just as an 'inbred'
		community); see the sketch after this list
	- partially random rotation of PC members
		helps give new people a chance, and gives established
		people a break without accusing anyone of wrongdoing
	- chair review of the reviews
		with dismissal and replacement of high
		reviews that lack substantiation
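
As a rough idea of what "partially random assignment" could look like
mechanically, here is a minimal Python sketch under assumed conventions
(a per-topic expertise score for each reviewer, four reviews per paper,
one slot filled at random); the data layout and the 3-plus-1 split are
illustrative assumptions, not how any existing PC tool works:

import random

def pick_reviewers(paper_topic, reviewers, expertise, k=4, n_random=1,
                   seed=None):
    # Take the (k - n_random) best expertise matches, then fill the
    # remaining slot(s) uniformly at random from everyone else, so the
    # paper also gets an "arbitrary attendee" perspective.
    rng = random.Random(seed)
    ranked = sorted(reviewers,
                    key=lambda r: expertise.get(r, {}).get(paper_topic, 0),
                    reverse=True)
    chosen = ranked[:k - n_random]
    pool = [r for r in reviewers if r not in chosen]
    return chosen + rng.sample(pool, n_random)

# Example with made-up names and scores:
people = ["ana", "bo", "cy", "di", "ed"]
scores = {"ana": {"routing": 3}, "bo": {"routing": 2}, "cy": {"routing": 5}}
print(pick_reviewers("routing", people, scores, seed=7))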

This took a large meeting with *many* reviewer cliques (where an arc 
means a reviewer gave an author a high review) down to ZERO cliques in 
one meeting.
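
That clique structure is also something a chair can look for
mechanically. A minimal sketch, assuming review data is available as
(reviewer, author, score) triples and treating any score at or above a
chosen threshold as "high" (the threshold and the names in the example
are hypothetical):

from itertools import combinations

def mutual_high_review_pairs(reviews, threshold=4):
    # reviews: iterable of (reviewer, author, score) triples.
    # An arc means "reviewer gave author a high score"; a pair of
    # mutual arcs is the smallest clique worth a closer look.
    high = {(r, a) for r, a, s in reviews if s >= threshold}
    people = sorted({p for arc in high for p in arc})
    return [(a, b) for a, b in combinations(people, 2)
            if (a, b) in high and (b, a) in high]

# Example with made-up data: A and B rate each other highly.
print(mutual_high_review_pairs([("A", "B", 5), ("B", "A", 5), ("C", "A", 2)]))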

Experiments can be useful, but IMO are no substitute for the diligence 
of the PC and its chair.

Joe (again, as an individual)




