[Tccc] Requesting open feedback to my work (Re: Promoting open on-line research)
Pars Mutaf
pars.mutaf
Thu Nov 3 12:11:57 EDT 2011
Hi Joe, thanks. Of course I cannot argue with your experience, which
I don't have.
But why would the following system not be useful to me?
1. I can browse others' work (e.g., arXiv).
2. I can ask questions, provide comments, get answers, etc.
3. My input is archived.
4. I get comments on my work. If I don't, there is a problem with my work,
and I update it or look at similar work.
Using this system, if I provide good feedback, I can form a network for
myself without necessarily attending conferences.
Pars
On Thu, Nov 3, 2011 at 5:44 PM, Joe Touch <touch at isi.edu> wrote:
>
>
> On 11/3/2011 7:00 AM, Pars Mutaf wrote:
> ...
>
>> The reviews should be publicly available to everyone.
>
> There have been attempts to explore this and other models, e.g., in no
> particular order:
>
> A- author rebuttal of reviews
> B- blind reviews
> C- double-blind process
>      where the paper authors are hidden during review
> D- public reviews
>      where reviews are published with the paper
> E- open reviews
>      where the author sees the reviewers' names
> F- adding a venue for papers on the 'borderline' of the
>      main conference
>
> Speaking as someone who has participated as a PC member in these in
> various places (as an individual, not as TCCC Chair):
>
> A was tried at Infocom (and elsewhere). The goal was to avoid a paper
> being discarded because of an incorrect review. The result was a
> substantial increase in review time (actually, it ended up resulting in
> less time for reviewers to complete their reviews due to a fixed yearly
> cycle), but no substantial change in paper handling. Most of the rebuttals
> did not point out review errors, but rather disagreed with review opinion.
>
> B is currently typical.
>
> C is used at Sigcomm and more recently at ICNP. It is intended to avoid
> favoritism, but IMO it also tends to work against systems work whose parts
> have already been vetted in workshops and symposia.
>
> D has been tried for some CCR papers, where a single review or summary of
> the reviews is presented.
>
> E was tried at Global Internet a number of years ago, and nearly killed
> the meeting. Submissions went down over 50%. The result was much more
> pleasantly-written reviews, but the reviews were (IMO) less useful.
>
> F was introduced at Infocom several years ago. IMO, it simply introduced a
> second borderline, and made it very difficult to distinguish between full
> accepts and "consolation prize" accepts.
>
> All of the above were introduced to address a perceived or real concern.
> None of them was tested in a true experiment (e.g., with a control group
> during the same year). Most of them (IMO) were introduced because chairs
> believe that mechanism can address review process problems. IMO, there is
> only one good solution for all such problems:
>
>      PC chairs MUST review the reviews. EVERY review. EVERY year.
>      Reviews whose ranks are not substantiated by
>      meaningful comment must be both discarded and
>      replaced.
>
> Overall, IMO, it is useful to understand that:
>
> - reviewing is an imperfect process
>
> - a paper's quality is determined by what the reader
>      receives (goodput), not what is sent (offered load) ;-)
>
> - papers are rejected because of the lack of positive comments,
>      not for any single negative comment
>      (so arguing each negative comment in a review
>      won't fix a paper - many reviewers simply provide
>      sufficient negatives to justify a decision, but
>      could provide other negatives if asked)
>
> - at large conferences, papers are rejected after substantial
>      deliberation,
>      e.g., at Infocom, a paper is either a unanimous reject
>      by three reviewers, OR is then considered by at least
>      an additional 8-10 people during the PC meeting
>
> I see none of these changing in an open process.
>
> Joe
>
>
>