[Tccc] ComSoc technical cosponsorship - ratin...

Joe Touch touch at isi.edu
Fri May 31 14:12:51 EDT 2013



 Hi, Laura,

On 5/30/2013 4:45 PM, Laura Marie Feeney wrote:
> Hi Joe,
>
> Rather than more guidelines, I would prefer to see (from IEEE and
> others) a stronger commitment to data-driven understanding of the review
> process and support for (semi-) controlled experiments on improving it.  

That's not the purpose of this form. There are MANY reasons that 
conferences are bad places to play with "how to run a conference" 
experiments:

        - it's nearly impossible to set up a true experiment from
        which to draw useful conclusions

        - changes in a given year can impact a series, creating
        damage that can take years to repair

Further, nearly every "experiment" I've ever seen is nominally aimed at 
a real concern (plagiarism, favorable review 'trading', bias, conflicts 
of interest, etc.), but the mechanism chosen is really intended to 
reduce the load on the TPC chairs and isn't directly tied to correcting 
the problem.

That said, again, the purpose of this is to help those who are appointed 
to TPCs when TCCC and other TCs endorse conferences, so they can report 
back on key issues that help the TC determine whether to endorse the 
meeting in the future.

Most of the guidelines are based on existing TCCC endorsement 
requirements or ComSoc requirements.

> I'd bet that EDAS knows things about the review process that we don't
> know ourselves...
>
> Serious work probably requires better consensus about data handling and
> sharing than we have now, especially if we want to correlate over
> multiple conferences.  But I think it would be helpful to begin this
> effort and also to raise expectations on Chairs for scientific approach
> to the problem.

Again, that's not the purpose of this information. It is intended to 
inform TCs and the ComSoc (if used in either place) as to how a 
conference review process ran.

> For specific comments on the guidelines:
>
>> 2. involvement in CFP promotion        E/A/D
>
> It's not clear that this is a significant indicator of quality. (At least,
> I'm not lacking CFP spam.)

It speaks to whether the TPC is a participant in promoting the meeting, 
which can help draw in better papers.

>> 3. paper assignment for review        E/A/D
>
> What is best practice? Even with double blinding, giving reviewers too
> much role in selection can be vulnerable to collusion or (more likely)
> just inbreeding among circles of people who have similar ideas about
> what's interesting.

Double blind is its own huge problem - it violates the IEEE Code of 
Ethics where we're supposed to provide accurate reviews. It makes it 
impossible to determine whether paper content appropriately cites the 
author's own work. And collusion is trivial - do you actually think 
those who collude won't just give each other their paper titles?

> Is there data?  Do papers tend to have correlated
> sets of reviewers and does it affect variability in review scores?  If
> correlation exists, does it persist across conferences?  Can assignment
> algorithms be designed to mitigate this?

The question relates to whether papers are reviewed by members who 
indicate expertise in an area, or whether such information is ignored.
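
As a toy illustration (my own sketch; the topic names, data structure, 
and scoring rule are made up, not anything EDAS or the rating form 
defines), "using" that information might look like ranking candidate 
reviewers by overlap with a paper's declared topics:

# Toy sketch: prefer reviewers whose self-declared expertise overlaps
# the paper's topic list.  All names and data here are hypothetical.

def rank_reviewers(paper_topics, reviewers):
    """Sort reviewers by how many of the paper's topics they claim."""
    topics = set(paper_topics)
    return sorted(reviewers,
                  key=lambda r: len(topics & set(r["expertise"])),
                  reverse=True)

reviewers = [
    {"name": "A", "expertise": ["routing", "congestion control"]},
    {"name": "B", "expertise": ["wireless", "MAC"]},
    {"name": "C", "expertise": ["congestion control", "transport"]},
]
print(rank_reviewers(["transport", "congestion control"], reviewers))
# C (2 matching areas), then A (1), then B (0); an assignment that
# ignores the expertise field would treat all three the same.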

>> 5. TPC meeting                E/A/D
>
> Disagree. TPC meetings are expensive, environmentally unfriendly, and
> reduce TPC diversity. Phone-only meetings are better, but are still
> unavoidably at 2am in either America, Europe or Asia.

Conferences are ... everything you just said.

TPC meetings in person are much more effective in discussing papers than 
any alternative, for the same reasons as in-person conferences.

>> 6. paper review process            E/A/D
>>
>>       E = considers average rank AND outlier info, discussion points
>>           also based on natural 'gap' in evaluation
>>       A = considers average rank based on natural gap in evaluation
>>       D = considers rank only
>
> I'm somewhat confused about the idea of 'natural gap'.  Conventional
> wisdom is that most conferences have a few clear accepts, many clear
> rejects, and a certain amount of randomness (along with careful
> evaluation, of course) in the middle.

The noise in the middle is usually managed with extensive TPC 
discussion. This question is being revised to make that clearer; the 
point is that a paper scoring 85.5 shouldn't be rejected when one 
scoring 85.6 is accepted merely because of a 0.1-point difference.
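
To make the "natural gap" idea concrete, here's a minimal sketch (my 
own illustration; the 0-100 score scale, the data, and the function 
name are assumptions, not anything from the guidelines) that puts the 
accept cutoff at the widest gap between adjacent scores rather than at 
a fixed rank:

# Minimal sketch of a "natural gap" cutoff, as opposed to a fixed rank
# cutoff.  Scores, scale, and names here are hypothetical.

def natural_gap_cutoff(scores):
    """Place the accept threshold inside the widest gap between
    adjacent sorted scores, so near-ties are never split apart."""
    ranked = sorted(scores, reverse=True)
    gaps = [(ranked[i] - ranked[i + 1], i) for i in range(len(ranked) - 1)]
    widest, i = max(gaps)                 # largest drop between neighbors
    return ranked[i + 1] + widest / 2.0   # threshold falls inside that gap

scores = [92.0, 88.0, 85.6, 85.5, 77.0, 76.5, 70.0]
cutoff = natural_gap_cutoff(scores)       # 81.25: between 85.5 and 77.0
accepted = [s for s in scores if s > cutoff]
print(cutoff, accepted)                   # 85.5 and 85.6 stay together

With these made-up numbers the widest gap sits between 85.5 and 77.0, 
so the 85.5 and 85.6 papers land on the same side of the cutoff; a 
0.1-point difference alone never decides.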

> N) I was quite surprised that rules for delegating reviews weren't
> considered relevant. I was slightly surprised that review load wasn't
> considered important.

Don't be surprised; just suggest that they be added. I'll do that.

> N) I was disappointed that reviewer diversity (gender/national
> /institutional/year-on-year turnover) wasn't considered relevant.

It's not irrelevant, but it's not something someone inside the TPC needs 
to report to the TC. That information is visible externally.

The TPC reps aren't reporting on *everything* associated with a 
conference, just the stuff that can only be seen from inside the TPC.

Joe
_______________________________________________
IEEE Communications Society Tech. Committee on Computer Communications
(TCCC) - for discussions on computer networking and communication.
Tccc at lists.cs.columbia.edu
https://lists.cs.columbia.edu/cucslists/listinfo/tccc