
Finding the best papers with noisy reviews.

Frederik Mallmann-Trenn (King's College London)

Suppose you are tasked with finding the best 150 papers among more than 250, for conference proceedings or as part of a national assessment exercise designed to maximise workload. You can ask people to review the papers using either of two kinds of query:

1) Is paper A better than paper B?
2) What is the score of paper A?

The catch is that each review returns an incorrect answer with some small probability, say 1/3. How should you assign reviews so that you are likely to find the best 150 papers while keeping both the total number of queries and the number of rounds small?
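To make the model concrete, here is a minimal Python sketch, not the instance-optimal algorithm from the paper, of the textbook remedy: repeat each noisy query several times and aggregate the answers, by majority vote for comparison queries and by taking the median for value queries. With r repetitions per query, a Chernoff bound makes the aggregated answer wrong with probability exponentially small in r, so r = O(log n) repetitions suffice to get every answer right with high probability. The hidden scores, the repetition count reps=41, and the assumption that a wrong value query returns a uniformly random score are illustrative choices, not taken from the paper.

```python
import random
from functools import cmp_to_key
from statistics import median

P_ERR = 1 / 3  # each individual query is wrong with this probability

def noisy_compare(scores, a, b):
    """Query type 1: 'is paper a better than paper b?', wrong w.p. P_ERR."""
    truth = scores[a] > scores[b]
    return truth if random.random() >= P_ERR else not truth

def noisy_value(scores, a):
    """Query type 2: 'what is the score of paper a?'. With probability P_ERR
    a wrong score is returned (here, by assumption, a uniformly random one)."""
    return scores[a] if random.random() >= P_ERR else random.random()

def boosted_compare(scores, a, b, reps):
    """Repeat the comparison and take a majority vote; the error probability
    of the vote drops exponentially in reps (Chernoff bound)."""
    wins = sum(noisy_compare(scores, a, b) for _ in range(reps))
    return 2 * wins > reps

def top_k_by_comparisons(scores, k, reps=41):
    """Sort all papers with the boosted comparator and keep the first k."""
    cmp = lambda a, b: -1 if boosted_compare(scores, a, b, reps) else 1
    order = sorted(range(len(scores)), key=cmp_to_key(cmp))
    return set(order[:k])

def top_k_by_values(scores, k, reps=41):
    """Estimate each paper's score as the median of repeated value queries,
    then keep the k papers with the highest estimates."""
    n = len(scores)
    est = [median(noisy_value(scores, a) for _ in range(reps)) for a in range(n)]
    return set(sorted(range(n), key=lambda a: -est[a])[:k])

if __name__ == "__main__":
    random.seed(0)
    n, k = 250, 150
    scores = [random.random() for _ in range(n)]  # hidden ground-truth quality
    true_top = set(sorted(range(n), key=lambda a: -scores[a])[:k])
    for select in (top_k_by_comparisons, top_k_by_values):
        found = select(scores, k)
        print(f"{select.__name__}: recovered {len(found & true_top)}/{k}")
```

Roughly speaking, the paper's contribution is to do much better than this uniform repetition: an instance-optimal algorithm spends fewer queries on papers whose quality is far from the acceptance cutoff and concentrates effort on the papers near it.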

The talk is based on the paper:
Instance-Optimality in the Noisy Value-and Comparison-Model---Accept, Accept, Strong Accept: Which Papers get in? [SODA 2020]

https://epubs.siam.org/doi/10.1137/1.9781611975994.131

https://arxiv.org/pdf/1806.08182.pdf
