Re-evaluating Evaluation



Balduzzi, David, Tuyls, Karl, Perolat, Julien and Graepel, Thore
(2018) Re-evaluating Evaluation. Advances in Neural Information Processing Systems 31 (NIPS 2018), pp. 3268-3279.

1806.02643v2.pdf - Submitted version

Download (794kB)

Abstract

Progress in machine learning is measured by careful evaluation on problems of outstanding common interest. However, the proliferation of benchmark suites and environments, adversarial attacks, and other complications has diluted the basic evaluation model by overwhelming researchers with choices. Deliberate or accidental cherry picking is increasingly likely, and designing well-balanced evaluation suites requires increasing effort. In this paper we take a step back and propose Nash averaging. The approach builds on a detailed analysis of the algebraic structure of evaluation in two basic scenarios: agent-vs-agent and agent-vs-task. The key strength of Nash averaging is that it automatically adapts to redundancies in evaluation data, so that results are not biased by the incorporation of easy tasks or weak agents. Nash averaging thus encourages maximally inclusive evaluation -- since there is no harm (computational cost aside) from including all available tasks and agents.
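To make the agent-vs-agent case concrete: given an antisymmetric evaluation matrix A, where A[i, j] is the log-odds that agent i beats agent j, Nash averaging treats A as a symmetric zero-sum meta-game, finds a Nash equilibrium mixture over agents, and scores each agent by its expected payoff against that mixture. The following is a minimal sketch, not the authors' released code: it assumes NumPy/SciPy, the helper name nash_average is hypothetical, and it finds a Nash strategy via a feasibility LP rather than the maximum-entropy Nash equilibrium the paper specifically uses (the two coincide when the equilibrium is unique, as in the demo below).

import numpy as np
from scipy.optimize import linprog


def nash_average(A):
    """Nash averaging sketch for the agent-vs-agent setting.

    A is antisymmetric (A = -A.T), so the induced two-player zero-sum
    meta-game has value 0.  A mixture p is then a maximin (Nash)
    strategy iff p >= 0, sum(p) == 1 and (A @ p) <= 0 componentwise,
    which we find with a feasibility LP.  Returns the Nash mixture p
    and the Nash-average skill A @ p (zero on the support of p,
    strictly negative for dominated agents).
    """
    n = A.shape[0]
    res = linprog(
        c=np.zeros(n),                      # pure feasibility: no objective
        A_ub=A, b_ub=np.zeros(n),           # A @ p <= 0
        A_eq=np.ones((1, n)), b_eq=[1.0],   # p is a probability vector
        bounds=[(0.0, None)] * n,           # p >= 0
        method="highs",
    )
    p = res.x
    return p, A @ p


if __name__ == "__main__":
    # Rock-paper-scissors plus a redundant weak fourth agent that loses
    # to all three with probability 0.9; logit(0.9) ~= 2.197.
    w = np.log(0.9 / 0.1)
    A = np.array([
        [0.0,  -w,   w,   w],   # rock
        [  w, 0.0,  -w,   w],   # paper
        [ -w,   w, 0.0,   w],   # scissors
        [ -w,  -w,  -w, 0.0],   # weak agent
    ])
    p, skill = nash_average(A)
    print("Nash mixture:", np.round(p, 3))            # ~uniform on RPS, 0 on weak
    print("Nash-average skill:", np.round(skill, 3))  # 0, 0, 0, negative

The demo illustrates the abstract's key claim: the weak agent receives zero weight in the Nash mixture, so including it does not distort the scores of the other agents.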

Item Type: Article
Additional Information: NIPS 2018, final version
Uncontrolled Keywords: cs.LG, cs.GT, stat.ML
Depositing User: Symplectic Admin
Date Deposited: 10 Dec 2018 15:06
Last Modified: 19 Jan 2023 01:09
URI: https://livrepository.liverpool.ac.uk/id/eprint/3029647