Strongly Proper Losses
Summary and Contributions
The notion of ‘proper’ losses plays an important role in machine learning; in particular, algorithms that minimize ‘strictly proper’ losses allow one to accurately estimate the underlying conditional label distribution of the data. We defined the notion of ‘strongly proper’ losses, which satisfy a stronger property than strict properness: they make it easy to derive statistical guarantees for machine learning algorithms (in particular, ‘regret transfer bounds’), and are thus a valuable fundamental primitive. (The relation between strictly and strongly proper losses is analogous to that between strictly and strongly convex functions, the latter of which have come to play an important role in mathematical optimization.) Strongly proper losses have since proved useful in many machine learning settings: beyond bipartite ranking problems, where we initially defined them, they have been used both in our own work and by other researchers to prove guarantees for multiclass learning, multi-label learning, and learning from noisy labels, among other problems.
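To make this concrete, here is a sketch of the definition in the binary class-probability estimation setting, following the JMLR 2014 paper listed below; the notation is a paraphrase, not a verbatim quote from the paper:

```latex
% Sketch of the definition (binary labels y in {-1,+1}; paraphrased from the
% JMLR 2014 paper below). The conditional risk of a loss
% \ell : \{-1,+1\} \times [0,1] \to \mathbb{R}_+ is
%     L(p, \hat{p}) = p \, \ell(+1, \hat{p}) + (1 - p) \, \ell(-1, \hat{p}).
% \ell is proper if L(p, \cdot) is minimized at \hat{p} = p, and it is
% \lambda-strongly proper (for some \lambda > 0) if, for all p, \hat{p} in [0,1],
\[
    L(p, \hat{p}) - L(p, p) \;\ge\; \frac{\lambda}{2} \, (p - \hat{p})^2 ,
\]
% i.e. the pointwise regret grows at least quadratically in the estimation
% error, mirroring the quadratic lower bound defining strong convexity.
```

For example, squared loss (with $\ell(+1,\hat{p}) = (1-\hat{p})^2$ and $\ell(-1,\hat{p}) = \hat{p}^2$) has conditional regret exactly $(p-\hat{p})^2$ and so is 2-strongly proper, while log loss has conditional regret $\mathrm{KL}(p \,\|\, \hat{p})$ and is 4-strongly proper by Pinsker's inequality. The snippet below is a minimal numerical check of the latter claim; it is an illustration written for this summary, not code from the papers:

```python
# Minimal numerical check (illustrative, not from the papers): log loss should
# satisfy the 4-strong-properness inequality
#     L(p, p_hat) - L(p, p) >= (4/2) * (p - p_hat)**2,
# since its conditional regret is KL(p || p_hat) and Pinsker's inequality gives
# KL(p || p_hat) >= 2 * (p - p_hat)**2 for Bernoulli distributions.
import numpy as np

def log_loss_risk(p, p_hat):
    """Conditional risk L(p, p_hat) of log loss under a Bernoulli(p) label."""
    return -(p * np.log(p_hat) + (1 - p) * np.log(1 - p_hat))

eps = 1e-6
grid = np.linspace(eps, 1 - eps, 999)
p, p_hat = np.meshgrid(grid, grid)  # all pairs (p, p_hat) on the grid

regret = log_loss_risk(p, p_hat) - log_loss_risk(p, p)
slack = regret - 2.0 * (p - p_hat) ** 2

print("min slack over grid:", slack.min())  # ~0, never meaningfully negative
```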
Relevant Publications
- Harish G. Ramaswamy, Mingyuan Zhang, Shivani Agarwal, and Robert C. Williamson. Convex calibrated output coding surrogates for low-rank loss matrices, with applications to multi-label learning. In preparation.
- Mingyuan Zhang, Jane Lee, and Shivani Agarwal. Learning from noisy labels with no change to the training process. In Proceedings of the 38th International Conference on Machine Learning (ICML), 2021. [pdf]
- Mingyuan Zhang, Harish G. Ramaswamy, and Shivani Agarwal. Convex calibrated surrogates for the multi-label F-measure. In Proceedings of the 37th International Conference on Machine Learning (ICML), 2020. [pdf]
- Shivani Agarwal. Surrogate regret bounds for bipartite ranking via strongly proper losses. Journal of Machine Learning Research, 15:1653-1674, 2014. [pdf]
- Shivani Agarwal. Surrogate regret bounds for the area under the ROC curve via strongly proper losses. In Proceedings of the 26th Annual Conference on Learning Theory (COLT), 2013. [pdf]