Consistent Output Coding Algorithms for Multiclass and Multi-Label Learning

[Summary & Contributions] | [Relevant Publications]

Summary and Contributions

‘Output coding’ is a general term for solving a complex learning problem by decomposing it into a suitable set of simpler (often binary) learning problems. We have developed several new results aimed at understanding the conditions under which statistically consistent output coding algorithms can be designed for various multiclass learning problems. In addition, bringing together ideas from our work on calibrated surrogate losses and strongly proper binary losses, we have designed a large family of consistent output coding algorithms that can be applied to a variety of complex machine learning tasks, including, as special cases, a variety of multi-label learning problems. In particular, even though such tasks may involve very large label/prediction spaces, if the target loss matrix associated with the prediction task has low rank, say rank r, then our algorithms require constructing only r (carefully designed) binary prediction problems. The algorithms are easy to implement and come with quantitative regret transfer bounds that allow any performance guarantees for the constructed binary problems to be transferred to performance guarantees for the overall target learning problem.
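To illustrate the low-rank idea, here is a minimal sketch (not the exact algorithm from the papers below; the factorization and decoding rule are illustrative assumptions): if the target loss matrix L factors as L = U Vᵀ with rank r, then the Bayes-optimal prediction can be recovered from only r real-valued functions f_j(x) ≈ E[U[Y, j] | x], one per column of U, rather than one score per prediction.

```python
import numpy as np

# Hedged sketch: shows how a rank-r loss matrix lets the Bayes-optimal
# prediction be decoded from r learned real-valued problems.
rng = np.random.default_rng(0)

n_classes, n_preds, r = 8, 8, 3
# Synthetic target loss matrix of rank r, built as L = U @ V.T.
U = rng.random((n_classes, r))
V = rng.random((n_preds, r))
L = U @ V.T

# A conditional class distribution p(y | x) at some instance x.
p = rng.random(n_classes)
p /= p.sum()

# Ideal outputs of the r constructed problems: f_j(x) = E[U[Y, j] | x].
f = p @ U  # shape (r,)

# Decoding: expected loss of prediction t is E[L[Y, t] | x] = f @ V[t],
# so the decoded prediction needs only the r-dimensional vector f.
decoded = int(np.argmin(f @ V.T))

# Direct Bayes-optimal prediction using the full n_preds-column loss matrix.
direct = int(np.argmin(p @ L))
assert decoded == direct
```

In practice the f_j would be learned from data (e.g. via strongly proper binary losses), and the regret transfer bounds quantify how estimation error in the f_j translates into excess target loss.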

Relevant Publications

  • Harish G. Ramaswamy, Mingyuan Zhang, Shivani Agarwal, and Robert C. Williamson.
    Convex calibrated output coding surrogates for low-rank loss matrices, with applications to multi-label learning.
    In preparation.

  • Mingyuan Zhang, Harish G. Ramaswamy, and Shivani Agarwal.
    Convex calibrated surrogates for the multi-label F-measure.
    In Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
    [pdf]

  • Harish G. Ramaswamy, Balaji S. Babu, Shivani Agarwal, and Robert C. Williamson.
    On the consistency of output code based learning algorithms for multiclass learning problems.
    In Proceedings of the 27th Annual Conference on Learning Theory (COLT), 2014.
    [pdf]
