Modeling Higher-Order Choices
[Summary & Contributions] | [Relevant Publications]
Summary and Contributions
In many applications, such as marketing and customer surveys, it is natural for humans to express choices among sets of more than two items at a time. One then receives higher-order choice data as input (as opposed to pairwise comparison data); the goal is again to construct a global ranking over the items, or to identify the top few items, from such data. We developed new algorithms for learning from higher-order choice data, both in the usual statistical setting and in the active, bandit setting; we termed the latter setting ‘choice bandits’. Our results for choice bandits apply to a broad class of probabilistic discrete choice models that includes as special cases the multinomial logit (MNL) model and random utility models with IID noise (IID-RUMs), both widely studied in the marketing and econometrics literature, but that extends beyond these special cases (see figure below).
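As a concrete illustration of the kind of data and model involved, the sketch below simulates higher-order choice data under the MNL model, in which the probability of choosing item i from an offered set S is proportional to a positive weight w_i. This is only a minimal sketch of the data-generating process; the item weights, choice set, and function names are illustrative and are not taken from the publications below.

```python
import numpy as np

def mnl_choice_prob(weights, choice_set):
    """MNL choice probabilities: P(i | S) = w_i / sum_{j in S} w_j."""
    w = np.array([weights[i] for i in choice_set], dtype=float)
    return w / w.sum()

def sample_choices(weights, choice_set, n_samples, rng=None):
    """Simulate higher-order choice data: repeatedly offer `choice_set`
    (of size possibly greater than two) and record the chosen item."""
    rng = np.random.default_rng() if rng is None else rng
    probs = mnl_choice_prob(weights, choice_set)
    return rng.choice(choice_set, size=n_samples, p=probs)

# Illustrative (made-up) item weights; a higher weight means a more preferred item.
weights = {"A": 3.0, "B": 1.0, "C": 0.5, "D": 0.25}
offered = ["A", "B", "C"]  # a choice set of size 3, i.e. higher-order rather than pairwise

print(mnl_choice_prob(weights, offered))  # approx. [0.667, 0.222, 0.111]
data = sample_choices(weights, offered, n_samples=1000, rng=np.random.default_rng(0))
```

A ranking or top-item identification algorithm would take many such observed choices, over varying offered sets, as its input; in the choice bandit setting the learner also chooses which set to offer next.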

Relevant Publications
- Arpit Agarwal, Nicholas Johnson, and Shivani Agarwal.
Choice bandits.
In Advances in Neural Information Processing Systems (NeurIPS), 2020.
[pdf]
- Arpit Agarwal, Prathamesh Patil, and Shivani Agarwal.
Accelerated spectral ranking.
In Proceedings of the 35th International Conference on Machine Learning (ICML), 2018.
[pdf]