T. Furlanello, Z. C. Lipton, M. Tschannen, L. Itti, A. Anandkumar, Born Again Neural Networks, In: Proceedings of the 35th International Conference on Machine Learning (ICML 2018) (J. Dy and A. Krause, Eds.), Vol. 80, pp. 1607-1616, Stockholmsmässan, Stockholm, Sweden: PMLR, Jul 2018. [2018 acceptance rate: 25.1%] (Cited by 539)
Abstract: Knowledge Distillation (KD) consists of transferring “knowledge” from one machine learning model (the teacher) to another (the student). Commonly, the teacher is a high-capacity model with formidable performance, while the student is more compact. By transferring knowledge, one hopes to benefit from the student’s compactness without sacrificing too much performance. We study KD from a new perspective: rather than compressing models, we train students parameterized identically to their teachers. Surprisingly, these Born-Again Networks (BANs) outperform their teachers significantly, both on computer vision and language modeling tasks. Our experiments with BANs based on DenseNets demonstrate state-of-the-art performance on the CIFAR-10 (3.5%) and CIFAR-100 (15.5%) datasets, by validation error. Additional experiments explore two distillation objectives: (i) Confidence-Weighted by Teacher Max (CWTM) and (ii) Dark Knowledge with Permuted Predictions (DKPP). Both methods elucidate the essential components of KD, demonstrating the effect of the teacher outputs on both predicted and non-predicted classes.
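Note: the mechanism the abstract describes is the standard knowledge-distillation objective, with teacher and student sharing one architecture. The following is a minimal sketch of one such generation step, assuming a PyTorch setting; the function name ban_distillation_loss and the hyperparameters T (softmax temperature) and alpha (mixing weight) are illustrative assumptions, not taken from the paper, whose plain BAN objective corresponds to the teacher-matching term alone.

import torch
import torch.nn.functional as F

def ban_distillation_loss(student_logits, teacher_logits, labels, T=1.0, alpha=0.5):
    """Sketch of a born-again training objective: the student matches the
    frozen teacher's (optionally softened) output distribution, mixed with
    ordinary cross-entropy on the ground-truth labels."""
    # Dark-knowledge term: KL divergence between softened teacher and
    # student distributions; the teacher is detached so only the student
    # receives gradients. The T*T factor is the usual temperature scaling.
    soft_teacher = F.softmax(teacher_logits.detach() / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Supervised term on the true labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

A full born-again run, as described in the paper, trains the teacher to convergence first, then trains a freshly initialized student of identical architecture against the teacher's outputs, optionally repeating the procedure over several generations.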
Themes: Machine Learning