Network Pruning by Greedy Subnetwork Selection

Part of the Proceedings of the International Conference on Machine Learning (ICML 2020) pre-proceedings

Authors

Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu

Abstract

Recent work on network pruning shows that large deep neural networks are often highly redundant: one can find much smaller subnetworks with much lower computational cost and no significant drop in accuracy. Most existing pruning methods work by eliminating unnecessary neurons from the large network. In this work, we study a greedy forward selection approach that follows the opposite direction: it starts from an empty network and gradually adds good neurons from the large network. Theoretically, we show that small networks pruned with our method achieve provably lower loss than small networks of the same size trained from scratch, which implies that the learned weights of the large network matter to the small pruned models. Practically, for architectures in the mobile setting, we find that fine-tuning networks pruned with our method outperforms training them from scratch. Our method improves on all prior art for learning compact networks, using architectures such as ResNet, MobileNetV2, MobileNetV3, and ProxylessNet on ImageNet. Our theory and empirical results highlight the benefit of fine-tuning networks from large models over training from scratch, in contrast to the findings of Liu et al. (2019b).
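The selection loop the abstract describes (start from an empty subnetwork, repeatedly add the candidate neuron that most reduces the loss) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `evaluate_loss` is an assumed helper that scores a set of selected neuron indices, and the demo below stands in "neurons" with feature columns of a toy least-squares problem.

```python
import numpy as np

def greedy_forward_select(n_candidates, evaluate_loss, k):
    """Greedy forward selection sketch: start empty and, for k rounds,
    add whichever remaining candidate index yields the lowest loss.
    `evaluate_loss(indices)` is a hypothetical scoring function."""
    selected = []
    for _ in range(k):
        best_i, best_loss = None, float("inf")
        for i in range(n_candidates):
            if i in selected:
                continue
            loss = evaluate_loss(selected + [i])
            if loss < best_loss:
                best_i, best_loss = i, loss
        selected.append(best_i)
    return selected

# Toy stand-in: "neurons" are feature columns of X; the loss is the
# least-squares residual when fitting y on the chosen columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))        # 8 candidate "neurons"
y = 3.0 * X[:, 2] - 2.0 * X[:, 5]    # only neurons 2 and 5 matter

def mse_loss(indices):
    if not indices:
        return float(np.mean(y ** 2))
    A = X[:, indices]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((A @ coef - y) ** 2))

print(greedy_forward_select(8, mse_loss, 2))
```

In this toy setup the loop recovers the two informative columns. The actual method operates on neurons of a pretrained network and scores subnetworks by their training loss, but the greedy structure is the same.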