Volume 56, pp. 157-186, 2022.

Decomposition and composition of deep convolutional neural networks and training acceleration via sub-network transfer learning

Linyan Gu, Wei Zhang, Jia Liu, and Xiao-Chuan Cai

Abstract

Deep convolutional neural networks (DCNNs) have led to significant breakthroughs in deep learning. However, larger models and larger datasets result in longer training times, slowing the progress of deep learning research. In this paper, following the idea of domain decomposition methods, we propose and study a new method to parallelize the training of DCNNs by decomposing and composing them. First, a global network is decomposed into several sub-networks by partitioning the width of the network (i.e., along the channel dimension) while keeping the depth constant. All sub-networks are trained individually and in parallel, without any interprocessor communication, on the correspondingly decomposed samples of the input data. Then, following the idea of nonlinear preconditioning, we propose a sub-network transfer learning strategy in which the weights of the trained sub-networks are composed to initialize the global network, which is then trained to further adapt the parameters. Theoretical analyses are provided to show the effectiveness of the sub-network transfer learning strategy. More precisely, we prove that (1) the initialized global network can extract the feature maps learned by the sub-networks, and (2) the initialization of the global network yields upper and lower bounds on the cost function and the classification accuracy in terms of the corresponding values of the trained sub-networks. Experiments are presented to evaluate the proposed methods. The results show that the sub-network transfer learning strategy indeed provides a good initialization and accelerates the training of the global network. Moreover, after further training, the transfer learning strategy shows almost no loss of accuracy, and sometimes achieves higher accuracy than random initialization.
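To make the decomposition and composition steps concrete, the following is a minimal PyTorch sketch, illustrative only and not the authors' implementation: the two-conv-layer architecture, the split into two equal-width sub-networks, and the zero-initialization of the cross-channel blocks are all assumptions made for this example.

import torch
import torch.nn as nn

def make_net(in_ch, mid_ch, out_ch):
    # A tiny DCNN: conv -> relu -> conv. Depth is fixed; only the width varies.
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1),
    )

# Decomposition along the channel dimension: a global network of width 8
# and two sub-networks of width 4 each (equal widths assumed for simplicity).
global_net = make_net(in_ch=3, mid_ch=8, out_ch=8)
sub_nets = [make_net(in_ch=3, mid_ch=4, out_ch=4) for _ in range(2)]

# (Each sub-network would now be trained independently, in parallel,
#  on its decomposed share of the input data.)

# Composition: copy each trained sub-network's kernels into the matching
# channel block of the global network and zero the cross-channel blocks,
# so the composed network initially reproduces the sub-networks' features.
with torch.no_grad():
    for layer_idx in (0, 2):  # the two conv layers
        g = global_net[layer_idx]
        g.weight.zero_()
        g.bias.zero_()
        offset = 0
        for s in (sn[layer_idx] for sn in sub_nets):
            oc, ic = s.weight.shape[:2]
            # The first layer sees the full input; hidden layers see only
            # their own channel slice (valid here because widths are equal).
            in_lo = 0 if layer_idx == 0 else offset
            g.weight[offset:offset + oc, in_lo:in_lo + ic] = s.weight
            g.bias[offset:offset + oc] = s.bias
            offset += oc

# Informal check of the reproduction property: the composed network's first
# four output channels coincide with the first sub-network's output.
x = torch.randn(1, 3, 16, 16)
assert torch.allclose(global_net(x)[:, :4], sub_nets[0](x), atol=1e-6)

Because the cross-channel blocks are zero, the initialized global network extracts exactly the feature maps learned by the sub-networks, which is the intuition behind result (1); the subsequent training of the global network then adapts the cross-channel weights.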


Key words

deep convolutional neural networks, decomposition and composition, parallel training, transfer learning, domain decomposition

AMS subject classifications

68W10, 68W40
