Efficient training of deep learning models through improved adaptive sampling

Training of Deep Neural Networks (DNNs) is very computationally demanding, and resources are typically spent on training instances that do not provide the most benefit to the network's learning; instead, the most relevant instances should be prioritized during training. Herein we present an improved version of the Adaptive Sampling (AS) method (Gopal, 2016), extended for the training of DNNs. As our main contribution, we formulate a probability distribution over data instances that minimizes the variance of the gradient norms with respect to the network's loss function. This distribution is combined with the optimal distribution over data classes previously derived by Gopal, and the improved AS is used to replace uniform sampling with the objective of accelerating the training of DNNs. Our proposal is comparatively evaluated against uniform sampling and against Online Batch Selection (Loshchilov & Hutter, 2015). Results from training a Convolutional Neural Network on the MNIST dataset with the Adadelta and Adam optimizers over different training batch sizes show the effectiveness and superiority of our proposal.
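For the instance-level distribution described in the abstract, the variance-minimizing choice in importance sampling assigns each example a probability proportional to the norm of its loss gradient, and sampled examples are reweighted by the inverse of their probability so that the mini-batch gradient estimate remains unbiased. The sketch below illustrates this general importance-sampling step only; the per-example gradient norms and the `batch_size` value are illustrative assumptions and do not reproduce the exact procedure of the chapter.

```python
import numpy as np

def sampling_distribution(grad_norms, eps=1e-12):
    """Probabilities proportional to per-example gradient norms
    (the variance-minimizing importance-sampling distribution)."""
    norms = np.asarray(grad_norms, dtype=np.float64) + eps
    return norms / norms.sum()

def draw_batch(grad_norms, batch_size, rng=None):
    """Sample mini-batch indices and the weights 1/(N * p_i) that
    keep the reweighted gradient estimate unbiased."""
    rng = np.random.default_rng() if rng is None else rng
    p = sampling_distribution(grad_norms)
    idx = rng.choice(len(p), size=batch_size, replace=True, p=p)
    weights = 1.0 / (len(p) * p[idx])
    return idx, weights

# Illustrative usage with made-up gradient norms (hypothetical values).
norms = np.array([0.1, 2.5, 0.7, 1.3, 0.05])
indices, weights = draw_batch(norms, batch_size=3)
print(indices, weights)
```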

Data and Resources

Additional Information

Field	Value
Source	https://doi.org/10.1007/978-3-030-77004-4_14
Authors	JI Avalos-López, A Rojas-Domínguez, M Ornelas-Rodríguez, M Carpio, ...
Last updated	October 11, 2025, 01:23 (UTC)
Created	October 11, 2025, 01:23 (UTC)
Publication	Chapter
Type	Publication