Evolutionary Training of Deep Belief Networks...
URL: https://doi.org/10.13053/rcs-148-3-10
Two of the most representative deep architectures are Deep Convolutional Neural Networks and Deep Belief Networks (DBNs). Both can be applied to the problem of pattern classification. Nevertheless, they differ in their training method: while the former is trained by backpropagating the error through the whole network, the latter is typically pre-trained layer by layer with an unsupervised algorithm known as Contrastive Divergence (CD) and then fine-tuned with a gradient descent algorithm. Although metaheuristic algorithms have been widely applied to hyperparameter tuning, little has been published on alternative methods for pre-training DBNs. In this work, we substitute the conventional pre-training method with an evolutionary optimization algorithm, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Pre-training is achieved by minimizing the so-called reconstruction error. The proposal is validated on the problem of MNIST digit recognition by training a Deep Belief Network, following the methodology described by Hinton and Salakhutdinov (Science, 2006). It is also compared against the well-known Genetic Algorithm (GA). We provide evidence that, although its computational cost is significantly higher than that of CD, CMA-ES leads to a significantly smaller reconstruction error than both CD and the GA.
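The following is a minimal sketch of the idea the abstract describes: pre-training a single RBM layer by treating its flattened weights and biases as a CMA-ES search vector and minimizing one-step reconstruction error, rather than running Contrastive Divergence. It is not the authors' implementation; the layer sizes, random placeholder batch, initial step size `sigma0`, and iteration budget are all illustrative assumptions. It uses the `cma` package (N. Hansen's pycma). Note the search-space dimensionality grows as `n_visible * n_hidden`, so MNIST-scale layers make this far more expensive than CD, consistent with the abstract's observation about computational cost.

```python
import numpy as np
import cma

# Toy layer sizes (assumed for illustration; MNIST would use 784 visible units).
n_visible, n_hidden = 64, 16
rng = np.random.default_rng(0)
data = rng.random((32, n_visible))  # placeholder batch; real code would load MNIST

def sigmoid(x):
    # Clip to avoid overflow warnings in np.exp for extreme inputs.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

def unpack(theta):
    """Map the flat CMA-ES search vector back to RBM parameters (W, b_vis, b_hid)."""
    W = theta[: n_visible * n_hidden].reshape(n_visible, n_hidden)
    b_vis = theta[n_visible * n_hidden : n_visible * n_hidden + n_visible]
    b_hid = theta[-n_hidden:]
    return W, b_vis, b_hid

def reconstruction_error(theta):
    """Mean squared error between the batch and its one-step reconstruction."""
    W, b_vis, b_hid = unpack(theta)
    hidden = sigmoid(data @ W + b_hid)     # up-pass
    recon = sigmoid(hidden @ W.T + b_vis)  # down-pass with tied weights
    return float(np.mean((data - recon) ** 2))

dim = n_visible * n_hidden + n_visible + n_hidden
# sigma0 = 0.1 and maxiter = 30 are assumptions, not values from the paper.
es = cma.CMAEvolutionStrategy(np.zeros(dim), 0.1, {"maxiter": 30})
while not es.stop():
    candidates = es.ask()  # sample a population of candidate weight vectors
    es.tell(candidates, [reconstruction_error(c) for c in candidates])
print("best reconstruction error:", es.result.fbest)
```

In a full DBN this loop would run once per layer, with each trained layer's hidden activations becoming the next layer's input, before supervised fine-tuning of the whole stack.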
Additional information

Field | Value
---|---
Data last updated | October 11, 2025
Metadata last updated | October 11, 2025
Created | October 11, 2025
Format | HTML
License | No license provided
Id | c334ad49-6be6-4517-8b25-5bc83b1f1741
Package id | bc86979a-4217-4811-8be8-483e9532d36c
State | active