Comparison of Parallel Versions of SA and GA...
URL: https://doi.org/10.1109/tensymp46218.2019.8971088
Deep learning (DL) is playing an increasingly important role in our lives. It has already made a huge impact in areas such as cancer diagnosis, precision medicine, self-driving cars, predictive forecasting, and speech recognition. The painstakingly handcrafted feature extractors used in traditional learning, classification, and pattern recognition systems do not scale to large data sets. In many cases, depending on the problem complexity, DL can also overcome the limitations of earlier shallow networks that prevented efficient training and the abstraction of hierarchical representations of multi-dimensional training data. A deep neural network (DNN) uses multiple (deep) layers of units with highly optimized algorithms and architectures. This paper reviews several optimization methods that improve training accuracy and reduce training time. We delve into the mathematics behind the training algorithms used in recent deep networks and describe current shortcomings, enhancements, and implementations. The review also covers different types of deep architectures, such as deep convolutional networks, deep residual networks, recurrent neural networks, reinforcement learning, variational autoencoders, and others.
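As a minimal sketch of the "multiple (deep) layers of units" and gradient-based training the abstract refers to (not taken from the reviewed paper; the layer sizes, learning rate, and XOR toy data below are illustrative assumptions), the following shows a tiny two-hidden-layer network trained with plain gradient descent, the baseline that the surveyed optimization methods improve on:

```python
# Illustrative sketch only: a small multi-layer network trained by
# manual backpropagation and plain gradient descent on XOR.
# Architecture, learning rate, and data are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR data set: 4 samples, 2 features, binary targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two hidden layers of 8 units each, then a single output unit.
W1 = rng.normal(0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1.0, (8, 8)); b2 = np.zeros(8)
W3 = rng.normal(0, 1.0, (8, 1)); b3 = np.zeros(1)

lr = 0.5
for step in range(5000):
    # Forward pass through the stacked layers.
    h1 = np.tanh(X @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    p = sigmoid(h2 @ W3 + b3)

    # Binary cross-entropy loss and its gradient w.r.t. the output logits.
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    dlogits = (p - y) / len(X)

    # Backward pass (manual backpropagation through both hidden layers).
    dW3 = h2.T @ dlogits;            db3 = dlogits.sum(0)
    dh2 = dlogits @ W3.T * (1 - h2 ** 2)
    dW2 = h1.T @ dh2;                db2 = dh2.sum(0)
    dh1 = dh2 @ W2.T * (1 - h1 ** 2)
    dW1 = X.T @ dh1;                 db1 = dh1.sum(0)

    # Plain gradient-descent update.
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2),
                        (b2, db2), (W3, dW3), (b3, db3)):
        param -= lr * grad

print("final loss:", round(float(loss), 4))
print("predictions:", p.ravel().round(2))
```

Optimizers such as momentum, RMSprop, or Adam replace the last update step with parameter-specific, adaptive step sizes; that family of improvements is what the review surveys.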
Additional information

Field | Value |
---|---|
Data last updated | October 11, 2025 |
Metadata last updated | October 11, 2025 |
Created | October 11, 2025 |
Format | HTML |
License | No license provided |
Id | bdfd243f-e938-4d4d-b162-61bff29e95a2 |
Package id | a2740417-a02e-4c1a-b647-9b9bcf21cf6e |
State | active |