The Power of Deep Learning: Current Research and Future Trends
Menoufia Journal of Electronic Engineering Research
Article 13, Volume 28, Issue 2, July 2019, Pages 217-244
Document Type: Original Article
DOI: 10.21608/mjeer.2019.62778
Authors
Ahmed Ghozia*; Gamal Attiya; Nawal El-Fishawy
Dept. of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University.
Abstract
Deep learning, in general, refers to multi-layered neural networks inspired by the structure and reasoning processes of the human brain. Rather than relying on handcrafted features, it allows knowledge to be acquired directly from data. Deep networks approximate complex target functions in a nested fashion, in which more complex representations with larger receptive fields are estimated from less abstract ones. Deep learning also makes it possible to incorporate formal domain knowledge and to replace a large collection of traditional algorithmic methods with flexible differentiable modules. Together, these properties make deep learning powerful and adaptable in establishing the connection between input data and target outputs. Research frontiers are now moving toward the remaining challenges. This paper presents a comprehensive overview of deep learning. It describes where deep learning originated, what has been accomplished with it, which research areas are currently being investigated through it, and, most importantly, what its challenges and open problems are, since these are the issues that, once addressed, will lead toward general, conscious Artificial Intelligence (AI). The aim is to enable graduates, practitioners, researchers, and enthusiasts to collaborate effectively in the field of deep learning.
Keywords
Deep learning; Neural networks; Reinforcement learning; Unsupervised learning; Generative models; Meta learning; Symbolic AI