Publication: Bachelor thesis, 2015

Deep learning for multivariate financial time series

Batres-Estrada, Bilberto
Open Access | English
  • Published: 01 Jan 2015
  • Publisher: KTH, Matematisk statistik
  • Country: Sweden
Abstract
Deep learning is a framework for training and modelling neural networks that has recently surpassed conventional methods in many learning tasks, most prominently image and voice recognition. This thesis uses deep learning algorithms to forecast financial data. The deep learning framework is used to train a deep neural network: a Deep Belief Network (DBN) coupled to a Multilayer Perceptron (MLP), which is used to choose stocks for forming portfolios. The resulting portfolios achieve better returns than the median of the constituent stocks. The stocks forming the S&P 500 are included in the study. The results obtained from the deep neural network are c...
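The model the abstract describes, a DBN feeding an MLP, is conventionally built by greedy layer-wise pretraining: each layer is a Restricted Boltzmann Machine (RBM) trained with contrastive divergence on the previous layer's hidden activations, and the stacked features are then passed to a supervised classifier. The sketch below is a minimal illustration of that pretraining scheme, not the thesis's actual implementation; layer sizes, learning rate, and the 1-step contrastive divergence (CD-1) update are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann Machine trained with 1-step contrastive divergence (CD-1)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def cd1_step(self, v0):
        # Positive phase: hidden probabilities and a binary sample given the data.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one Gibbs step back to the visible layer and up again.
        v1 = sigmoid(h0_sample @ self.W.T + self.b_v)
        h1 = self.hidden_probs(v1)
        # CD-1 update: difference of data-driven and model-driven correlations.
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

def pretrain_dbn(X, layer_sizes, epochs=20):
    """Greedy layer-wise pretraining: each RBM models the previous layer's features."""
    rbms, inp = [], X
    for n_hidden in layer_sizes:
        rbm = RBM(inp.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_step(inp)
        rbms.append(rbm)
        inp = rbm.hidden_probs(inp)  # features fed to the next layer
    return rbms, inp
```

The final `inp` returned by `pretrain_dbn` is the feature representation that, in a DBN-plus-MLP pipeline, would be handed to a supervised output layer and fine-tuned with back-propagation.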
Subjects
arXiv: Computer Science::Neural and Evolutionary Computation
2 Neural Networks
  2.1 Single Layer Neural Network
    2.1.1 Artificial Neurons
    2.1.2 Activation Function
    2.1.3 Single-Layer Feedforward Networks
    2.1.4 The Perceptron
    2.1.5 The Perceptron As a Classifier
  2.2 Multilayer Neural Networks
    2.2.1 The Multilayer Perceptron
    2.2.2 Function Approximation with MLP
    2.2.3 Regression and Classification
    2.2.4 Deep Architectures
  2.3 Deep Belief Networks
    2.3.1 Boltzmann Machines
    2.3.2 Restricted Boltzmann Machines
    2.3.3 Deep Belief Networks
    2.3.4 Model for Financial Application

3 Training Neural Networks
  3.1 Back-Propagation Algorithm
    3.1.1 Steepest Descent
    3.1.2 The Delta Rule
      Case 1: Output Layer
      Case 2: Hidden Layer
      Summary
    3.1.3 Forward and Backward Phase
      Forward Phase
      Backward Phase
    3.1.4 Computation of δ for Known Activation Functions
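The training chapter's outline above follows the standard back-propagation derivation: a forward phase computes activations, and a backward phase applies the delta rule in two cases, directly at the output layer and recursively at hidden layers. A minimal sketch for a one-hidden-layer network with sigmoid units and squared error; the network shape, learning rate, and example input are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Forward phase: propagate the input through the hidden layer to the output.
def forward(x, W1, W2):
    h = sigmoid(W1 @ x)  # hidden activations
    y = sigmoid(W2 @ h)  # output activations
    return h, y

# Backward phase: the delta rule for sigmoid units, phi'(v) = phi(v)(1 - phi(v)).
def deltas(x, t, W1, W2):
    h, y = forward(x, W1, W2)
    # Case 1, output layer: delta = (target - output) * phi'(v).
    delta_out = (t - y) * y * (1.0 - y)
    # Case 2, hidden layer: back-propagate the output deltas through W2.
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
    return delta_out, delta_hid, h

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))   # hidden-layer weights (3 hidden units, 2 inputs)
W2 = rng.normal(size=(1, 3))   # output-layer weights (1 output unit)
x = np.array([0.5, -0.2])
t = np.array([1.0])

_, y_old = forward(x, W1, W2)
d_out, d_hid, h = deltas(x, t, W1, W2)

# Steepest-descent weight updates: Delta w = eta * delta * input to that layer.
eta = 0.1
W2 += eta * np.outer(d_out, h)
W1 += eta * np.outer(d_hid, x)
_, y_new = forward(x, W1, W2)
```

One such step moves the output toward the target; repeating it over a training set is the steepest-descent procedure the chapter derives.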

5 Experiments and Results
  5.1 Experiments
  5.2 Benchmarks
  5.3 Results
    5.3.1 Summary of Results

A Appendix
  A.1 Statistical Physics
    A.1.1 Logistic Belief Networks
    A.1.2 Gibbs Sampling
    A.1.3 Back-Propagation: Regression
  A.2 Miscellaneous

