Learning to Trade with Q-Reinforcement Learning
(A tensorflow and Python focus)
Ben Ball & David Samuel
www.prediction-machines.com
Special thanks to -
Algorithmic Trading (e.g., HFT) vs Human Systematic Trading
Often looking at opportunities that exist on the
microsecond time horizon. Typically using statistical
microstructure models and techniques from machine
learning. Automated, but usually with hand-crafted
signals, exploits, and algorithms.
Predicting the next tick
Systematic strategies learned from experience of
"reading market behavior", operating over
tens of seconds or minutes.
Predicting complex plays
Take inspiration from DeepMind: learning to play Atari video games
+
Input
FC ReLU
FC ReLU
Functional
pass-through
Output
Could we do something similar for trading markets?
*Network images from http://www.asimovinstitute.org/neural-network-zoo/
Introduction to Reinforcement Learning
How does a child learn to ride a bike?
Lots of this
leading to
this
rather than this…
Machine Learning vs Reinforcement Learning
• No supervisor
• Trial and error paradigm
• Feedback delayed
• Time sequenced
• Agent influences the environment
Agent
Environment
Action a_t | State S_t | Reward r_t & next state S_t+1
Good textbook on this by
Sutton and Barto -
s_t, a_t, r_t, s_t+1, a_t+1, r_t+1, s_t+2, a_t+2, r_t+2, …
REINFORCEjs
GridWorld :
---demo---
Value function
Policy function
Reward function
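The value, policy, and reward functions come together in the tabular Q-learning update. A minimal sketch on a toy 1-D grid world (the states, rewards, and hyperparameters are hypothetical, not the REINFORCEjs GridWorld demo itself):

```python
import random

# Tabular Q-learning on a toy 1-D grid world. States 0..4, actions:
# 0 = left, 1 = right. Reaching state 4 pays reward 1 and ends the episode.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}

def step(s, a):
    """Deterministic environment: move left/right, clipped to [0, 4]."""
    s2 = max(0, min(4, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy policy over the current value estimates
            if rng.random() < EPSILON:
                a = rng.choice((0, 1))
            else:
                a = max((0, 1), key=lambda act: Q[(s, act)])
            s2, r, done = step(s, a)
            target = r + (0.0 if done else GAMMA * max(Q[(s2, 0)], Q[(s2, 1)]))
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])  # TD update
            s = s2

train()
```

After training, the greedy policy walks right toward the rewarding state, and the value estimates decay by gamma per step away from it.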
Application to Trading
Typical dynamics of a mean-reverting asset or pairs-trading where the spread exhibits mean reversion
Upper range soft boundary
Lower range soft boundary
Mean
Price of mean-reverting asset or spread
Map the movement of the mean
reverting asset (or spread) into a
discrete lattice where the price
dynamics become transitions
between lattice nodes.
We started with a simple 5 node
lattice but this can be increased
quite easily.
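The mapping from a continuous price onto the 5-node lattice might look like this (the step size is an assumption for illustration; the slides give no concrete scale):

```python
def price_to_node(price, mean, step=1.0, max_index=2):
    """Map a continuous spread price onto a discrete lattice index.

    Index 0 sits at the mean; +/-1 and +/-2 are one and two steps away,
    clipped at the soft boundaries. Illustrative helper with an assumed
    step size.
    """
    i = round((price - mean) / step)
    return max(-max_index, min(max_index, i))
```

Increasing the lattice resolution is then just a matter of raising `max_index` and shrinking `step`.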
Asset / Spread price evolving with time
State transitions of lattice simulation of mean reversion:
Short / Flat / Long
Spread price mapped onto lattice index
i = 0
i = -1
i = -2
i = 1
i = 2
sell buy
These map into:
(State, Action, Reward)
triplets used in the QRL algorithm
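A toy version of that lattice game, sketched as an environment emitting (State, Action, Reward) triplets (the dynamics and reward are illustrative placeholders, not the actual simulator):

```python
import random

BUY, SELL, HOLD = 0, 1, 2

class LatticeSpreadEnv:
    """Toy mean-reversion lattice game (a sketch of the idea, not the
    real simulator). State = (lattice index, position), where position
    is -1 short, 0 flat, +1 long. Reward is the P&L of closing a
    position, measured in lattice steps."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.i, self.pos, self.entry = 0, 0, 0

    def step(self, action):
        reward = 0
        if action == BUY and self.pos == 0:      # open long at node i
            self.pos, self.entry = 1, self.i
        elif action == SELL and self.pos == 0:   # open short at node i
            self.pos, self.entry = -1, self.i
        elif action == SELL and self.pos == 1:   # close long
            reward, self.pos = self.i - self.entry, 0
        elif action == BUY and self.pos == -1:   # close short
            reward, self.pos = self.entry - self.i, 0
        # Mean-reverting random walk: the higher the node, the more
        # likely the next move is down (and vice versa), clipped at +/-2.
        p_down = 0.5 + 0.15 * self.i
        self.i = max(-2, min(2, self.i + (-1 if self.rng.random() < p_down else 1)))
        return (self.i, self.pos), reward
```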
Mean Reversion Game Simulator
Level 3
Example buy transaction
Example sell transaction
http://www.prediction-machines.com/blog/ - for demonstration
As per the Atari games example, our QRL/DQN plays the trading game
… over and over
Building a DQN and defining its topology
Using Keras and Trading-Gym
+
Input
FC ReLU
FC ReLU
Functional
pass-through
Output
+
Input
FC ReLU
FC ReLU
Functional
pass-through
Output
Dueling Double DQN (vanilla DQN does not converge well, but this method works much better)
training network | target network
Inputs: lattice position, (long, short, flat) position state
Outputs: value of Buy, value of Sell, value of Do Nothing
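The dueling head in that topology can be sketched numerically. This NumPy toy uses random placeholder weights (not a trained network, and not the slides' Keras code) just to show how the value and advantage streams recombine into the three action values:

```python
import numpy as np

def dueling_q(features, w_v, w_a):
    """Dueling head: split the network output into a scalar state value V
    and per-action advantages A, then recombine as
        Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Weights are illustrative; a real network learns them.
    Actions: [Buy, Sell, Do Nothing]."""
    v = features @ w_v        # state value stream, scalar
    a = features @ w_a        # advantage stream, shape (3,)
    return v + a - a.mean()

# Assumed input features: one-hot lattice position (5 nodes)
# + one-hot position state (long / short / flat).
x = np.zeros(8)
x[2] = 1.0   # lattice node i = 0
x[7] = 1.0   # flat
rng = np.random.default_rng(0)
q = dueling_q(x, rng.normal(size=8), rng.normal(size=(8, 3)))
```

Subtracting the mean advantage pins down the otherwise unidentifiable split between V and A.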
Trading-Gym Architecture
Runner
warmup()
train()
run()
Children class
Agent
act()
observe()
end()
DQN
Double DQN
A3C
Abstract class
Memory
add()
sample()
Brain
train()
predict()
Data Generator
Random
Walks
Deterministic
Signals
CSV Replay
Market Data
Streamer
Single Asset
Multi Asset
Market
Making
Environment
render()
step()
reset()
next()
rewind()
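The class diagram above can be skeletonized as follows (method names match the diagram; the bodies are placeholders, not the actual Trading-Gym source):

```python
import random
from abc import ABC, abstractmethod

class Agent(ABC):
    """Abstract agent, per the diagram: DQN, Double DQN, A3C subclass this."""
    @abstractmethod
    def act(self, state): ...           # choose an action for the state
    @abstractmethod
    def observe(self, transition): ...  # store (s, a, r, s') and learn
    def end(self):                      # episode-end hook
        pass

class Memory:
    """Experience store with the diagram's add()/sample() interface."""
    def __init__(self):
        self._buffer = []
    def add(self, transition):
        self._buffer.append(transition)
    def sample(self, n):
        return self._buffer[-n:]        # placeholder; real code samples randomly

class RandomAgent(Agent):
    """Minimal concrete Agent: acts uniformly, remembers transitions."""
    def __init__(self, actions, seed=0):
        self.actions, self.memory = actions, Memory()
        self.rng = random.Random(seed)
    def act(self, state):
        return self.rng.choice(self.actions)
    def observe(self, transition):
        self.memory.add(transition)
```

A Runner would then drive warmup(), train(), and run() loops over an Environment's step()/reset(), delegating learning to a Brain's train()/predict().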
Trading-Gym - Open-Sourced
Prediction Machines releases the Trading-Gym environment as open source
- - demo - -
TensorFlow TradingBrain
released soon
TensorFlow TradingGym
available now
with Brain and DQN example
Prediction Machines releases the Trading-Gym environment as open source
References:
Insights In Reinforcement Learning (PhD thesis) by Hado van Hasselt
Human-level control through deep reinforcement learning
V Mnih, K Kavukcuoglu, D Silver, AA Rusu, J Veness, MG Bellemare, ...
Nature 518 (7540), 529-533
Deep Reinforcement Learning with Double Q-Learning
H Van Hasselt, A Guez, D Silver
AAAI, 2094-2100
Prioritized experience replay
T Schaul, J Quan, I Antonoglou, D Silver
arXiv preprint arXiv:1511.05952
Dueling Network Architectures for Deep Reinforcement Learning
Z Wang, T Schaul, M Hessel, H van Hasselt, M Lanctot, N de Freitas
The 33rd International Conference on Machine Learning, 1995–2003
DDDQN and
Tensorflow
Overview
1. DQN - DeepMind, Feb 2015 "DeepMind Nature"
http://www.davidqiu.com:8888/research/nature14236.pdf
a. Experience Replay
b. Separate Target Network
2. DDQN - Double Q-learning. DeepMind, Dec 2015
https://arxiv.org/pdf/1509.06461.pdf
3. Prioritized Experience Replay - DeepMind, Feb 2016
https://arxiv.org/pdf/1511.05952.pdf
4. DDDQN - Dueling Double Q-learning. DeepMind, Apr 2016
https://arxiv.org/pdf/1511.06581.pdf
Enhancements
Experience Replay
Removes correlation in sequences
Smooths over changes in data distribution
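A uniform replay buffer along those lines might look like this (a minimal sketch):

```python
import random
from collections import deque

class ReplayBuffer:
    """Uniform experience replay. Sampling random minibatches breaks the
    temporal correlation of consecutive transitions and smooths over
    shifts in the data distribution."""

    def __init__(self, capacity=10000, seed=0):
        self.buffer = deque(maxlen=capacity)   # oldest experiences fall off
        self.rng = random.Random(seed)

    def add(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        return self.rng.sample(list(self.buffer), batch_size)
```

Prioritized replay replaces the uniform `sample` with a draw weighted by TD error.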
Prioritized Experience Replay
Speeds up learning by choosing experiences with weighted distribution
Separate target network from Q network
Removes correlation with target - improves stability
Double Q learning
Removes much of the non-uniform overestimation by separating action selection from action evaluation
Dueling Q learning
Improves learning when many actions have similar values. Splits the Q value into two parts: a state value and a state-dependent action advantage
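The Double Q-learning idea (selection by the online network, evaluation by the target network) fits in a few lines; the argument shapes here are an assumption for illustration:

```python
import numpy as np

def double_q_target(q_online_next, q_target_next, reward, gamma=0.99, done=False):
    """Double Q-learning target. The online network *selects* the best
    next action; the target network *evaluates* it. This separation
    removes much of plain Q-learning's overestimation bias.
    Inputs are the two networks' Q-value vectors for the next state."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))         # selection: online net
    return reward + gamma * q_target_next[a_star]  # evaluation: target net
```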
Keras v Tensorflow
                           Keras   TensorFlow
High level                   ✓
Standardized API             ✓
Access to low level                     ✓
Tensorboard                  ✓*         ✓
Understand under the hood               ✓
Can use multiple backends    ✓
Install Tensorflow
My installation was on CentOS in Docker with GPU*, but I also installed locally on
Ubuntu 16 for this demo. *Built from source for maximum speed.
CentOS instructions were adapted from:
https://blog.abysm.org/2016/06/building-tensorflow-centos-6/
Ubuntu install was from:
https://www.tensorflow.org/install/install_sources
Tensorflow - what is it
A computational graph solver
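Loosely, a computational graph solver evaluates only the subgraph that a fetched node depends on. A pure-Python toy of the idea (not TensorFlow code):

```python
# Toy computational graph: nodes declare their inputs, and the solver
# evaluates the graph on demand -- loosely what tf.Session.run does for
# the subgraph a fetched tensor depends on.

class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

def const(v):
    return Node(lambda: v)              # leaf node: a constant value

def add(a, b):
    return Node(lambda x, y: x + y, a, b)

def mul(a, b):
    return Node(lambda x, y: x * y, a, b)

def run(node):
    """Recursively evaluate only the subgraph the fetched node needs."""
    return node.op(*(run(i) for i in node.inputs))

# y = (2 + 3) * 4
y = mul(add(const(2), const(3)), const(4))
```

Building the graph and running it are separate steps, which is what lets TensorFlow optimize, parallelize, and place the computation before any values flow.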
Tensorflow key API
Namespaces for organizing the graph and showing in tensorboard
with tf.variable_scope('prediction'):
Sessions
with tf.Session() as sess:
Create variables and placeholders
var = tf.placeholder('int32', [None, 2, 3], name='varname')
self.global_step = tf.Variable(0, trainable=False)
Session.run or variable.eval to run parts of the graph and retrieve values
pred_action = self.q_action.eval({self.s_t['p']: s_t_plus_1})
q_t, loss = self.sess.run([q['p'], loss], {target_q_t: target_q_t, action: action})
Trading-Gym
https://github.com/Prediction-Machines/Trading-Gym
Open sourced
Modelled after OpenAI Gym. Compatible with it.
Contains example of DQN with Keras
Contains pair trading example simulator and visualizer
Trading-Brain
https://github.com/Prediction-Machines/Trading-Brain
Two rich examples
Contains the Trading-Gym Keras example with suggested structuring
examples/keras_example.py
Contains example of Dueling Double DQN for single stock trading game
examples/tf_example.py
References
Much of the Brain and config code in this example is adapted from devsisters github:
https://github.com/devsisters/DQN-tensorflow
Our github:
https://github.com/Prediction-Machines
Our blog:
http://prediction-machines.com/blog/
Our job openings:
http://prediction-machines.com/jobopenings/
Video of this presentation:
https://www.youtube.com/watch?v=xvm-M-R2fZY