RL-Routing: An SDN Routing Algorithm Based on Deep Reinforcement Learning

Yi Ren Chen, Amir Rezapour*, Wen Guey Tzeng, Shi-Chun Tsai

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



Communication networks are difficult to model and predict because they have become highly sophisticated and dynamic. We develop a reinforcement learning routing algorithm (RL-Routing) to solve a traffic engineering (TE) problem of SDN in terms of throughput and delay. RL-Routing solves the TE problem via experience, instead of building an accurate mathematical model. We consider comprehensive network information for state representation and use a one-to-many network configuration for routing choices. Our reward function, which uses network throughput and delay, is adjustable for optimizing either upward or downward network throughput. After appropriate training, the agent learns a policy that predicts the future behavior of the underlying network and suggests better routing paths between switches. The simulation results show that RL-Routing obtains higher rewards and enables a host to transfer a large file faster than the Open Shortest Path First (OSPF) and Least Loaded (LL) routing algorithms on various network topologies. For example, on the NSFNet topology, the sum of rewards obtained by RL-Routing is 119.30, whereas those of OSPF and LL are 106.59 and 74.76, respectively. The average transmission time for a 40 GB file using RL-Routing is 25.2 s; those of OSPF and LL are 63 s and 53.4 s, respectively.
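The abstract describes a reward function built from network throughput and delay that can be tuned toward either throughput or delay objectives. A minimal sketch of such a weighted reward is shown below; the function name, weights, and normalization bounds are illustrative assumptions, not the paper's actual formulation.

```python
def reward(throughput_mbps: float, delay_ms: float,
           alpha: float = 1.0, beta: float = 1.0,
           max_throughput: float = 1000.0, max_delay: float = 100.0) -> float:
    """Hypothetical throughput/delay reward (not the paper's exact formula).

    Reward increases with normalized throughput and decreases with
    normalized delay; alpha and beta let the operator bias the agent
    toward throughput or latency, mirroring the adjustable design
    described in the abstract.
    """
    t = min(throughput_mbps / max_throughput, 1.0)  # normalize to [0, 1]
    d = min(delay_ms / max_delay, 1.0)              # normalize to [0, 1]
    return alpha * t - beta * d
```

Raising `alpha` relative to `beta` would push the learned policy toward high-throughput paths, while the reverse would favor low-delay paths.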

Original language: English
Article number: 9171590
Pages (from-to): 3185-3199
Number of pages: 15
Journal: IEEE Transactions on Network Science and Engineering
Issue number: 4
State: Published - 1 Oct 2020


  • Cognitive SDN
  • Deep reinforcement learning
  • Routing algorithm
  • Software-defined networks
