Scalable power management using multilevel reinforcement learning for multiprocessors

Gung Yu Pan*, Jing Yang Jou, Bo-Cheng Lai

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Dynamic power management has become an imperative design factor for attaining energy efficiency in modern systems. Among various power management schemes, learning-based policies that adapt to different environments and applications have demonstrated superior performance to other approaches. However, they suffer from a scalability problem on multiprocessors as the number of cores in a system increases. In this article, we propose a scalable and effective online policy called MultiLevel Reinforcement Learning (MLRL). By exploiting a hierarchical paradigm, the time complexity of MLRL is O(n lg n) for n cores, and the convergence rate is greatly improved by compressing the redundant search space. Advanced techniques, such as function approximation and an action selection scheme, are included to enhance the generality and stability of the proposed policy. In simulations on the SPLASH-2 benchmarks, MLRL runs 53% faster and outperforms the state-of-the-art work with 13.6% energy saving and 2.7% latency penalty on average. The generality and scalability of MLRL are also validated through extensive simulations.
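
The abstract describes a hierarchical learning structure in which per-core decisions are coordinated through multiple levels rather than a single flat agent. The snippet below is a minimal, hypothetical sketch of how such a multilevel Q-learning power manager could be organized; it is not the paper's MLRL implementation, and the class names, state/action encodings, grouping scheme, and parameters are illustrative assumptions only.

```python
# Hypothetical sketch of a multilevel (tree-structured) Q-learning power
# manager. Not the paper's MLRL implementation; names and encodings are
# illustrative assumptions.
import random


class QAgent:
    """Tabular epsilon-greedy Q-learning agent for one node of the hierarchy."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}                      # (state, action) -> estimated value
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        values = [self.q.get((state, a), 0.0) for a in range(self.n_actions)]
        return max(range(self.n_actions), key=lambda a: values[a])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0)
                        for a in range(self.n_actions))
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)


class MultiLevelPowerManager:
    """Two-level hierarchy: a root agent assigns a coarse power budget to each
    group of cores, and one group agent picks a DVFS level conditioned on that
    budget. Decisions cascade top-down instead of searching the full joint
    action space of all cores at once."""

    def __init__(self, n_cores, group_size=4, n_budgets=3, n_dvfs_levels=4):
        self.groups = [list(range(i, min(i + group_size, n_cores)))
                       for i in range(0, n_cores, group_size)]
        self.root = QAgent(n_actions=n_budgets)
        self.group_agents = [QAgent(n_actions=n_dvfs_levels)
                             for _ in self.groups]

    def decide(self, workload_state):
        """Return one DVFS level per core group for an abstract workload state."""
        budget = self.root.act(workload_state)
        # Each group agent conditions its local state on the budget chosen above.
        return [agent.act((workload_state, budget))
                for agent in self.group_agents]


# Usage example with a toy workload state (e.g. a quantized utilization bin).
manager = MultiLevelPowerManager(n_cores=16)
print(manager.decide(workload_state=2))
```

Grouping cores under intermediate agents keeps each agent's action space small, which is the general intuition behind the scalability argument in the abstract; the exact state definitions, reward, and tree depth used by MLRL are specified in the full article.
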

Original language: English
Article number: 33
Journal: ACM Transactions on Design Automation of Electronic Systems
Volume: 19
Issue number: 4
DOIs
State: Published - 1 Jan 2014

Keywords

  • Dynamic power management
  • Multiprocessors
  • Reinforcement learning
