Fuzzy Q-Learning in SVD Reduced Dynamic State-Space

Authors

  • Szilveszter Kovács
  • Péter Baranyi

Keywords:

Reinforcement Learning, Fuzzy Q-Learning, Singular Value Decomposition

Abstract

Reinforcement Learning (RL) methods, which can cope with the difficulties of controlling an unknown environment, have recently been gaining popularity in the autonomous robotics community. One of the possible difficulties of applying reinforcement learning in complex situations is the huge size of the state-value- or action-value-function representation [17]. The case of continuous-environment (continuous-valued) reinforcement learning can be even more complicated, as the state-value- or action-value-functions turn into continuous functions. In this paper we suggest a way of tackling these difficulties by applying SVD (Singular Value Decomposition) methods [6], [19], [20].
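As a rough illustration of the idea only (not the authors' fuzzy rule-base reduction method, whose details are in the paper itself), a tabular Q-function can be compressed with a truncated SVD: if the Q-table is approximately low-rank, a few singular components reproduce it with far fewer stored parameters. All names and the synthetic Q-table below are assumptions for illustration.

```python
import numpy as np

# Hypothetical example: compress a tabular Q(s, a) with a truncated SVD.
# This is a minimal sketch of the general reduction idea, not the
# fuzzy Q-learning algorithm described in the paper.
n_states, n_actions = 100, 10

# A smooth, low-rank Q-table (assumed for illustration).
s = np.linspace(0.0, 1.0, n_states)[:, None]
a = np.linspace(0.0, 1.0, n_actions)[None, :]
Q = np.sin(2 * np.pi * s) * np.cos(np.pi * a) + 0.5 * s * a

# Full SVD, then keep only the r largest singular values.
U, sigma, Vt = np.linalg.svd(Q, full_matrices=False)
r = 2
Q_r = U[:, :r] @ np.diag(sigma[:r]) @ Vt[:r, :]

# Storage drops from n_states * n_actions values to roughly
# r * (n_states + n_actions + 1), while the approximation error
# stays small whenever Q is close to rank r.
err = np.linalg.norm(Q - Q_r) / np.linalg.norm(Q)
print(f"rank-{r} relative error: {err:.2e}")
```

Here the synthetic Q-table is a sum of two outer products, so it has rank 2 exactly and the rank-2 reconstruction is essentially lossless; real value functions are only approximately low-rank, and the retained rank trades accuracy against storage.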

Published

2003-12-30