Knowledge-base reduction in the expert knowledge-included FRIQ-learning

Authors

  • Tamás Tompa University of Miskolc
  • Szilveszter Kovács University of Miskolc

DOI:

https://doi.org/10.35925/j.multi.2021.4.8

Keywords:

reinforcement learning, heuristically accelerated reinforcement learning, expert knowledge-base, knowledge-base reduction, Q-learning, fuzzy Q-learning

Abstract

Reinforcement learning (RL) methods differ in how they represent knowledge: the conventional Q-learning method uses a Q-table, while fuzzy-based RL systems use a fuzzy rule-base. The size of the final knowledge-base (the number of elements in the Q-table, or the number of rules in the fuzzy rule-base) depends on the complexity of the problem and the dimensionality of the state-action space, so there may be cases when the number of elements in the final knowledge-base becomes unmanageably high. In fuzzy rule-based RL systems, rule-base reduction methods can be applied to shrink the complete rule-base. In Fuzzy Rule Interpolation based Q-learning (FRIQ-learning), rule-base reduction can optionally be performed after the learning phase. In expert knowledge-included FRIQ-learning, due to the way the knowledge-base is built, rules can end up close to one another; merging such nearby rules can significantly reduce the size of the final rule-base. The main goal of this paper is to introduce a rule-base reduction strategy for expert knowledge-included FRIQ-learning that is able to reduce the rule-base size during the construction (learning) phase itself.
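To illustrate the merging idea sketched in the abstract (not the authors' actual algorithm), the following minimal Python sketch greedily merges rules whose antecedents lie within a distance threshold, replacing each group with a single averaged rule. The function name `merge_close_rules`, the Euclidean distance measure, the averaging step, and the threshold `eps` are all illustrative assumptions.

```python
import numpy as np

def merge_close_rules(antecedents: np.ndarray,
                      q_values: np.ndarray,
                      eps: float = 0.1):
    """Greedy single-pass merge of nearby fuzzy rules.

    antecedents: (n_rules, n_dims) array of rule antecedent positions
    q_values:    (n_rules,) array of rule consequents (Q-values)
    """
    merged_ant, merged_q = [], []
    used = np.zeros(len(antecedents), dtype=bool)
    for i in range(len(antecedents)):
        if used[i]:
            continue
        # Collect all not-yet-merged rules within eps of rule i.
        dists = np.linalg.norm(antecedents - antecedents[i], axis=1)
        group = (dists <= eps) & ~used
        used |= group
        # Replace the group with one rule: averaged antecedent and Q-value.
        merged_ant.append(antecedents[group].mean(axis=0))
        merged_q.append(q_values[group].mean())
    return np.array(merged_ant), np.array(merged_q)

# Example: three rules, two of which are nearly identical.
ants = np.array([[0.10, 0.50], [0.12, 0.49], [0.90, 0.20]])
qs = np.array([1.0, 1.1, -0.5])
new_ants, new_qs = merge_close_rules(ants, qs, eps=0.05)
print(len(new_ants))  # 2 rules remain after merging
```

In a learning-phase variant such as the one the paper proposes, a check like this would run while the rule-base is being constructed rather than as a post-hoc pruning step; the sketch above only conveys the distance-based merging idea.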

Published

2021-02-22