Finding antecedent redundancy in Fuzzy Rule Interpolation-based Q-learning
DOI: https://doi.org/10.35925/j.multi.2019.4.56

Keywords: artificial intelligence, reinforcement learning, fuzzy rule-base reduction, antecedent redundancy

Abstract
This paper introduces novel methods to improve the efficiency of the automated knowledge extraction used in FRIQ-learning (Fuzzy Rule Interpolation-based Q-learning). For a given problem, the FRIQ-learning reinforcement learning method can construct a sparse fuzzy rule-base that does not need to contain all possible rules: thanks to fuzzy rule interpolation (FRI), it is sufficient to keep only the most important ones. Identifying the rules that are essential for solving a problem is not a trivial task. Several strategies for removing unimportant rules from the rule-base have already been introduced, but none of them address the antecedents of the rules. This paper introduces a solution that allows further reduction of the rule-base by targeting redundant antecedents, thus facilitating the creation of a sparse fuzzy rule-base from which knowledge can be directly extracted.
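To make the notion of antecedent redundancy concrete, the following minimal Python sketch illustrates one plausible test (this is an illustrative assumption, not the method proposed in the paper; the function name, rule representation, and tolerance are hypothetical): an antecedent dimension is treated as redundant when rules that differ only in that dimension still map to approximately the same consequent Q-value.

```python
def antecedent_is_redundant(rules, dim, tol=1e-3):
    """Hypothetical redundancy check: antecedent dimension `dim` carries no
    information if rules that agree on every other antecedent dimension also
    agree (within `tol`) on their consequent Q-values."""
    groups = {}
    for antecedents, q_value in rules:
        # Project the antecedent vector onto all dimensions except `dim`.
        key = tuple(a for i, a in enumerate(antecedents) if i != dim)
        groups.setdefault(key, []).append(q_value)
    return all(max(qs) - min(qs) <= tol for qs in groups.values())


# Toy rule-base: ((state_dim_0, state_dim_1, action), Q-value).
# Dimension 1 never changes the consequent, so it is redundant here.
rules = [
    ((0.0, 0.0, 0.0), 1.0),
    ((0.0, 1.0, 0.0), 1.0),
    ((1.0, 0.0, 0.0), 2.0),
    ((1.0, 1.0, 0.0), 2.0),
]
print(antecedent_is_redundant(rules, dim=1))  # True
print(antecedent_is_redundant(rules, dim=0))  # False
```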