MoËT: Interpretable and Verifiable Reinforcement Learning via Mixture of Expert Trees
Marko Vasic, Andrija Petrovic, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, Sarfraz Khurshid. Published in: CoRR (2019)
Keyphrases
- reinforcement learning
- multi-objective
- decision trees
- mixture model
- function approximation
- state space
- sufficient conditions
- domain experts
- reinforcement learning algorithms
- temporal difference
- dynamic programming
- multi-agent reinforcement learning
- human experts
- expert knowledge
- differential evolution
- tree structure
- supervised learning
- classification rules
- multiple objectives
- model-free
- expectation maximization
- multi-objective optimization
- expert advice
- tree nodes