Planning structural inspection and maintenance policies via dynamic programming and Markov processes. Part II: POMDP implementation.
K. G. Papakonstantinou, M. Shinozuka. Published in: Reliab. Eng. Syst. Saf. (2014)
Keyphrases
- Markov processes
- partially observable Markov decision processes
- dynamic programming
- optimal policy
- Markov decision problems
- Markov chain
- finite state
- state space
- reinforcement learning
- Markov decision processes
- planning problems
- Dec-POMDPs
- partially observable
- decision problems
- stochastic processes
- continuous-time Bayesian networks
- dynamical systems
- infinite horizon
- optimal control
- continuous state
- belief state
- transition probabilities
- initial state
- predictive state representations
- belief space
- machine learning
- probabilistic model
- non-stationary
- steady state
- stereo matching
- decision-theoretic