Incorporating Interpretable Output Constraints in Bayesian Neural Networks.
Wanqian Yang, Lars Lorch, Moritz A. Graule, Himabindu Lakkaraju, Finale Doshi-Velez. Published in: CoRR (2020)
Keyphrases
- neural network
- desired output
- decision making
- multi-layer
- Bayesian inference
- fault diagnosis
- constraint satisfaction
- Bayesian estimation
- network architecture
- recurrent neural networks
- self-organizing maps
- backpropagation
- data-driven
- Bayesian networks
- machine learning
- posterior probability
- maximum likelihood
- neural nets
- artificial neural networks
- posterior distribution
- geometric constraints
- feature selection