Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function).
Peter Eckersley
Published in: SafeAI@AAAI (2019)
Keyphrases
- utility function
- decision theory
- expected utility
- artificial general intelligence
- risk averse
- intelligent systems
- risk aversion
- general intelligence
- preference elicitation
- decision makers
- multi-attribute
- artificial intelligence
- minimax regret
- human-level intelligence
- decision problems
- optimization criterion
- multi-attribute utility
- AI systems
- social welfare
- risk neutral
- probability distribution
- computational intelligence
- utility maximization
- machine learning
- decision-theoretic
- case-based reasoning
- quasiconvex