Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function).
Peter Eckersley
Published in: CoRR (2019)
Keyphrases
- utility function
- decision theory
- expected utility
- artificial general intelligence
- risk averse
- risk aversion
- intelligent systems
- general intelligence
- minimax regret
- preference elicitation
- decision makers
- multi-attribute
- human level intelligence
- artificial intelligence
- decision problems
- optimization criterion
- utility maximization
- case based reasoning
- probability distribution
- risk neutral
- multi-attribute utility
- expert systems
- AI systems
- machine learning
- robust optimization
- computational intelligence
- multi-objective
- cost function
- quasiconvex
- decision making
- Cobb-Douglas