High-level preferences as positive examples in contrastive learning for multi-interest sequential recommendation.
Zizhong Zhu, Shuang Li, Yaokun Liu, Xiaowang Zhang, Zhiyong Feng, Yuexian Hou
Published in: World Wide Web (WWW) (2024)
Keyphrases
- positive examples
- negative examples
- high level
- learning process
- learning algorithm
- background knowledge
- low level
- reinforcement learning
- positive and unlabeled examples
- positive and negative
- training set
- prior knowledge
- active learning
- text classification
- user preferences
- positive and negative examples
- training data
- statistical queries
- positive training examples
- hypothesis space
- concept learning
- training examples
- unlabeled data
- model selection