Sukjin Hong
Publication Activity (10 Years)
Years Active: 2022-2024
Publications (10 Years): 8
Top Topics
Query Expansion
Language Models For Information Retrieval
N Gram
Language Modelling
Top Venues
CoRR
EMNLP
ACL (1)
EACL (Findings)
Publications
Janghwan Lee, Seongmin Park, Sukjin Hong, Minsoo Kim, Du-Seong Chang, Jungwook Choi: Improving Conversational Abilities of Quantized Large Language Models via Direct Preference Alignment. ACL (1) (2024)
Janghwan Lee, Seongmin Park, Sukjin Hong, Minsoo Kim, Du-Seong Chang, Jungwook Choi: Improving Conversational Abilities of Quantized Large Language Models via Direct Preference Alignment. CoRR (2024)
Jongwoo Ko, Seungjoon Park, Minchan Jeong, Sukjin Hong, Euijai Ahn, Du-Seong Chang, Se-Young Yun: Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective. CoRR (2023)
Minsoo Kim, Sihwa Lee, Janghwan Lee, Sukjin Hong, Du-Seong Chang, Wonyong Sung, Jungwook Choi: Token-Scaled Logit Distillation for Ternary Weight Generative Language Models. CoRR (2023)
Jongwoo Ko, Seungjoon Park, Minchan Jeong, Sukjin Hong, Euijai Ahn, Du-Seong Chang, Se-Young Yun: Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective. EACL (Findings) (2023)
Minsoo Kim, Sihwa Lee, Janghwan Lee, Sukjin Hong, Du-Seong Chang, Wonyong Sung, Jungwook Choi: Token-Scaled Logit Distillation for Ternary Weight Generative Language Models. NeurIPS (2023)
Minsoo Kim, Sihwa Lee, Sukjin Hong, Du-Seong Chang, Jungwook Choi: Understanding and Improving Knowledge Distillation for Quantization Aware Training of Large Transformer Encoders. EMNLP (2022)
Minsoo Kim, Sihwa Lee, Sukjin Hong, Du-Seong Chang, Jungwook Choi: Understanding and Improving Knowledge Distillation for Quantization-Aware Training of Large Transformer Encoders. CoRR (2022)