Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes.
Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
Published in: CoRR (2023)