RSS Towards Data Science - Medium

To Mask or Not to Mask: The Effect of Prompt Tokens on Instruction Tuning

This article discusses prompt-loss-weight (PLW), a parameter used when fine-tuning large language models (LLMs) on prompt-completion style datasets. Rather than the binary choice of masking or not masking prompt tokens, PLW gives smoother, more fine-grained control over how much prompt tokens contribute to the loss during fine-tuning. The author examines whether to mask prompt tokens, and how heavily to weight them, by comparing fine-tuning runs with and without prompt masking. The article also introduces the generation ratio (Rg), the ratio of completion length to prompt length, and explains its relevance to instruction-tuning datasets. It concludes with the author's experiments on the RACE dataset using a custom implementation of PLW.
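The core idea can be illustrated with a per-token weighted loss. The sketch below is a minimal, assumed PyTorch-style implementation (the function `plw_loss`, its tensor layout, and the weighting scheme are illustrative, not the article's exact code): prompt tokens receive weight `plw` and completion tokens weight 1, so `plw = 0` reproduces full prompt masking and `plw = 1` reproduces no masking.

```python
import torch
import torch.nn.functional as F

def plw_loss(logits, labels, prompt_mask, plw=0.1):
    """Cross-entropy where prompt-token loss is scaled by `plw`.

    logits:      (batch, seq_len, vocab) causal-LM outputs
    labels:      (batch, seq_len) target token ids
    prompt_mask: (batch, seq_len) 1.0 for prompt tokens, 0.0 for completion tokens
    plw:         weight on prompt tokens (0 = full masking, 1 = no masking)
    """
    # Shift so each position predicts the next token, as in standard causal-LM training.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    shift_mask = prompt_mask[:, 1:].contiguous()

    # Unreduced per-token cross-entropy.
    per_token = F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        reduction="none",
    ).view(shift_labels.size())

    # Weight prompt tokens by plw, completion tokens by 1.0, then take a weighted mean.
    weights = plw * shift_mask + (1.0 - shift_mask)
    return (weights * per_token).sum() / weights.sum()
```

Under this formulation, the generation ratio Rg (completion length divided by prompt length) indicates how much of each training sequence the completion tokens occupy, which is why it matters when deciding how strongly prompt tokens should be down-weighted.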
towardsdatascience.com