Using Better LLMs to Teach Lesser LLMs: Knowledge Distillation via Dynamic In-Context Prompting for LLM-Based Customer Service
Prof. Tong Wang
Assistant Professor of Marketing
Yale School of Management
Yale University
ABSTRACT
The rapid development of large language models (LLMs) presents both opportunities and challenges for deploying them in goal-oriented dialogues that involve complex human interaction, such as customer support and persuasion. Advanced LLMs like GPT-4 excel in these domains but are large and cost-prohibitive to deploy, while smaller, more economical models like Llama 2 offer limited performance. This paper proposes a novel approach to enhance the capabilities of smaller LLMs by leveraging the strategic prowess of their more advanced counterparts. Unlike traditional methods that focus on direct response learning, we introduce a strategy-centric imitation learning framework. Here, the advanced LLM acts as a teacher, imparting strategic thinking to the prompts of a lesser LLM and refining them iteratively until the student mimics the teacher effectively. We design an iterative process that alternates between scenario generation and strategy learning, producing a customized library of scenarios paired with optimized strategies. Crucially, our approach requires only black-box access to the models, facilitating easier integration across platforms without direct parameter manipulation. This strategy not only improves the functional capacity of smaller LLMs but also contributes to broader AI safety and interpretability by enabling domain experts to scrutinize the learned strategies. The results indicate significant potential for strategic knowledge transfer in real-world applications, enhancing the utility of LLM deployments in cost-sensitive environments.
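The teacher–student refinement loop sketched in the abstract can be illustrated as follows. This is a minimal sketch, not the paper's implementation: the callables `teacher`, `student`, and `judge` are hypothetical stand-ins for black-box LLM API calls and a response-similarity scorer, and the refinement rule (appending the teacher's hint to the strategy prompt) is a placeholder for the paper's strategy-learning step.

```python
from typing import Callable, Dict, List

def distill_strategies(
    scenarios: List[str],
    teacher: Callable[[str], str],        # black-box teacher LLM: scenario -> target response
    student: Callable[[str, str], str],   # black-box student LLM: (scenario, strategy) -> reply
    judge: Callable[[str, str], float],   # similarity score between teacher and student replies
    max_rounds: int = 3,
    threshold: float = 0.9,
) -> Dict[str, str]:
    """Iteratively refine a per-scenario strategy prompt until the student's
    reply mimics the teacher's, using only black-box access to both models.
    Returns a library mapping each scenario to its optimized strategy."""
    library: Dict[str, str] = {}
    for scenario in scenarios:
        strategy = ""  # start from an empty strategy prompt
        for _ in range(max_rounds):
            target = teacher(scenario)              # teacher's strategic response
            attempt = student(scenario, strategy)   # student's attempt under current strategy
            if judge(target, attempt) >= threshold:
                break  # student mimics the teacher well enough; stop refining
            # Placeholder refinement: fold the teacher's output into the prompt.
            strategy = (strategy + " " + target).strip()
        library[scenario] = strategy
    return library
```

In the full framework, scenario generation and strategy learning would alternate, with new scenarios probing weaknesses in the current strategy library; the loop above shows only the strategy-learning half for a fixed scenario set.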