LLM Series | 07: Andrew Ng's ChatGPT Prompt Course in Practice: Smart Customer Service Emails as an Example

Introduction

Egrets fly over the misty paddy fields; orioles warble in the shade of summer trees. Hello friends, I am the editor of the WeChat public account "Xiao Chuang You Ji Machine Learning": the little girl who sells Tieguanyin.
For more articles and updates, please follow the WeChat public account "Xiao Chuang You Ji Machine Learning". Upcoming posts will continue to cover topics such as model acceleration, model deployment, model compression, LLMs, and AI art, so stay tuned.

Following the previous posts in the ChatGPT Prompt engineering series (LLM series | 04: A guide to writing ChatGPT Prompts, 05: How to optimize ChatGPT Prompts, 06: ChatGPT Prompt practice: text summarization and inference), today's post introduces how to use ChatGPT for text expansion, with a detailed worked example: writing customized emails based on a customer review and its sentiment.

Text expansion here means feeding short text, such as a set of instructions or a list of topics, into a large language model (LLM) and having it generate longer text, such as an email or an essay on a given topic. This has some very practical uses, for example using an LLM as a brainstorming tool. But it also has risks, such as the possibility that some people may use it to generate large amounts of spam. So when you use these capabilities, please do so responsibly and in ways that benefit people.

In this chapter, we will show how to generate a customer service email for each customer review using the OpenAI API. This process also uses another input parameter of the model: the temperature, which controls how diverse the model's responses are.

Environment setup

import openai
import os

openai.api_key = "sk-xxx"

# Optional: route requests through a proxy if needed
os.environ['HTTP_PROXY'] = "http://XXX:xx"
os.environ['HTTPS_PROXY'] = "http://XXXX:xx"

# A helper that wraps the OpenAI API: takes a prompt and returns the completion
def get_completion(prompt, model="gpt-3.5-turbo", temperature=0):
    '''
    prompt: the prompt to send
    model: the model to call, gpt-3.5-turbo (ChatGPT) by default; users with beta access can choose gpt-4
    temperature: controls how random the model's output is
    '''
    messages = [{"role": "user", "content": prompt}]
    # Call OpenAI's ChatCompletion endpoint
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,
    )
    return response.choices[0].message["content"]

Customizing a customer service email

How can we write customized email replies based on a customer review and its sentiment? The idea is to use an LLM: given a customer review and its sentiment label, have the model generate a tailored response.

First, here is an example consisting of a review and its corresponding sentiment.

# The inference chapter shows how to classify a review's sentiment
sentiment = "negative"

# A product review
review = f"""
They were selling the 17-piece set for around $49 during the seasonal sale in November, \
about half off. But for some reason (perhaps price gouging), by the second week of December \
the price of the same set had gone up to somewhere between $70 and $89. \
The price of the 11-piece set also went up by about $10 or so. \
It looks okay, but the part of the base where the blade locks in does not look as good \
as in earlier versions from a few years ago. \
However, I plan to use it very gently. For example, \
I first grind hard items like beans, ice, and rice in the blender, then pulverize them \
into the serving size I want, switching to the whipping blade for finer flour, or using \
the cross-cutting blade first when making smoothies, then the flat blade for a finer/less pulpy result. \
A special tip for making smoothies: \
chop the fruits and vegetables and freeze them (if using spinach, lightly cook it until soft, \
then freeze it until use; if making sorbet, use a small-to-medium food processor), \
so you can avoid adding too much ice when making your smoothie. \
After about a year, the motor was making a funny noise. I called customer service, \
but the warranty had already expired, so I had to buy another one. \
Overall, the general quality of these products has gone down, so they are relying on \
brand recognition and consumer loyalty to maintain sales. \
It arrived within two days.
"""

Here we reuse the sentiment label obtained with the sentiment extraction Prompt from the text inference chapter. The review above is about a blender; now we will tailor a reply based on its sentiment.
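As a reminder of that earlier step, the sketch below shows one way such a sentiment label could be obtained before composing the reply. This is a minimal illustration, not the exact prompt from the inference chapter; the function name `build_sentiment_prompt` is our own.

```python
# A minimal sketch of a sentiment-classification prompt builder
def build_sentiment_prompt(review: str) -> str:
    """Build a prompt asking the model to classify a review's sentiment."""
    return f"""
What is the sentiment of the following product review, \
which is delimited by triple backticks?
Give your answer as a single word, either "positive" or "negative".
Review: ```{review}```
"""

review = "The motor started making a funny noise after a year."
prompt = build_sentiment_prompt(review)
# sentiment = get_completion(prompt)  # the model would be expected to answer "negative"
```

The returned label can then be fed into the email-generation prompt together with the review itself.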

The instructions are:
Assume you are a customer service AI assistant whose task is to send an email reply to the customer. Given the customer review delimited by triple backticks, generate a reply thanking the customer for their review.

details as follows:

prompt = f"""
You are a customer service AI assistant.
Your task is to send an email reply to a valued customer.
Given the customer review delimited by "```", generate a reply to thank the customer for their review. Make sure to use specific details from the review.
Write in a concise and professional tone.
Sign the email as "AI customer agent".
Customer review:
```{review}```
Review sentiment: {sentiment}
"""
response = get_completion(prompt)
print(response)

The generated results are as follows:

Dear Customer,

Thank you for your review of our product. We take your feedback very seriously and sincerely apologize for the issues you mentioned. We will take steps as soon as possible to improve the quality of our products and service.

We are glad that you were satisfied with our delivery speed, and we greatly appreciate you sharing your tips and techniques for using the product. We will incorporate these details into our product instructions to help more customers make better use of the product.

If you have any other questions or suggestions, please feel free to contact our customer service team. We will be happy to assist you.

Sincerely,

AI customer agent
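To generate a reply for each review in a batch of customer reviews, the prompt above can be wrapped in a small helper and applied per (review, sentiment) pair. This is a sketch under our own naming: `build_reply_prompt` is a hypothetical helper, not part of the course code.

```python
# Hypothetical helper wrapping the reply prompt so it can be reused per review
def build_reply_prompt(review: str, sentiment: str) -> str:
    return f"""
You are a customer service AI assistant.
Your task is to send an email reply to a valued customer.
Given the customer review delimited by triple backticks, \
generate a reply to thank the customer for their review. \
Make sure to use specific details from the review.
Write in a concise and professional tone.
Sign the email as "AI customer agent".
Customer review: ```{review}```
Review sentiment: {sentiment}
"""

reviews = [
    ("Arrived in two days, works great.", "positive"),
    ("The motor died after a year.", "negative"),
]
for review, sentiment in reviews:
    prompt = build_reply_prompt(review, sentiment)
    # reply = get_completion(prompt)  # one API call per review
```

Each review gets its own API call, so the model sees only one review at a time and can reference its specific details in the reply.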

Using the temperature parameter

Next, we will use a model parameter called "temperature", which controls the diversity of the model's responses. You can think of temperature as the degree of exploration or randomness in the model: in effect, it trades off exploration against exploitation.

For example, for the phrase "my favorite food is", the most likely next word is "pizza", followed by "sushi" and "taco". At a temperature of zero, the model always picks the most likely next word ("pizza"); at a higher temperature it will sometimes pick one of the less likely words, and at a high enough temperature it may even pick "taco", despite its low probability. As you can imagine, as the model keeps generating words, a response that starts "My favorite food is pizza" will diverge from one that starts "My favorite food is taco", so the two responses become increasingly different as generation proceeds.
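The effect of temperature on next-word choice can be illustrated with a toy softmax calculation. This is only a simplified sketch of the idea, not the model's real internals, and the word scores below are made-up numbers.

```python
import math

def softmax_with_temperature(scores, temperature):
    """Turn raw word scores into a probability distribution at a given temperature."""
    if temperature == 0:
        # Greedy decoding: all probability mass on the top-scoring word
        probs = [0.0] * len(scores)
        probs[scores.index(max(scores))] = 1.0
        return probs
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

words = ["pizza", "sushi", "taco"]
scores = [2.0, 1.0, 0.5]  # assumed relative scores (logits) for the next word

for t in (0, 0.7, 2.0):
    probs = softmax_with_temperature(scores, t)
    # Higher temperature flattens the distribution, giving "taco" more of a chance
    print(t, [round(p, 2) for p in probs])
```

At temperature 0, "pizza" is chosen with probability 1; as the temperature rises, the distribution flattens and the less likely words are sampled more often.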

In general, setting the temperature to zero is recommended when building applications that require predictable responses; the previous posts have all set the temperature to zero. If you are building a reliable, predictable system, set the temperature to zero to get consistent outputs. If you want to use the model more creatively and need a wider variety of outputs, use a higher temperature.

The following sets the temperature to 0.7:

prompt = f"""
You are a customer service AI assistant.
Your task is to send an email reply to a valued customer.
Given the customer email delimited by "```", generate a reply to thank the customer for their review.
If the sentiment is positive or neutral, thank them for their review.
If the sentiment is negative, apologize and suggest that they reach out to customer service.
Make sure to use specific details from the review.
Write in a concise and professional tone.
Sign the email as "AI customer agent".
Customer review: ```{review}```
Review sentiment: {sentiment}
"""
response = get_completion(prompt, temperature=0.7)
print(response)
response = get_completion(prompt, temperature=0.7)
print(response)

The output is as follows:

Dear Customer,

Thank you very much for your review of our product. We sincerely apologize for your dissatisfaction. We are committed to providing high-quality products and service, but we clearly fell short of your expectations this time.

We take the issues you raised very seriously and will make sure they are addressed. We suggest that you contact our customer service team for further assistance, and we hope to regain your trust and satisfaction.

Thank you again for your feedback on our product; we will do our best to improve our service quality. We wish you all the best!

Sincerely,

AI customer agent

At a temperature of zero, running the same prompt repeatedly should produce the same output every time. With a temperature of 0.7, you will get a different output on each run.

So, as you can see, this email differs from the one we got before, and running it again would produce yet another different email.

Therefore, it is worth experimenting with different temperature values to see how the output changes. In summary, at higher temperatures the model's output is more random: you can think of the assistant as more easily distracted, but perhaps also more creative.

Origin blog.csdn.net/ljp1919/article/details/131118024