10 Practical Prompt Engineering Techniques, Illustrated

This post collects 10 practical prompt engineering techniques and illustrates the main principle behind each one.

It aims to explain the first principles of these methods in a minimalist style, translating jargon into plain language and illustrating with examples.

I have also added some of my own understanding along the way; if anything is inaccurate, corrections are welcome.


One, Structured Prompt

A clear prompt can be designed with the structure: prompt = role + task + requirements + hint.

Simply put, this structure tells ChatGPT: who are you? What is the task? What are the requirements? How should it be done?
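As a minimal sketch, the structure can be expressed as a simple template function (the role, task, requirements, and hint values below are made-up examples):

```python
def build_prompt(role: str, task: str, requirements: str, hint: str) -> str:
    """Assemble a prompt from the four parts: role + task + requirements + hint."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Requirements: {requirements}\n"
        f"Hint: {hint}"
    )

prompt = build_prompt(
    role="an experienced English teacher",
    task="correct the grammar of the sentence I give you",
    requirements="explain each correction in one short sentence",
    hint="if the sentence is already correct, just reply 'correct'",
)
print(prompt)
```

Filling each slot explicitly forces you to answer the four questions above before the model ever sees the prompt.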


Two, Prompt Creator

Simply put, have ChatGPT act as a prompt engineering expert that helps you complete, refine, and improve your prompt.
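A minimal sketch of such a meta-prompt (the wording below is illustrative, not a canonical template):

```python
# A meta-prompt that asks the model to improve a rough draft prompt.
META_PROMPT = (
    "I want you to act as a prompt engineering expert. "
    "I will give you a rough prompt; rewrite it to be clearer, more specific, "
    "and better structured, then ask me up to three questions that would "
    "help you improve it further.\n\n"
    "My rough prompt: {draft}"
)

draft = "write a poem about autumn"
prompt = META_PROMPT.format(draft=draft)
print(prompt)
```

You then iterate: answer the model's questions, feed back its improved draft, and repeat until the prompt is good enough.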


Three, One/Few-Shot Prompt

No examples: zero-shot; one example: one-shot; several examples: few-shot.

If you have a lot of examples, consider fine-tuning the model weights instead.
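The zero/one/few-shot distinction is just how many worked examples you splice into the prompt; a minimal sketch:

```python
def few_shot_prompt(instruction, examples, query):
    """Build a prompt from (input, output) example pairs.
    Zero pairs = zero-shot, one pair = one-shot, several = few-shot."""
    parts = [instruction]
    for x, y in examples:
        parts.append(f"Input: {x}\nOutput: {y}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

examples = [("I love this film", "positive"), ("terrible service", "negative")]
prompt = few_shot_prompt("Classify the sentiment.", examples, "the food was great")
print(prompt)
```

Passing an empty `examples` list gives the zero-shot version of the same prompt.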


Four, CoT (Chain of Thought)

Include the chain of thought in the few-shot examples, so the model learns to output not only the result but also the reasoning process. This can significantly improve LLM performance.
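A minimal few-shot CoT prompt might look like this; the arithmetic example is the classic one from the chain-of-thought literature, and the point is that the example's answer shows its working:

```python
# One worked example whose answer spells out the reasoning, not just the result.
cot_example = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

question = (
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\nA:"
)
prompt = cot_example + question
print(prompt)
```

Because the example answer walks through the steps, the model imitates that pattern and reasons before answering the new question.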


Five, Self-Consistency CoT

Set the temperature above 0 (e.g. 0.4), have the model answer several times, and take a majority vote over the answers. This can significantly improve on plain CoT.
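The voting step can be sketched as follows; the mock sampler stands in for repeated LLM calls at temperature ~0.4:

```python
from collections import Counter

def self_consistency(sample_fn, n: int = 5):
    """Sample n chain-of-thought answers and return the majority vote."""
    answers = [sample_fn() for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Mock sampler: in practice each call would be one LLM completion.
samples = iter(["11", "11", "9", "11", "10"])
answer = self_consistency(lambda: next(samples), n=5)
print(answer)  # "11" wins the vote 3 to 1 to 1
```

Raising the temperature makes the sampled reasoning paths diverse; the vote then filters out the stray wrong paths.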


Six, Zero-Shot CoT

Without providing any examples, simply appending "Let's think step by step" to the end of the prompt achieves an effect close to few-shot CoT.


Also try: "Let's work this out in a step by step way to be sure we have the right answer."

According to tests, this phrase works even better.
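Appending the trigger phrase is a one-liner; the question below is an illustrative placeholder:

```python
TRIGGER = "Let's work this out in a step by step way to be sure we have the right answer."

def zero_shot_cot(question: str) -> str:
    """Append the zero-shot CoT trigger phrase to any question."""
    return f"{question}\n{TRIGGER}"

prompt = zero_shot_cot(
    "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"
)
print(prompt)
```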


Seven, Self-Ask Prompt

This prompt paradigm guides the LLM to split a complex question into simple sub-questions, answer them one by one, and then aggregate the results into a final answer.

The effect is similar to CoT, but because the LLM is explicitly required to pose sub-questions and answer them, the generated content is more constrained, and the results are sometimes better.
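A minimal self-ask prompt, following the follow-up/intermediate-answer format from the self-ask paper (the worked example is illustrative):

```python
# One worked example teaches the follow-up / intermediate-answer format.
SELF_ASK_TEMPLATE = (
    "Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?\n"
    "Are follow up questions needed here: Yes.\n"
    "Follow up: How old was Theodor Haecker when he died?\n"
    "Intermediate answer: Theodor Haecker was 65 years old when he died.\n"
    "Follow up: How old was Harry Vaughan Watkins when he died?\n"
    "Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.\n"
    "So the final answer is: Harry Vaughan Watkins.\n\n"
    "Question: {question}\n"
    "Are follow up questions needed here:"
)

prompt = SELF_ASK_TEMPLATE.format(
    question="Who was president of the U.S. when superconductivity was discovered?"
)
print(prompt)
```

The fixed "Follow up:" / "Intermediate answer:" labels are what constrain the generation more tightly than free-form CoT.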


Eight, ReAct (Reason + Act)

Solve the problem by following the pattern: thought -> action -> observation -> thought -> action -> observation -> ...

ReAct is implemented in a reinforcement-learning-style paradigm: you need to define an interactive environment, env.

The agent is the LLM; each action interacts with the environment (e.g. searching the web, calling tools, executing code).
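The loop can be sketched as below; the scripted "LLM" and toy env are stand-ins for a real model and real tools:

```python
def react_loop(llm_step, env, max_steps: int = 5):
    """Repeat thought -> action -> observation until the LLM emits a Finish action."""
    history = []
    for _ in range(max_steps):
        thought, action, arg = llm_step(history)
        history.append(("thought", thought))
        if action == "Finish":
            return arg
        observation = env(action, arg)  # e.g. a web search or tool call
        history.append(("observation", observation))
    return None

# Toy environment: a canned lookup standing in for a real search tool.
def env(action, arg):
    return {"Search[capital of France]": "Paris"}.get(f"{action}[{arg}]", "unknown")

# Scripted "LLM": each step returns (thought, action, argument).
script = iter([
    ("I need to look up the capital of France.", "Search", "capital of France"),
    ("The observation says Paris, so I can answer.", "Finish", "Paris"),
])
answer = react_loop(lambda history: next(script), env)
print(answer)
```

In a real system, `llm_step` would prompt the model with the full thought/action/observation history and parse its next action from the completion.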


AutoGPT is also a product of this agent-style prompting paradigm. Its main prompt structure is as follows:

Thoughts (current thinking) -> Reasoning (reasoning process) -> Plan (follow-up plan) -> Criticism (self-critical review) -> Next action

Nine, Reflexion (self-reflection after failure)

Solve the problem following the pattern: task -> attempt -> evaluation -> (on failure) reflect on the cause of failure -> retry -> ...

Adding the reflection step can significantly increase the success rate. The authors suggest that reflecting helps the LLM build long-term memory of past experience.

Reflexion is also implemented in the reinforcement-learning-style paradigm and requires an interactive environment env; it comes from the same group of authors as ReAct.
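A minimal sketch of the loop; the iterator of canned attempts stands in for a real LLM agent, evaluator, and reflection step:

```python
def reflexion_loop(try_task, evaluate, reflect, max_trials: int = 3):
    """try -> evaluate -> on failure, reflect and retry with the reflection as memory."""
    memory = []
    for _ in range(max_trials):
        result = try_task(memory)       # the agent sees its accumulated reflections
        if evaluate(result):
            return result, memory
        memory.append(reflect(result))  # store the failure analysis as long-term memory
    return None, memory

# Toy stand-ins: first attempt fails, second succeeds after one reflection.
attempts = iter(["wrong answer", "correct answer"])
result, memory = reflexion_loop(
    try_task=lambda mem: next(attempts),
    evaluate=lambda r: r == "correct answer",
    reflect=lambda r: f"'{r}' failed; try a different approach",
)
print(result, memory)
```

The key design choice is that reflections are fed back into the next attempt's prompt, so the agent does not repeat the same mistake.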


Ten, LangChain

Turn local documents into a knowledge base: given a query, retrieve the most relevant content by text embedding vector similarity, then splice it into the prompt using a template.

The core technologies are the embedding algorithm and vector database retrieval.
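A minimal sketch of the retrieval step, using toy 3-dimensional "embeddings" in place of a real embedding model and vector database:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve_and_prompt(query_vec, docs, question, top_k: int = 1):
    """docs: list of (embedding, text). Rank chunks by similarity to the
    query embedding and splice the best ones into a prompt template."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[0]), reverse=True)
    context = "\n".join(text for _, text in ranked[:top_k])
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

# Toy embeddings standing in for a real embedding model + vector DB.
docs = [
    ([1.0, 0.0, 0.0], "Refunds are processed within 7 days."),
    ([0.0, 1.0, 0.0], "Shipping takes 3-5 business days."),
]
prompt = retrieve_and_prompt([0.9, 0.1, 0.0], docs, "How long do refunds take?")
print(prompt)
```

LangChain packages exactly this pipeline (document splitting, embedding, vector store lookup, prompt templating) behind a higher-level API.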



Reply with the keyword "chatgpt" to the public account Algorithm Gourmet House to get the source notebook for this post and more ChatGPT-related prompt tips.


Origin blog.csdn.net/Python_Ai_Road/article/details/131266771