《ChatGPT Prompt Engineering for Developers》
ChatGPT hint project for developers
shadow
Taking advantage of the vacation, I studied the prompt course, made some simplifications and sorted out key knowledge points, and shared them with everyone.
LLM Achievable Tasks
include:
Summary (such as summarizing user reviews)
Inference (e.g. sentiment classification, topic extraction)
Transforming text (e.g. translation, rewriting)
Extensions (such as auto-composing emails)
Summarize with ChatGPT https://chirper.ai/shadowai
Inference - Sentiment Classification
Convert text and turn an esoteric article into a story suitable for children
extensions, auto-write emails, introductory articles
prompt skills
When you use a prompt to tune your LLM, consider sending the prompt to someone who is smart but doesn't understand the details of your assignment. If LLM is not working properly, sometimes it is because the prompt is not clear enough.
Principle number one: Write clear and specific instructions.
The second principle: give the model enough time to think.
Don't confuse clear prompts with short prompts, as in many cases longer prompts actually provide more clarity and context, which is beneficial for LLM to match expected output.
Principle 1: Clear and Specific Instructions
Tip 1: Use specifiers
Use delimiters to clearly indicate different parts of the input. The discriminator can be any symbol, such as ```, """, < >, <tag> </tag>, so that the model clearly knows which are independent parts to avoid hint injection.
Prompt injection refers to user instructions in the input that may contradict our instructions, causing the model to follow the user's instructions instead of our instructions.
Without discriminators, users may add irrelevant inputs, causing the model to output wrong results. Therefore, using discriminators can improve the accuracy and stability of the model.
Tip 2: Structure output
To make parsing model output easier, it may be helpful to require model output in a structured format such as HTML or JSON.
Tip 3: Whether the conditions are met
If the task has some assumptions that are not necessarily satisfied, we can tell the model to check those assumptions first, point out and stop the task if they are not satisfied.
Tip 4: Few Sample Hints
few-shot prompting. This approach is to provide examples that have successfully performed the desired task before letting the model perform the actual task.
Principle 2: Give the model time to think
If you give the model a task that is too complex, it may get incorrect results in a short period of time.
Tip 1: Complete step by step
First, we can use explicit steps to complete a task. In this example, we feed the model a paragraph containing the story of Jack and Jill, and instruct the model to complete four tasks with explicit steps:
1. First, summarize the text in one sentence
2. Second translate the overview into French
3. Then list each name in French overview
4. And output a JSON object containing two keys "French summary" and "num names".
After running this model, we can see that the model has completed these four tasks respectively, and output the results in the format we require.
Tip 2: Let the model sort out before giving conclusions
Sometimes we get better results when we explicitly instruct the model to sort out the order of things before drawing conclusions.
In this question, we ask the model to judge whether the student's answer is correct. First, we have this math problem, and then the student's solution.
Xiaobai's Prompt Getting Started Experiment Guide & Mixlab Recommendation
opus
For more tips and engineering skills, you can pay attention to the community or the knowledge planet~~