GPT Best Practices (translated from the official OpenAI guide)

https://platform.openai.com/docs/guides/gpt-best-practices/gpt-best-practices

This guide shares strategies and tactics for getting better results from GPT. The methods described here can sometimes be combined for greater effect. We encourage experimentation to find what works best for you.

Some of the examples demonstrated here currently work only with our most capable model, gpt-4. If you don't yet have access to gpt-4, consider joining the waitlist. In general, if you find that a GPT model fails at a task and a more capable model is available, it's often worth trying again with the more capable model.

https://platform.openai.com/docs/guides/gpt-best-practices/six-strategies-for-getting-better-results

https://platform.openai.com/docs/guides/gpt-best-practices/write-clear-instructions

GPT can't read your mind. If outputs are too long, ask for brief replies. If outputs are too simple, ask for expert-level writing. If you dislike the format, demonstrate the format you'd like to see. The less GPT has to guess at what you want, the more likely you'll get it.

Tactics:

https://platform.openai.com/docs/guides/gpt-best-practices/provide-reference-text

GPT can confidently fabricate answers, especially when asked about esoteric topics or for citations and URLs. Just as a sheet of notes can help a student do better on a test, providing reference text to GPT can help it answer with fewer fabrications.

Tactics:

https://platform.openai.com/docs/guides/gpt-best-practices/split-complex-tasks-into-simpler-subtasks

Just as it is good practice in software engineering to decompose a complex system into a set of modular components, the same holds for tasks submitted to GPT. Complex tasks tend to have higher error rates than simpler tasks. Furthermore, complex tasks can often be re-defined as workflows of simpler tasks in which the outputs of earlier tasks are used to construct the inputs to later tasks.

Tactics:

https://platform.openai.com/docs/guides/gpt-best-practices/give-gpts-time-to-think

If asked to multiply 17 by 28, you might not know the answer instantly, but you can still work it out with time. Similarly, GPT makes more reasoning errors when trying to answer right away rather than taking time to work out an answer. Asking for a chain of reasoning before an answer can help GPT reason its way toward correct answers more reliably.

Tactics:

https://platform.openai.com/docs/guides/gpt-best-practices/use-external-tools

Compensate for GPT's weaknesses by feeding it the outputs of other tools. For example, a text retrieval system can tell GPT about relevant documents. A code execution engine can help GPT do math and run code. If a task can be done more reliably or efficiently by a tool rather than by GPT, offload it to get the best of both worlds.

Tactics:

https://platform.openai.com/docs/guides/gpt-best-practices/test-changes-systematically

Improving performance is easier if you can measure it. In some cases a modification to a prompt will achieve better performance on a few isolated examples but lead to worse overall performance on a more representative set of examples. Therefore, to be sure that a change has a net positive impact on performance, it may be necessary to define a comprehensive test suite (also known as an "eval").

Tactics:

https://platform.openai.com/docs/guides/gpt-best-practices/tactics

Each of the strategies listed above can be instantiated with specific tactics. These tactics are meant to provide ideas for things to try. They are by no means fully comprehensive, and you should feel free to experiment with ideas not represented here.

https://platform.openai.com/docs/guides/gpt-best-practices/strategy-write-clear-instructions

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-include-details-in-your-query-to-get-more-relevant-answers

To get a highly relevant response, make sure the request provides any important details or context. Otherwise, you leave it to the model to guess what you mean.

Worse: How do I add numbers in Excel?
Better: How do I add up a row of dollar amounts in Excel? I want to do this automatically for a whole sheet of rows, with all the totals ending up on the right in a column called "Total".

Worse: Who's president?
Better: Who was the president of Mexico in 2021, and how frequently are elections held?

Worse: Write code to calculate the Fibonacci sequence.
Better: Write a TypeScript function to efficiently calculate the Fibonacci sequence. Comment the code liberally to explain what each piece does and why it's written that way.

Worse: Summarize the meeting notes.
Better: Summarize the meeting notes in a single paragraph. Then write a markdown list of the speakers and each of their key points. Finally, list the next steps or action items suggested by the speakers, if any.

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-ask-the-model-to-adopt-a-persona

The system message can be used to specify the persona the model uses in its replies.

system

When I ask for help to write something, you will reply with a document that contains at least one joke or playful comment in every paragraph.

user

Write a thank-you note to my steel bolt supplier for getting the delivery in on time and on short notice. This made it possible for us to deliver an important order.

open in the playground

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-use-delimiters-to-clearly-indicate-distinct-parts-of-the-input

Delimiters such as triple quotes, XML tags, section headings, etc. can help to distinguish the parts of text to be treated differently.

user

Summarize the text delimited by triple quotes with a haiku. """Insert text here"""

open in the playground

system

You will be provided with a pair of articles (delimited with XML tags) about the same topic. First summarize the arguments of each article. Then indicate which of them makes the better argument and explain why.

user

<article>Insert first article here</article> <article>Insert second article here</article>

open in the playground

system

You will be provided with a paper abstract and a suggested title for the paper. The paper title should give the reader a good idea of the topic of the paper but should also be eye-catching. If the title does not meet these criteria, suggest 5 alternatives.

user

Abstract: insert abstract here Title: insert title here

open in the playground

For simple tasks such as this one, using delimiters might not make a difference in the output quality. However, the more complex a task is, the more important it is to disambiguate the task details. Don't make GPT work to understand exactly what you are asking of it.

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-specify-the-steps-required-to-complete-a-task

Some tasks are best specified as a series of steps. Writing out the steps explicitly makes it easier for the model to follow them.

system

Use the following step-by-step instructions to respond to user inputs. Step 1 - The user will provide you with text in triple quotes. Summarize this text in one sentence with a prefix that says "Summary:". Step 2 - Translate the summary from Step 1 into Spanish, with a prefix that says "Translation:".

user

"""Insert text here"""

open in the playground

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-provide-examples

Providing general instructions that apply to all examples is generally more efficient than demonstrating every permutation of a task by example, but in some cases providing examples may be easier. For instance, you may intend for the model to copy a particular style of responding to user queries that is difficult to describe explicitly. This is known as "few-shot" prompting.

system

Answer in a consistent style.

user

Teach me about patience.

assistant

The river that carves the deepest valley flows from a gentle spring; the greatest symphony begins with a single note; the most intricate tapestry begins with a single thread.

user

Teach me about the ocean.

open in the playground
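In API terms, few-shot examples are passed as alternating user/assistant messages ahead of the real query. Here is a minimal sketch using the pre-1.0 OpenAI Python library (the model name is an illustrative assumption; use whichever model you have access to):

```python
import openai

# Few-shot prompting: the example user/assistant pair demonstrates the
# desired style; only the final user message is the real query.
response = openai.ChatCompletion.create(
    model="gpt-4",  # assumed model
    messages=[
        {"role": "system", "content": "Answer in a consistent style."},
        {"role": "user", "content": "Teach me about patience."},
        {"role": "assistant", "content": (
            "The river that carves the deepest valley flows from a gentle "
            "spring; the greatest symphony begins with a single note; the "
            "most intricate tapestry begins with a solitary thread."
        )},
        {"role": "user", "content": "Teach me about the ocean."},
    ],
)
print(response["choices"][0]["message"]["content"])
```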

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-specify-the-desired-length-of-the-output

You can ask the model to produce outputs that are of a given target length. The targeted output length can be specified in terms of the count of words, sentences, paragraphs, bullet points, etc. Note however that instructing the model to generate a specific number of words does not work with high precision. The model can more reliably generate outputs with a specific number of paragraphs or bullet points.

user

Summarize the text delimited by triple quotes in about 50 words. """Insert text here"""

open in the playground

user

Summarize the text delimited by triple quotes in 2 paragraphs. """Insert text here"""

open in the playground

user

Summarize the text delimited by triple quotes in 3 bullet points. """Insert text here"""

open in the playground

https://platform.openai.com/docs/guides/gpt-best-practices/strategy-provide-reference-text

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-instruct-the-model-to-answer-using-a-reference-text

If we can provide the model with credible information relevant to the current query, then we can instruct the model to use the provided information to write its answer.

system

Use the provided articles delimited by triple quotes to answer questions. If the answer cannot be found in the articles, write "I could not find an answer."

user

<insert articles, each delimited by triple quotes> Question: <insert question here>

open in the playground

Given that GPTs have limited context windows, in order to apply this tactic we need some way to dynamically look up information that is relevant to the question being asked. Embeddings can be used to implement efficient knowledge retrieval. See the tactic "Use embeddings-based search to implement efficient knowledge retrieval" for more details on how to implement this.

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-instruct-the-model-to-answer-with-citations-from-a-reference-text

If the input has been supplemented with relevant knowledge, the model can be instructed to add citations to its answers by referencing passages from the provided documents. Note that citations in the output can then be verified programmatically by string matching within the provided documents.

system

You will be provided with a document delimited by triple quotes and a question. Your task is to answer the question using only the provided document and to cite the passage(s) of the document used to answer the question. If the document does not contain the information needed to answer this question then simply write: "Insufficient information." If an answer to the question is provided, it must be annotated with a citation. Use the following format for citing relevant passages ({"citation": ...}).

user

"""<insert document here>""" Question: <insert question here>

open in the playground
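Once the model produces output in the citation format above, the verbatim-quote check can be automated. A minimal sketch (the regex assumes the exact {"citation": ...} format requested in the system message):

```python
import json
import re

def verify_citations(answer: str, document: str) -> list:
    """For each {"citation": "..."} annotation in the model's answer,
    check that the quoted passage appears verbatim in the source document."""
    results = []
    for match in re.finditer(r'\{"citation":\s*("(?:[^"\\]|\\.)*")\}', answer):
        passage = json.loads(match.group(1))  # decode the JSON string literal
        results.append((passage, passage in document))
    return results
```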

https://platform.openai.com/docs/guides/gpt-best-practices/strategy-split-complex-tasks-into-simpler-subtasks

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-use-intent-classification-to-identify-the-most-relevant-instructions-for-a-user-query

For tasks in which lots of independent sets of instructions are needed to handle different cases, it can be beneficial to first classify the type of query and to use that classification to determine which instructions are needed. This can be achieved by defining fixed categories and hard-coding instructions that are relevant for handling tasks in a given category. This process can also be applied recursively to decompose a task into a sequence of stages. The advantage of this approach is that each query will contain only those instructions that are required to perform the next stage of a task, which can result in lower error rates compared to using a single query to perform the whole task. This can also result in lower costs, since larger prompts cost more to run (see pricing information).

For example, suppose for a customer service application, queries can be classified as follows:

system

You will be provided with customer service queries. Classify each query into a primary category and a secondary category. Provide your output in JSON format with the keys: primary and secondary.

Primary categories: Billing, Technical Support, Account Management, or General Inquiry.

Billing secondary categories:
- Unsubscribe or upgrade
- Add a payment method
- Explanation for charge
- Dispute a charge

Technical Support secondary categories:
- Troubleshooting
- Device compatibility
- Software updates

Account Management secondary categories:
- Password reset
- Update personal information
- Close account
- Account security

General Inquiry secondary categories:
- Product information
- Pricing
- Feedback
- Speak to a human

user

I need to get my internet working again.

open in the playground

Based on the classification of customer queries, a more specific set of instructions can be provided to the GPT model to handle the next steps. For example, suppose a customer needs help with "troubleshooting."

system

You will be provided with customer service inquiries that require troubleshooting in a technical support context. Help the user by:
- Asking them to check that all cables to/from the router are connected. Note that it is common for cables to come loose over time.
- If all cables are connected and the issue persists, asking them which router model they are using.
- Now you will advise them how to restart their device:
-- If the model number is MTD-327J, advise them to push the red button and hold it for 5 seconds, then wait 5 minutes before testing the connection.
-- If the model number is MTD-327S, advise them to unplug and replug it, then wait 5 minutes before testing the connection.
- If the customer's issue persists after restarting the device and waiting 5 minutes, connect them to IT support by outputting {"IT support requested"}.

user

I need to get my internet working again.

open in the playground

Notice that the model has been instructed to emit special strings to indicate when the state of the conversation changes. This allows us to turn our system into a state machine where the state determines which instructions are injected. By keeping track of state, which instructions are relevant in that state, and optionally which state transitions are allowed from that state, we can put guardrails around the user experience that would be hard to achieve with a less structured approach.
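A minimal sketch of this state-machine pattern, assuming a hypothetical `chat(system, user)` helper that wraps a single chat completion call (the state names and transition triggers are illustrative):

```python
# Only the instructions for the current state are sent with each query.
STATE_INSTRUCTIONS = {
    "classify": "<classification system message from above>",
    "troubleshoot": "<troubleshooting system message from above>",
    "it_support": "<instructions for the IT support hand-off>",
}

def next_state(state: str, model_output: str) -> str:
    """Advance the dialog state based on the model's output for this turn."""
    if state == "classify" and '"secondary": "Troubleshooting"' in model_output:
        return "troubleshoot"
    if state == "troubleshoot" and '{"IT support requested"}' in model_output:
        return "it_support"  # the special string signals a transition
    return state  # otherwise stay in the current state

def handle_turn(state: str, user_message: str):
    reply = chat(system=STATE_INSTRUCTIONS[state], user=user_message)
    return next_state(state, reply), reply
```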

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-for-dialogue-applications-that-require-very-long-conversations-summarize-or-filter-previous-dialogue

Since GPTs have a fixed context length, a dialogue between a user and an assistant in which the entire conversation is included in the context window cannot continue indefinitely.

There are various workarounds to this problem, one of which is to summarize previous turns in the conversation. Once the size of the input reaches a predetermined threshold length, this could trigger a query that summarizes part of the conversation, and the summary of the prior conversation could be included as part of the system message. Alternatively, prior conversation could be summarized asynchronously in the background throughout the entire conversation.
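A minimal sketch of the threshold-triggered variant, counting tokens with tiktoken (the threshold, the number of verbatim turns kept, and the `summarize` helper are all assumptions):

```python
import tiktoken

TOKEN_THRESHOLD = 3000  # assumed budget, leaving room for the reply
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def maybe_compress(history: list) -> list:
    """If the conversation exceeds the threshold, replace older turns with
    a model-written summary carried in the system message."""
    n_tokens = sum(len(enc.encode(m["content"])) for m in history)
    if n_tokens < TOKEN_THRESHOLD:
        return history  # still fits; nothing to do
    older, recent = history[:-4], history[-4:]  # keep the last turns verbatim
    summary = summarize(older)  # hypothetical helper: one summarization query
    return [{"role": "system",
             "content": f"Summary of the conversation so far: {summary}"}] + recent
```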

Another solution is to dynamically select the parts of the conversation that are most relevant to the current query. See the tactic "Use embeddings-based search to implement efficient knowledge retrieval".

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-summarize-long-documents-piecewise-and-construct-a-full-summary-recursively

Since GPTs have a fixed context length, they cannot be used to summarize a text longer than the context length minus the length of the generated summary in a single query.

To summarize a very long document, such as a book, we can use a sequence of queries to summarize each section of the document. Section summaries can be concatenated and summarized, producing summaries of summaries. This process can proceed recursively until an entire document is summarized. If it's necessary to use information about earlier sections in order to make sense of later sections, then a further trick that can be useful is to include a running summary of the text that precedes any given point in the book while summarizing content at that point. The effectiveness of this procedure for summarizing books has been studied in previous research by OpenAI using variants of GPT-3.
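A minimal recursive sketch of this procedure (chunking by character count for simplicity; `summarize_text` stands in for a single summarization query):

```python
def summarize_document(text: str, chunk_size: int = 8000) -> str:
    """Summarize each chunk, concatenate the summaries, and recurse
    until the whole document fits in a single summarization query."""
    if len(text) <= chunk_size:
        return summarize_text(text)  # hypothetical helper: one model query
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    partial_summaries = "\n".join(summarize_text(c) for c in chunks)
    return summarize_document(partial_summaries, chunk_size)
```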

https://platform.openai.com/docs/guides/gpt-best-practices/strategy-give-gpts-time-to-think

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-instruct-the-model-to-work-out-its-own-solution-before-rushing-to-a-conclusion

Sometimes we get better results when we explicitly instruct the model to reason from first principles before coming to a conclusion. Suppose, for example, we want a model to evaluate a student's solution to a math problem. The most obvious way to approach this is to simply ask the model whether the student's solution is correct.

system

Determine whether the student's solution is correct or not.

user

Problem statement: I'm building a solar installation and I need help with figuring out the finances. - Land costs $100/sqft - I can buy solar panels for $250/sqft - I negotiated a maintenance contract that will cost me $100K per year plus $10/sqft What is the total cost for the first year of operation as a function of square footage. Student Solution: Let x be the installation size in square feet. 1. Land cost: 100x 2. Solar panel cost: 250x 3. Maintenance cost: 100,000 + 100x Total cost: 100x + 250x + 100,000 + 100x = 450x + 100,000

assistant

The student's solution is correct.

open in the playground

But the student's solution is actually incorrect! We can make the model successfully notice this by prompting the model to generate its own solution first.

system

Start by coming up with your own solution to the problem. Then compare your solution to the student's solution and assess whether the student's solution is correct. Don't judge the student's solution until you have completed the problem yourself.

user

Problem statement: I'm building a solar installation and I need help with figuring out the finances. - Land costs $100/sqft - I can buy solar panels for $250/sqft - I negotiated a maintenance contract that will cost me $100K per year plus $10/sqft What is the total cost for the first year of operation as a function of square footage. Student Solution: Let x be the installation size in square feet. 1. Land cost: 100x 2. Solar panel cost: 250x 3. Maintenance cost: 100,000 + 100x Total cost: 100x + 250x + 100,000 + 100x = 450x + 100,000

assistant

Let x be the installation size in square feet. 1. Land cost: 100x 2. Solar panel cost: 250x 3. Maintenance cost: 100,000 + 10x Total cost: 100x + 250x + 100,000 + 10x = 360x + 100,000 The student's solution is incorrect. They made an error in the maintenance cost calculation, using 100x instead of 10x. The correct total cost for the first year of operations is 360x + 100,000.

open in the playground

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-use-inner-monologue-or-a-sequence-of-queries-to-hide-the-model-s-reasoning-process

The preceding tactic demonstrates that it is sometimes important for the model to reason in detail about a problem before answering a specific question. For some applications, the reasoning process the model uses to arrive at a final answer would be inappropriate to share with the user. For example, in tutoring applications we may want to encourage students to work out their own answers, but a model's reasoning process about the student's solution could reveal the answer to the student.

Inner monologue is a tactic that can be used to mitigate this. The idea is to instruct the model to put the parts of the output that are meant to be hidden from the user into a structured format that makes them easy to parse. Then, before presenting the output to the user, the output is parsed and only part of it is made visible.

system

Follow these steps to answer the user query.

Step 1 - First work out your own solution to the problem. Don't rely on the student's solution since it may be incorrect. Enclose all your work for this step within triple quotes (""").

Step 2 - Compare your solution to the student's solution and evaluate if the student's solution is correct or not. Enclose all your work for this step within triple quotes (""").

Step 3 - If the student made a mistake, determine what hint you could give the student without giving away the answer. Enclose all your work for this step within triple quotes (""").

Step 4 - If the student made a mistake, provide the hint from the previous step to the student (outside of triple quotes). Instead of writing "Step 4 - ..." write "Hint:".

user

Problem Statement: <insert problem statement> Student Solution: <insert student solution>

open in the playground
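A minimal sketch of the parsing step, assuming the structured format requested above (hidden work inside triple quotes, the visible hint prefixed with "Hint:"):

```python
import re

def visible_part(model_output: str) -> str:
    """Strip the triple-quoted inner monologue and return only the text
    intended for the student."""
    without_hidden = re.sub(r'"""[\s\S]*?"""', "", model_output)
    return without_hidden.strip()
```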

Alternatively, this can be achieved with a series of queries, where all but the last one hide their output from the end user.

First, we can let the model solve the problem on its own. Since this initial query doesn't require the student's solution, it can be omitted. This provides the additional advantage that the model's solution will not be biased by the student's attempted solution.

user

<insert problem statement>

open in the playground

Next, we can have the model use all available information to evaluate the correctness of the student's solution.

system

Compare your solution to the student's solution and assess whether the student's solution is correct.

user

Problem statement: """<insert problem statement>""" Your solution: """<insert model-generated solution>""" Student's solution: """<insert student's solution>"""

open in the playground

Finally, we can have the model use its own analysis to construct a reply in the persona of a helpful tutor.

system

You are a math tutor. If the student made an error, offer a hint to the student in a way that does not reveal the answer. If the student did not make an error, simply offer them an encouraging comment.

user

Problem statement: """<insert problem statement>""" Your solution: """<insert model-generated solution>""" Student's solution: """<insert student's solution>""" Analysis: """<insert model-generated analysis from previous step>"""

open in the playground
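Putting the three queries together, here is a minimal sketch of the chain, where only the final reply is shown to the student (`chat` is again a hypothetical helper around the chat completions API):

```python
def tutor_reply(problem: str, student_solution: str) -> str:
    # Query 1: solve the problem independently. The student's attempt is
    # withheld so the model's solution is not biased by it. Hidden from user.
    model_solution = chat(user=problem)

    # Query 2: grade the student's solution against the model's. Hidden.
    analysis = chat(
        system="Compare your solution to the student's solution and evaluate "
               "if the student's solution is correct or not.",
        user=f'Problem statement: """{problem}"""\n'
             f'Your solution: """{model_solution}"""\n'
             f'Student\'s solution: """{student_solution}"""',
    )

    # Query 3: the user-facing tutor response. This is the only visible output.
    return chat(
        system="You are a math tutor. If the student made an error, offer a "
               "hint without revealing the answer. If not, offer encouragement.",
        user=f'Problem statement: """{problem}"""\n'
             f'Your solution: """{model_solution}"""\n'
             f'Student\'s solution: """{student_solution}"""\n'
             f'Analysis: """{analysis}"""',
    )
```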

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-ask-the-model-if-it-missed-anything-on-previous-passes

Suppose that we are using a model to list excerpts from a source which are relevant to a particular question. After listing each excerpt, the model needs to determine if it should start writing another or if it should stop. If the source document is large, it is common for a model to stop too early and fail to list all relevant excerpts. In that case, better performance can often be obtained by prompting the model with follow-up queries to find any excerpts it missed on previous passes.

system

You will be provided with a document delimited by triple quotes. Your task is to select excerpts which pertain to the following question: "What significant paradigm shifts have occurred in the history of artificial intelligence?" Ensure that excerpts contain all relevant context needed to interpret them - in other words don't extract small snippets that are missing important context. Provide output in JSON format as follows: [{"excerpt": "..."}, ... {"excerpt": "..."}]

user

"""<insert document here>"""

assistant

[{"excerpt": "The model wrote an excerpt here"}, ... {"extract": "The model wrote another excerpt here"}]

user

Are there more relevant excerpts? Take care not to repeat excerpts. Also ensure that excerpts contain all relevant context needed to interpret them - in other words don't extract small snippets that are missing important context.

open in the playground
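A minimal sketch of the follow-up loop (this assumes the model returns the pure JSON list requested above; `chat_completion` is a hypothetical wrapper around the API call, and the cap on passes is an assumption):

```python
import json

FOLLOW_UP = ("Are there more relevant excerpts? Take care not to repeat "
             "excerpts. Also ensure that excerpts contain all relevant context.")

def collect_excerpts(messages: list, max_passes: int = 5) -> list:
    """Keep asking for more excerpts until the model finds nothing new."""
    excerpts = []
    for _ in range(max_passes):
        reply = chat_completion(messages)  # hypothetical API wrapper
        new = [e["excerpt"] for e in json.loads(reply)
               if e["excerpt"] not in excerpts]
        if not new:
            break  # nothing new found; stop querying
        excerpts.extend(new)
        messages += [{"role": "assistant", "content": reply},
                     {"role": "user", "content": FOLLOW_UP}]
    return excerpts
```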

https://platform.openai.com/docs/guides/gpt-best-practices/strategy-use-external-tools

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-use-embeddings-based-search-to-implement-efficient-knowledge-retrieval

The model can leverage external sources of information if provided as part of the input. This can help the model generate more informed and up-to-date responses. For example, if a user asks a question about a particular movie, it might be useful to add high-quality information about the movie (e.g. actors, director, etc.) to the model's input. Embeddings can be used to enable efficient knowledge retrieval to dynamically add relevant information to model inputs at runtime.

A text embedding is a vector that measures the relatedness between strings of text. Similar or related strings will be closer together than unrelated strings. This fact and the existence of fast vector search algorithms mean that embeddings can be used to achieve efficient knowledge retrieval. In particular, a text corpus can be split into chunks, and each chunk can be embedded and stored. A given query can then be embedded and a vector search performed to find the most relevant (i.e. closest in embedding space) chunk of embedded text from the corpus to the query.
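A minimal sketch of the chunk/embed/search loop using the OpenAI embeddings endpoint with cosine similarity via numpy (fixed-size character chunking and the corpus file name are simplifying assumptions):

```python
import numpy as np
import openai

def embed(texts: list) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

# Index time: split the corpus into chunks and embed each one.
corpus = open("knowledge.txt").read()  # your reference text (assumption)
chunks = [corpus[i:i + 1000] for i in range(0, len(corpus), 1000)]
chunk_vectors = embed(chunks)

# Query time: embed the query and return the closest chunks.
def retrieve(query: str, k: int = 3) -> list:
    q = embed([query])[0]
    scores = (chunk_vectors @ q) / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]
```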

An example implementation can be found in the OpenAI Cookbook. See the tactic "Instruct the model to use retrieved knowledge to answer queries" for an example of how to use knowledge retrieval to minimize the likelihood that a model will fabricate incorrect facts.

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-use-code-execution-to-perform-more-accurate-calculations-or-call-external-apis

GPTs cannot be relied upon to perform arithmetic or long calculations accurately on their own. Where this is needed, a model can be instructed to write and run code instead of making its own calculations. In particular, a model can be instructed to put code that is meant to be run into a designated format such as triple backticks. After an output is produced, the code can be extracted and run. Finally, if necessary, the output from the code execution engine (i.e., a Python interpreter) can be provided as an input to the model for the next query.

system

You can write and execute Python code by enclosing it in triple backticks, e.g. ```code goes here```. Use this to perform calculations.

user

Find all real-valued roots of the following polynomial: 3*x**5 - 5*x**4 - 3*x**3 - 7*x - 10.

open in the playground
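A minimal sketch of the extract-and-run step (a regex over triple backticks; per the warning below, running model-generated code like this is only reasonable inside a sandbox):

```python
import re
import subprocess
import sys

def run_model_code(model_output: str, timeout: int = 10) -> str:
    """Extract the first triple-backtick block and execute it with Python.
    WARNING: only do this inside a sandboxed execution environment."""
    match = re.search(r"```(?:python)?\s*([\s\S]*?)```", model_output)
    if match is None:
        return ""  # the model did not emit any code
    result = subprocess.run([sys.executable, "-c", match.group(1)],
                            capture_output=True, text=True, timeout=timeout)
    return result.stdout  # can be fed back to the model in the next query
```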

Another good use case for code execution is calling external APIs. If a model is instructed in the proper use of an API, it can write code that makes use of it. A model can be instructed in how to use an API by providing it with documentation and/or code samples showing how to use the API.

system

You can write and execute Python code by enclosing it in triple backticks. Also note that you have access to the following module to help users send messages to their friends: ```python import message message.write(to="John", message="Hey, want to meet up after work?")```

open in the playground

WARNING: Executing model-generated code is not inherently safe, and any application attempting to do so should take precautions. In particular, a sandboxed code execution environment is needed to limit the damage that untrusted code can do.

https://platform.openai.com/docs/guides/gpt-best-practices/strategy-test-changes-systematically

Sometimes it can be hard to tell whether a change (e.g., a new instruction or a new design) makes your system better or worse. Looking at a few examples may hint at which is better, but with small sample sizes it can be hard to distinguish between a true improvement and random luck. Maybe the change helps performance on some inputs but hurts performance on others.

Evaluation procedures (or "evals") are useful for optimizing system designs. Good evals are:

  • Representative of real-world usage (or at least diverse)
  • Include many test cases for greater statistical power (see table below for guidelines)
  • Easy to automate or repeat

Difference to detect | Sample size needed for 95% confidence
30%                  | ~10
10%                  | ~100
3%                   | ~1,000
1%                   | ~10,000

Evaluation of outputs can be done by computers, humans, or a mix. Computers can automate evals with objective criteria (e.g., questions with single correct answers) as well as some subjective or fuzzy criteria, in which model outputs are evaluated by other model queries. OpenAI Evals is an open-source software framework that provides tools for creating automated evals.

Model-based evaluation can be useful when there is a range of outputs that might be considered to be of equal quality (for example, for questions with long answers). The line between what can actually be evaluated using model-based evaluation and what needs to be evaluated by a human is blurry and constantly changing as models become more powerful. We encourage experimentation to figure out how well model-based evaluation works for your use case.

https://platform.openai.com/docs/guides/gpt-best-practices/tactic-evaluate-model-outputs-with-reference-to-gold-standard-answers

Suppose it is known that the correct answer to a question should make reference to a specific set of known facts. Then we can use a model query to count how many of the required facts are included in the answer.

For example, use the following system message:

system

You will be provided with text delimited by triple quotes that is supposed to be the answer to a question. Check if the following pieces of information are directly contained in the answer:

- Neil Armstrong was the first person to walk on the moon.
- The date Neil Armstrong first walked on the moon was July 21, 1969.

For each of these points perform the following steps:

1 - Restate the point.
2 - Provide a citation from the answer which is closest to this point.
3 - Consider if someone reading the citation who doesn't know the topic could directly infer the point. Explain why or why not before making up your mind.
4 - Write "yes" if the answer to 3 was yes, otherwise write "no".

Finally, provide a count of how many "yes" answers there are. Provide this count as {"count": <insert count here>}.

Here's an example input where both points are satisfied:

system

<insert system message above>

user

"""Neil Armstrong is famous for being the first human to step on the Moon. This historic event took place on July 21, 1969, during the Apollo 11 mission."""

open in the playground

Here's an example input where only one point is satisfied:

system

<insert system message above>

user

"""Neil Armstrong made history by stepping off the lunar module, becoming the first man to step on the Moon."""

open in the playground

Here's an example input where none of the points are satisfied:

system

<insert system message above>

user

"""In the summer of '69, the grand voyage of Apollo 11, as bold as the legendary hand. Armstrong took one step, and history unfolded, "one small step," he said, for a new world."""

open in the playground
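A minimal sketch of automating this check and parsing out the final count (pre-1.0 OpenAI library; the regex assumes the exact {"count": N} format requested in the system message):

```python
import re
import openai

GRADER_SYSTEM = "<insert system message above>"

def count_facts(candidate_answer: str) -> int:
    resp = openai.ChatCompletion.create(
        model="gpt-4",  # assumed model
        messages=[{"role": "system", "content": GRADER_SYSTEM},
                  {"role": "user", "content": f'"""{candidate_answer}"""'}],
        temperature=0,  # make the grading as deterministic as possible
    )
    text = resp["choices"][0]["message"]["content"]
    match = re.search(r'\{"count":\s*(\d+)\}', text)
    return int(match.group(1)) if match else 0
```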

There are many possible variants of this model-based eval. Consider the following variation, which tracks the kind of overlap between the candidate answer and the gold-standard answer, and also tracks whether the candidate answer contradicts any part of the gold-standard answer.

system

Use the following steps to respond to user inputs. Fully restate each step before proceeding, i.e. "Step 1: Reason...".

Step 1: Reason step-by-step about whether the information in the submitted answer compared to the expert answer is: disjoint, equal, a subset, a superset, or overlapping (i.e. some intersection but not subset/superset).

Step 2: Reason step-by-step about whether the submitted answer contradicts any aspect of the expert answer.

Step 3: Output a JSON object structured like: {"type_of_overlap": "disjoint" or "equal" or "subset" or "superset" or "overlapping", "contradiction": true or false}

Here's an example input with a subpar answer which nonetheless does not contradict the expert answer:

system

<insert system message above>

user

Question: """What event is Neil Armstrong most famous for, and on what date did it occur? Assume UTC time.""" Submitted Answer: """Didn't he walk on the moon?""" Expert Answer: """Neil Armstrong is most famous for being the first person to walk on the moon. This historic event occurred on July 21, 1969."""

open in the playground

Here's an example input whose answer directly contradicts the expert answer:

system

<insert system message above>

user

Question: """What event is Neil Armstrong most famous for, and on what date did it occur? Assume UTC time.""" Submitted Answer: """On July 21, 1969, Neil Armstrong became the second person to walk on the moon, following Buzz Aldrin.""" Expert Answer: """Neil Armstrong is most famous for being the first person to walk on the moon. This historic event occurred on July 21, 1969."""

open in the playground

Here's an example input with the correct answer, which also provides more detail than necessary:

system

<insert system message above>

user

Question: """What event is Neil Armstrong most famous for, and on what date did it occur? Assume UTC time.""" Submitted Answer: """At approximately 02:56 UTC on July 21, 1969, Neil Armstrong became the first human to set foot on the lunar surface, marking a monumental achievement in human history.""" Expert Answer: """Neil Armstrong is most famous for being the first person to walk on the moon. This historic event occurred on July 21, 1969."""

open in the playground

https://platform.openai.com/docs/guides/gpt-best-practices/other-resources

For more inspiration, visit the OpenAI Cookbook, which contains example code and links to third-party resources such as prompting libraries and tools, prompting guides, video courses, and papers on advanced prompting to improve reasoning.


Source: blog.csdn.net/dalinsi/article/details/131246332