Learn to Prompt AI Correctly for Efficient Interaction

✍For readers: software engineers, architects, IT professionals, designers, etc.

✍Objective of the article: to help you learn to write prompts correctly and get the most accurate and comprehensive replies from AI

✍Column: Artificial Intelligence Tool Practice

Overview

Prompt Principles

Basic Prompt Engineering

Prompt Wording

Conciseness

Roles and Goals

Positive and Negative Prompting

Advanced Prompt Engineering Strategies

Input/Output Prompting

Zero-Shot Prompting

One-Shot Prompting

Few-Shot Prompting

Chain-of-Thought Prompting

Self-Criticism

Iteration

Tips for Collaborating with AI

Prompting for Prompts

Model-Guided Prompting

Summary


        With the explosive popularity of generative artificial intelligence (ChatGPT in particular), prompting has become an increasingly important skill for those in the field of artificial intelligence. Crafting prompts, the mechanism for interacting with large language models (LLMs) such as ChatGPT, is not the simple syntactic task it might appear to be at first glance. After the novelty of conversing with ChatGPT for the first time wears off, it becomes obvious that mastering it takes practice and thought. Consequently, developing processes for creating the most useful prompts possible, known as prompt engineering, has become coveted expertise in the LLM field and beyond.

In this article, you'll learn about prompt engineering. In particular, you'll learn:

  • How to provide the information in a prompt that most affects the response
  • What roles, positive and negative prompts, zero-shot prompts, and more are
  • How to use prompts iteratively to exploit the conversational nature of ChatGPT

Overview

This article is divided into four parts; they are

  • Prompt Principles
  • Basic Prompt Engineering
  • Advanced Prompt Engineering Strategies
  • Tips for Collaborating with AI

Prompt Principles

Prompt engineering is the most important aspect of using LLMs effectively and a powerful tool for customizing interactions with ChatGPT. It involves formulating clear and specific instructions or queries to elicit the desired response from a language model. By carefully constructing prompts, users can guide ChatGPT's output toward their intended goals and ensure more accurate and useful responses.

There are some basic techniques to keep in mind when crafting prompts for ChatGPT.

First, providing explicit instructions at the beginning of the prompt helps to set the context and define the task of the model. It is also beneficial to specify the format or type of the expected answer. Additionally, you can enhance interactions by including system messages or role-playing techniques in your prompts.

Here's an example prompt using the techniques above:

I want you to generate 10 quick dinner prep ideas for my recipe blog, each with a title and a one-sentence description of the meal. These blogs are written for an audience of parents looking for easy-to-prepare family meals. Output the result as a bulleted list.

Compare that prompt with the following:

Write 10 recipe blogs.

Intuitively, the former leads to more useful results.

Remember that you can create more productive conversations by iteratively refining and experimenting with prompts to improve the quality and relevance of your model's responses. Don't be afraid to test potential prompts directly on ChatGPT.
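The components discussed above (explicit instructions, an audience, an expected format) can be assembled programmatically when you generate many similar prompts. Here is a minimal sketch; `build_prompt` and its parameters are illustrative names invented for this example, not part of any library:

```python
# Hypothetical helper (illustrative only): assemble the prompt components
# discussed above -- task instructions, audience, and output format --
# into a single prompt string.

def build_prompt(task: str, audience: str, output_format: str) -> str:
    """Combine explicit instructions, audience, and format into one prompt."""
    return (
        f"{task} "
        f"These are written for an audience of {audience}. "
        f"Output the result as {output_format}."
    )

prompt = build_prompt(
    task="Generate 10 quick dinner prep ideas for my recipe blog, "
         "each with a title and a one-sentence description of the meal.",
    audience="parents looking for easy-to-prepare family meals",
    output_format="a bulleted list",
)
print(prompt)
```

Templating like this keeps the instruction, audience, and format decisions explicit and easy to vary independently.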

Basic Prompt Engineering

Now that you know what a basic prompt should look like, let's explore some basic prompt engineering considerations in more detail.

Prompt Wording

The wording of a prompt is critical, as it guides the LLM in generating the desired output. It is important to formulate a question or statement in a way that ChatGPT can understand and respond to accurately.

For example, if a user is not an expert in a certain domain and does not know the correct terminology to phrase a question, ChatGPT may be limited in the answers it can provide. This is similar to searching the web without knowing the right keywords.

While it's clear that additional information can be used to create better prompts, it may not be obvious that being too verbose is not necessarily the best strategy either. It's best to think of prompt wording not as a separate technique, but as the thread that connects all the other techniques.

Conciseness

The conciseness of a prompt is important for clarity and precision. A well-designed prompt should be succinct yet provide enough information for ChatGPT to understand the user's intent without being too verbose. However, it is crucial to ensure that the prompt is not so brief that it becomes ambiguous or open to misunderstanding. The balance between too little and too much can be difficult to strike; practice is probably the best way to master this skill.

Wording and conciseness matter because a prompt is meant to be specific.

Roles and Goals

In prompt engineering, roles are personas assigned to the LLM and to the target audience. For example, if someone wants ChatGPT to write an outline for a blog post on machine learning classification metrics, explicitly stating that the LLM will act as an expert machine learning practitioner, and that its target audience is data science newcomers, will certainly help produce a productive response. Whether to phrase this conversationally ("You will act as a Realtor with 10 years of experience in the Phoenix area") or more formally ("Author: expert Phoenix Realtor; Audience: inexperienced homebuyers") is something you can experiment with in a given scenario.

Goals are closely related to roles. Explicitly stating the goal of a prompt-guided interaction is not only a good idea, but necessary. How does ChatGPT know what output to generate without it?

Here is an effective prompt that takes roles and goals into account:

You will act as a Realtor with 10 years of experience in the Phoenix area. Your goal is to summarize the top 5 family neighborhoods in the Phoenix metro area in one paragraph. The target audience is inexperienced homebuyers.

In addition to the clearly stated role and goal, note the relative specificity of the example above.

Positive and Negative Prompting

Positive and negative prompting is another set of framing methods for guiding the model's output. Positive prompts ("do this") encourage the model to include specific types of output and generate specific types of responses. Negative prompts ("don't do this"), on the other hand, discourage the model from including certain types of output and generating certain types of responses. Using positive and negative prompts can greatly affect the direction and quality of the model's output.

Consider the following example prompt:

You will act as a Realtor with 10 years of experience in the Phoenix area. Your goal is to summarize the top 5 family neighborhoods in the Phoenix metro area in one paragraph. The target audience is inexperienced homebuyers.

The above prompt is framed positively, providing guidance for what ChatGPT should generate. Let's add some wording to block certain output, in both content and format. An example of a negative prompt for content guidance could be to add the following to the example above:

Do not include any neighborhoods within 5 miles of the city center or near the airport.

This additional constraint should help ChatGPT understand what output it should, and should not, generate.
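If you build prompts in code, negative constraints can be appended mechanically to a positively framed base prompt. The helper below is a sketch with invented names, not any established API:

```python
# Hypothetical helper (illustrative only): append negative prompts
# ("Do not ...") to a positively framed base prompt.

def add_constraints(prompt: str, do_not: list[str]) -> str:
    """Append one 'Do not ...' sentence per constraint to the prompt."""
    negatives = " ".join(f"Do not {c}." for c in do_not)
    return f"{prompt} {negatives}"

base = ("You will act as a Realtor with 10 years of experience in the "
        "Phoenix area. Your goal is to summarize the top 5 family "
        "neighborhoods in the Phoenix metro area in one paragraph. "
        "The target audience is inexperienced homebuyers.")
full = add_constraints(base, [
    "include any neighborhoods within 5 miles of the city center",
    "include any neighborhoods near the airport",
])
print(full)
```

Keeping the positive framing and the negative constraints separate makes it easy to tighten or relax restrictions between iterations.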

Advanced Prompt Engineering Strategies

Let's look at some more advanced prompt engineering strategies. While the previous section provided general guidelines for interacting with LLMs, the strategies commonly found in the prompt engineer's toolkit let you interact with ChatGPT in more complex ways.

Input/Output Prompting

The input/output prompting strategy involves defining the input the user provides to the LLM and the output the LLM generates in response. This strategy is critical to prompt engineering, as it directly affects the quality and relevance of ChatGPT's responses.

For example, a user might provide an input prompt asking ChatGPT to generate a Python script for a specific task, and the desired output would be the generated script.

Here's an example of the most basic strategy: provide a single input and expect a single output.

Generate a Python script that takes a single mandatory command line argument ([project]) and performs the following tasks:
– Creates a new folder named [project]
– Creates a new file named [project].py inside that folder
– Writes a simple Python script header to the [project].py file
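For reference, a script satisfying that prompt might look like the sketch below. This is one plausible response written for illustration, not ChatGPT's actual output:

```python
import sys
from pathlib import Path

def scaffold(project: str) -> Path:
    """Create a [project] folder containing a [project].py file with a header."""
    folder = Path(project)
    folder.mkdir(exist_ok=True)          # create the new [project] folder
    script = folder / f"{project}.py"    # the [project].py file inside it
    header = (
        f'"""{project}.py\n\n'
        f"A simple script for the {project} project.\n"
        '"""\n'
    )
    script.write_text(header)            # write the simple script header
    return script

# Handle the single mandatory command line argument when run as a script.
if __name__ == "__main__" and len(sys.argv) == 2:
    print(f"Created {scaffold(sys.argv[1])}")
```

Having a concrete picture of the desired output makes it easier to judge whether the prompt's task list was specific enough.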

Zero-Shot Prompting

The zero-shot strategy involves the LLM generating an answer without any examples or context. It can be useful when the user wants a quick answer without providing additional detail, or when the topic is so general that examples would artificially limit the response. For example:

Generate 10 possible names for my new dog.

One-Shot Prompting

The one-shot strategy involves the LLM generating an answer based on a single example or piece of context provided by the user. This can guide ChatGPT's response and help ensure that it matches the user's intent. The idea here is that one example provides more guidance to the model than none. For example:

Generate 10 possible names for my new dog.
My favorite dog name is Banana.

Few-Shot Prompting

The few-shot strategy involves the LLM generating an answer based on a few examples or snippets of context provided by the user. This can guide ChatGPT's response and help ensure that it matches the user's intent. The idea here is that multiple examples provide more guidance to the model than a single one. For example:

Generate 10 possible names for my new dog.
My favorite dog names include:
- Banana
- Kiwi
- Pineapple
- Coconut

As you can guess, the more examples included in the prompt, the closer the generated output is likely to be to the desired result. With zero-shot, no fruit-themed names might be suggested; with one-shot, there might be several; and with few-shot, the suggestions might consist entirely of fruit-themed names.
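The three dog-name prompts above differ only in how many examples they attach to the same task, which lends itself to a small template. The following sketch (the `x_shot_prompt` helper is an invented name for illustration) builds the zero-, one-, and few-shot variants from one examples list:

```python
# Illustrative helper: build the zero-, one-, and few-shot prompts shown
# above from the same task and a list of example names.

def x_shot_prompt(task: str, examples: list[str]) -> str:
    if not examples:           # zero-shot: no examples at all
        return task
    if len(examples) == 1:     # one-shot: a single example
        return f"{task}\nMy favorite dog name is {examples[0]}."
    lines = "\n".join(f"- {e}" for e in examples)
    return f"{task}\nMy favorite dog names include:\n{lines}"  # few-shot

task = "Generate 10 possible names for my new dog."
print(x_shot_prompt(task, []))
print(x_shot_prompt(task, ["Banana"]))
print(x_shot_prompt(task, ["Banana", "Kiwi", "Pineapple", "Coconut"]))
```

Parameterizing the example count this way makes it easy to test empirically how many examples a given task actually needs.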

Chain-of-Thought Prompting

The chain-of-thought strategy involves providing the LLM with examples that demonstrate reasoning, helping to ensure a more accurate and comprehensive answer. Chain-of-thought prompts are so called because they include example chains of reasoning. The technique differs from X-shot prompting in that chain-of-thought prompts are structured to encourage critical thinking, and are designed to help uncover insights or approaches that ChatGPT might not otherwise have considered.

The technique also encourages the LLM to output its reasoning. The calling card of chain-of-thought prompting is the phrase "Let's think step by step," which is often appended to the end of a prompt; studies have shown that this alone is enough to improve generated results.

Consider the following one-shot prompt:

Q: Joe has 20 eggs. He bought two more boxes of eggs. Each carton contains 12 eggs. How many eggs does Joe have now?
Answer: The answer is 44.

Q: Farmer John has 93 chickens. If he sells 20 to Farmer Bill and then buys twice as many chickens, how many chickens does Farmer John have now?

And consider ChatGPT's erroneous response:

The answer is 73.

Now let's look at the following chain-of-thought prompt, written to encourage critical thinking in the LLM:

Q: Joe has 20 eggs. He bought two more boxes of eggs. Each carton contains 12 eggs. How many eggs does Joe have now? Let's think step by step.
A: Joe starts with 20 eggs. 2 boxes of 12 eggs is 24 eggs. 20 + 24 = 44. Therefore, Joe has 44 eggs and the answer is 44.

Q: Farmer John has 93 chickens. If he sells 20 to Farmer Bill and then buys twice as many chickens, how many chickens does Farmer John have now? Let's think step by step.

This time ChatGPT responded differently:

Answer: Farmer John sold 20 chickens to Farmer Bill, so he is left with 93 – 20 = 73 chickens. Then he buys twice as many chickens as he sold, so he buys 2 * 20 = 40 chickens. Adding these newly purchased chickens to his remaining stock, Farmer John now has 73 + 40 = 113 chickens.

By providing an example of the critical thinking process required to solve the problem, and guiding it step by step, we enable ChatGPT to apply the same reasoning to our problem and arrive at the correct answer this time.
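The reasoning steps ChatGPT produced can be checked directly in code, which is a useful habit whenever a model outputs arithmetic:

```python
# Verify the chain-of-thought arithmetic from the chickens example.
start = 93                 # Farmer John's starting flock
sold = 20                  # chickens sold to Farmer Bill
remaining = start - sold   # 93 - 20 = 73
bought = 2 * sold          # twice as many as he sold: 2 * 20 = 40
total = remaining + bought # 73 + 40 = 113
print(remaining, bought, total)  # → 73 40 113
```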

Self-Criticism

The self-criticism strategy involves prompting the LLM to assess its own output for potential inaccuracies or areas for improvement. This strategy helps ensure that the information provided by ChatGPT is as accurate as possible, and it helps users debug their prompts and determine why they are not getting the expected results.

Examples of such prompts include:

Please re-read your reply above. Did you find any problems or errors with your reply? If so, please identify these issues or errors and make any necessary edits.

This example involves instructing ChatGPT to use self-criticism to debug its error code:

Review the code you just generated. Currently it is not running. Are you able to see any syntax errors or flow control errors that you can correct? If so, find the offending section of code and regenerate it.

Exploiting ChatGPT's capacity for self-correction is a prompt engineering technique that should not be ignored.

Iteration

The iterative, or expansive, strategy involves feeding the LLM follow-up prompts based on the output of an initial prompt. Results are iterated on by asking further questions or making additional requests based on each successive response.

For example, consider having ChatGPT assist you in creating an outline for a book you're writing. A first prompt might look like this:

I'm writing a book on the theory of time travel. I haven't settled on a specific theme yet. Generate 5 specific topic suggestions for such a book. For each proposal, please provide a title and a description of what the book will cover. This book will be aimed at casual readers.

Now, suppose one of the suggested topics is as follows:

Title: "The Time Paradox: Exploring the Complexity of Time Travel"

Summary: The Time Paradox delves into the mind-bending intricacies of time travel paradoxes, exploring the fascinating conundrums they present. This book explores various paradoxes, such as the bootstrapping paradox, the predetermined paradox, and the information paradox. ...

You can then iterate over this with subsequent hints:

I will write the book you suggested, "The Time Paradox: Exploring the Complexity of Time Travel." Generate a chapter outline for the book, including chapters and subsections.

An excerpt of the possible output follows:

Introduction
– Defining Time Paradoxes
– Significance of Time Paradoxes in Time Travel Narratives
– Overview of Chapter Structure

Chapter 1: The Bootstrap Paradox
1.1 The Nature of the Bootstrap Paradox
- Explaining the Concept and Its Origin
- Key Examples from Literature, Movies and Real Life Anecdotes

The iteration can then continue. A further prompt might be:

Let's flesh out the introduction. Generate a two-paragraph summary of what each section of the chapter will cover. The chapter outline of the introduction is as follows.
– – –
Introduction
– Defining Time Paradoxes
– Significance of Time Paradoxes in Time Travel Narratives
– Overview of Chapter Structure

You can see how the iteration could continue from this example. Similar iteration can be used to have ChatGPT generate code: start with an overview, iterate over the overview to generate modules, iterate over the modules to generate functions, and finally generate the code within the functions. Just as breaking large problems down into smaller, more manageable ones is often a recipe for human success, ChatGPT excels at larger tasks when they are tackled in this more tractable way.
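The iteration strategy above can be sketched as code: each follow-up prompt is appended to a running message history so the model always sees the prior context. In the sketch below, `ask_model` is a placeholder standing in for a real chat-API call; the function and variable names are illustrative:

```python
# Sketch of the iteration strategy: maintain a conversation history so
# each follow-up prompt builds on every previous exchange.

def ask_model(history: list[dict]) -> str:
    """Placeholder for a real chat-API call; returns a canned reply."""
    return f"(model reply to: {history[-1]['content'][:40]}...)"

history: list[dict] = []

def iterate(prompt: str) -> str:
    """Send a follow-up prompt with the full prior conversation attached."""
    history.append({"role": "user", "content": prompt})
    reply = ask_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

iterate("Generate 5 specific topic suggestions for a book on time travel.")
iterate("Generate a chapter outline for the first suggested topic.")
print(len(history))  # → 4 (two prompts, two replies)
```

The key design point is that the history grows with every turn, which is exactly what makes "flesh out the introduction"-style follow-ups possible.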

Tips for Collaborating with AI

The best way to look at ChatGPT is as a junior assistant, whether that's a research assistant, a coding assistant, a problem-solving assistant, or whatever else you need. Recognizing and cultivating this collaborative atmosphere can lead to further success. Here are some quick tips for facilitating that collaboration.

Prompting for Prompts

One way to improve your prompting is to involve ChatGPT itself. A prompt like the following can have beneficial results:

What prompts can I use now to further help you with this task?

ChatGPT should then generate suggestions for helpful prompts that you can use to enhance its further responses.

Model-Guided Prompting

Model-guided prompting involves instructing the LLM to prompt you for the information needed to complete the requested task. This is similar to telling someone, "Ask me what you need to know."

I want you to write a Python program to manage my customer information, which is stored in a Google Sheet. In order to complete this task, please ask me any questions you need answered.

Letting ChatGPT decide what information it needs to perform a task is beneficial because it removes some of the guesswork and helps curb hallucination. Of course, even a well-crafted model-guided prompt may lead ChatGPT to ask many irrelevant questions, so the initial prompt still needs to be thoughtfully written.

Summary

Once you're familiar with the prompt engineering strategies outlined here, you can explore other, more complex, high-performing approaches, including tree of thoughts, reflection, and self-consistency, among others. New strategies are developed regularly; no doubt some interesting developments have taken place between the time this article was written and the time you read it.

Remember, the whole point of prompt engineering is to communicate your intentions and desires to ChatGPT in a way that the LLM can clearly and unambiguously understand, so that it can act on the request and produce results as close as possible to the desired output. If you keep this in mind, continue to apply the strategies presented here, and hone your prompt engineering skills with regular practice, you will find ChatGPT to be a genuinely useful junior assistant, willing and able to help when you need it.

As long as you ask correctly.


Origin blog.csdn.net/arthas777/article/details/132657432