Prompt Engineering Guide [Applications]: Prompt Applications, ChatGPT & Midjourney Prompt Engineering


1.ChatGPT Prompt Engineering

Topics:

Conversation
Python Notebook with ChatGPT


Introduction to ChatGPT

ChatGPT is a new type of model trained by OpenAI for conversational interaction. The model is trained to provide appropriate responses in the context of a conversation, following the instructions in the prompt. ChatGPT can help answer questions, suggest recipes, write lyrics in a certain style, generate code, and more.

ChatGPT is trained using reinforcement learning with human feedback (RLHF). While this model is more capable than previous GPT versions (and also trained to reduce harmful and inauthentic output), it still has limitations. Let's understand some of the capabilities and limitations with a concrete example.

You can use ChatGPT's research preview, but in the examples below we will use the "Chat" mode on the OpenAI Playground.


1.1 Reviewing Conversation Tasks

In one of the previous guides, we covered a bit about conversation capabilities and role prompting. We covered how to instruct the LLM to have a conversation in a specific style, with a specific intent, behavior, and identity.

Let's go back to our basic example from before, where we created a dialogue system capable of generating more technical and scientific responses.

Prompt:

The following is a conversation with an AI research assistant. The assistant's tone is technical and scientific.

Human: Hello, who are you?
AI: Greetings! I am an AI research assistant. How can I help you today?
Human: Can you tell me about the creation of black holes?
AI:

From the example above, you can see two important components:

the intent, or an explanation of what the chatbot is
the identity, which instructs the style or tone the chatbot uses to respond

The simple example above is adapted for use with text-davinci-003's text completion API. Recently, OpenAI announced the ChatGPT API, which exposes a more powerful and cheaper model called gpt-3.5-turbo, dedicated to this type of functionality (chat completions). OpenAI recommends it as their best model, even for non-chat use cases. Other benefits of using the ChatGPT API are a significant cost reduction (90%) and higher efficiency.
Big players like Snap Inc. and Instacart are already integrating conversational features powered by ChatGPT into their products, ranging from personalized recommendations to open-ended shopping experiences.


1.2 Conversation using ChatGPT

1.2.1 Multi-Turn Conversations

To demonstrate the capabilities of ChatGPT, we will use the chatbot assistant example above and discuss the results. Compared to text-davinci-003, the gpt-3.5-turbo model that powers ChatGPT uses a chat format as input. The model expects a sequence of messages as input and uses them to generate a response.

Input:

SYSTEM: You are an AI research assistant that uses a technical and scientific tone.
USER: Hello, who are you?
ASSISTANT: Greetings! I am an AI research assistant. How can I help you today?
USER: Can you tell me about the creation of black holes?

Output:

ASSISTANT: Of course! Black holes form when a very massive star runs out of fuel and collapses under its own gravity. The collapse makes the star extremely dense, with a gravitational field so strong that not even light can escape it. This point of no return is called the event horizon. Once a star crosses the event horizon, it is considered a black hole...

Note that in the above example, I have simplified the input and output, but the ChatGPT chat completion API requires messages to be provided in a specific format. I've added a snapshot below of what this example looks like when using Chat Mode in the OpenAI Playground:

For our example, a more formal API call might look like the following example:

import openai

# Note: the role names must be the English strings "system", "user", and "assistant".
openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are an AI research assistant that uses a technical and scientific tone."},
        {"role": "user", "content": "Hello, who are you?"},
        {"role": "assistant", "content": "Greetings! I am an AI research assistant. How can I help you today?"},
        {"role": "user", "content": "Can you tell me about the creation of black holes?"}
    ]
)

In the future, developers are expected to interact with ChatGPT through the Chat Markup Language (ChatML for short).
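Note also that the chat completions API is stateless: each request must resend the entire conversation so far. A minimal sketch of carrying the history between turns (the `append_turn` helper is our own illustration, not part of the openai library):

```python
def append_turn(messages, role, content):
    """Return a new message list with one more turn appended."""
    assert role in ("system", "user", "assistant")
    return messages + [{"role": role, "content": content}]

# Build up the conversation from our running example, turn by turn.
history = [{"role": "system", "content": "You are an AI research assistant that uses a technical and scientific tone."}]
history = append_turn(history, "user", "Hello, who are you?")
history = append_turn(history, "assistant", "Greetings! I am an AI research assistant. How can I help you today?")
history = append_turn(history, "user", "Can you tell me about the creation of black holes?")

# `history` is now exactly the `messages` payload for the next
# openai.ChatCompletion.create(...) call.
```

After each API response, appending the assistant message back onto `history` keeps the model aware of everything said so far.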

1.2.2 Single-Turn Tasks

The chat format enables multi-turn conversations, but it also supports single-turn tasks like the ones we performed with text-davinci-003. This means we can use ChatGPT to perform tasks similar to those we demonstrated with the original GPT models. For example, let us try to perform the following question-answering task with ChatGPT:

Input:

USER: Answer the question based on the context below. Keep the answer short and concise. Respond "Unsure about answer" if not sure about the answer.

Context: Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. There, scientists generated an early version of the antibody, dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the surface of T cells and limit their cell-killing potential. In 1986, it was approved to help prevent organ rejection after kidney transplants, making it the first therapeutic antibody allowed for human use.

Question: What was OKT3 originally sourced from?

Answer:

Output:

ASSISTANT: Mice.

Note that I added the "USER" and "ASSISTANT" tags to better demonstrate how ChatGPT can be used to perform tasks. Here is an example using the Playground:

More formally, here's the API call (I'm only including the request's message component):

CONTENT = """Answer the question based on the context below. Keep the answer short and concise. Respond \"Unsure about answer\" if not sure about the answer.

Context: Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. There, scientists generated an early version of the antibody, dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the surface of T cells and limit their cell-killing potential. In 1986, it was approved to help prevent organ rejection after kidney transplants, making it the first therapeutic antibody allowed for human use.

Question: What was OKT3 originally sourced from?

Answer:
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": CONTENT},
    ],
    temperature=0,
)

1.2.3 Instructing Chat Models

According to the official OpenAI documentation, snapshots of the gpt-3.5-turbo model will also be made available. For example, we can access the March 1st snapshot, gpt-3.5-turbo-0301. This allows developers to pin a specific model version. It also means that the best practices for instructing the model may change from version to version.

The current recommended practice for gpt-3.5-turbo-0301 is to add instructions in the user message rather than in the available system message.
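One way to follow that recommendation is to fold the instruction into the user message itself before sending the request; the `with_instruction` helper below is our own sketch, not an official API:

```python
def with_instruction(instruction, user_text):
    """Build a single-user-message payload with the instruction inlined,
    as recommended for the gpt-3.5-turbo-0301 snapshot."""
    return [{"role": "user", "content": f"{instruction}\n\n{user_text}"}]

messages = with_instruction(
    "You are an AI research assistant that uses a technical and scientific tone.",
    "Can you tell me about the creation of black holes?",
)
# `messages` can then be passed to openai.ChatCompletion.create(
#     model="gpt-3.5-turbo-0301", messages=messages)
```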


1.3 Python Notebooks

Introduction to the ChatGPT API: learn how to call the ChatGPT API using the openai library.
ChatGPT API with LangChain: learn how to use ChatGPT features through the LangChain library.


2. Midjourney Prompt Engineering

2.1 Technology brand illustration

**Prompt focus**: _generic productivity illustration for + [what you want to draw] + by slack and dropbox, style of behance_

generic productivity illustration for a tech company, by slack and dropbox, style of behance

generic productivity illustration for a tech company, by slack and behance

Falling in love with your company culture, illustration for a tech company, by slack and dropbox, style of behance

users at a coffeeshop, illustration for a tech company, by slack and dropbox, style of behance

2.2 Design of iOS App Icons

**Prompt key points**: _squared with round edges mobile app logo design, flat vector app icon of a + [the icon you want]_

squared with round edges mobile app logo design, flat vector app icon of a cute shiba inu face, minimalistic, white background

squared with round edges mobile app logo design, flat vector app icon of a rocket, minimalistic, white background

squared with round edges mobile app logo design, flat vector app icon of a skull, minimalistic, white background

squared with round edges mobile app logo design, flat vector app icon of a cute onigiri, minimalistic, white background

2.3 Mobile application design draft

**Prompt logic**: _photography of an iphone + [the content you want to display on the iphone] + inspired by Behance and Figma and dribbble_

photography of an iphone Narrow Weak Beach inspired by Behance and Figma and dribbble

photography of an iphone [with a modern user interface food ordering app on the screen] inspired by Behance and Figma and dribbble

photography of an iphone [with a modern user interface of [vector illustration of plants] plant identification app on the screen] inspired by Behance and Figma and dribbble

2.4 Web design UI design draft

**Prompt logic**: _photo of macbook M1 with [ + the content you want to display on the Mac + , style of dribbble and Behance and Figma ] on the screen_

photo of macbook M1 with [with a modern user interface of plant identification app on the screen] inspired by Behance and Figma and dribbble

photo of macbook m1 with [modern web user interface of nutrition website, style of dribbble and Behance and Figma ] on the screen

2.5 Basic parameters

  • Aspect ratio
    --aspect or --ar changes the aspect ratio of the generated image.

  • Chaos
    --chaos <number 0-100> changes how varied the results are. Higher values produce more unusual and unexpected generations.

  • Negative prompting
    --no, e.g. --no plants, attempts to remove plants from the image.

  • Quality
    --quality <.25, .5, 1, or 2>, or --q <.25, .5, 1, or 2>, sets how much rendering-quality time you want to spend. The default value is 1. Higher values cost more, lower values cost less.

  • Seed
    --seed <integer between 0-4294967295> The Midjourney bot uses a seed number to create a field of visual noise (like television static) as a starting point for generating the initial image grid. Seed numbers are generated randomly for each image, but can be specified with the --seed or --sameseed parameter. Using the same seed number and prompt will produce similar final images.

  • Stop
    --stop <integer between 10-100> Use the --stop parameter to finish a job partway through the process. Stopping a job at an earlier percentage can produce blurrier, less detailed results.

  • Style
    --style <4a, 4b, or 4c> switches between versions of the Midjourney model version 4.

  • Stylize
    --stylize <number>, or --s <number>, controls how strongly Midjourney's default aesthetic is applied to the job.

  • Uplight
    --uplight uses an alternative "light" upscaler when the U button is selected. The results are closer to the original grid image. The upscaled image is less detailed and smoother.

  • Upbeta
    --upbeta uses an alternative beta upscaler when the U button is selected. The results are closer to the original grid image. The upscaled image adds much less detail.

3. Prompt application

In this guide, I'll cover some advanced and fun ways to perform useful higher-level tasks through prompt engineering.

Note that this section is under active development.


3.1 Generate data

LLMs have a powerful ability to generate coherent text. Using effective prompting strategies can guide the model to produce better, more consistent, and more factual responses. LLMs can also be especially useful for generating data, which is handy for running all sorts of experiments. For example, we can use an LLM to generate quick samples for a sentiment classifier as follows:

Prompt:

Generate 10 exemplars for sentiment analysis. The examples are categorized as either positive or negative. Generate 2 negative examples and 8 positive examples. Use this format for the examples:
Q: <sentence>
A: <sentiment>

Output:

Q: I just got the best news ever!
A: Positive

Q: We just got a raise at work!
A: Positive

Q: I'm so proud of what I accomplished today.
A: Positive

Q: I'm having the best day ever!
A: Positive

Q: I'm really looking forward to the weekend.
A: Positive

Q: I just got the best present ever!
A: Positive

Q: I'm so happy right now.
A: Positive

Q: I'm so blessed to have such an amazing family.
A: Positive

Q: The weather outside is so gloomy.
A: Negative

Q: I just got some terrible news.
A: Negative

This is very useful. We will use these examples for different tests in another part of the guide.
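Because the generated examples follow a fixed Q/A format, they are easy to parse into labeled pairs for those later experiments. A minimal sketch, assuming the exact format shown above:

```python
import re

def parse_examples(text):
    """Parse 'Q: <sentence>' / 'A: <sentiment>' pairs from raw model output."""
    pairs = re.findall(r"Q:\s*(.+)\s*\nA:\s*(.+)", text)
    return [(q.strip(), a.strip()) for q, a in pairs]

# Two of the generated examples, verbatim.
raw = """Q: I just got the best news ever!
A: Positive

Q: The weather outside is so gloomy.
A: Negative"""

print(parse_examples(raw))
```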


3.2 Program-assisted language model

Gao et al. (2022) proposed a method that uses LLMs to read natural language problems and generate programs as intermediate reasoning steps. Called program-aided language models (PAL), it differs from chain-of-thought prompting in that, instead of using free-form text to obtain a solution, it offloads the solution step to a programmatic runtime such as a Python interpreter.

Let's look at an example with LangChain and OpenAI GPT-3. We are interested in developing a simple application that is able to interpret questions posed and provide answers by utilizing the Python interpreter.

Specifically, we are interested in creating a function that makes it possible to use the LLM to answer questions requiring date understanding. We will give the LLM a prompt that includes a few exemplars.

These are the imports we need:

import openai
from datetime import datetime
from dateutil.relativedelta import relativedelta
import os
from langchain.llms import OpenAI
from dotenv import load_dotenv

Let's do some minor configuration:

load_dotenv()

# API configuration
openai.api_key = os.getenv("OPENAI_API_KEY")

# for LangChain
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")

Set up the model:

llm = OpenAI(model_name='text-davinci-003', temperature=0)

Set prompt + question:

question = "Today is 27 February 2023. I was born exactly 25 years ago. What is the date I was born in MM/DD/YYYY?"

DATE_UNDERSTANDING_PROMPT = """
# Q: 2015 is coming in 36 hours. What is the date one week from today in MM/DD/YYYY?
# If 2015 is coming in 36 hours, then today is 36 hours before.
today = datetime(2015, 1, 1) - relativedelta(hours=36)
# One week from today,
one_week_from_today = today + relativedelta(weeks=1)
# The answer formatted with %m/%d/%Y is
one_week_from_today.strftime('%m/%d/%Y')
# Q: The first day of 2019 is a Tuesday, and today is the first Monday of 2019. What is the date today in MM/DD/YYYY?
# If the first day of 2019 is a Tuesday, and today is the first Monday of 2019, then today is 6 days later.
today = datetime(2019, 1, 1) + relativedelta(days=6)
# The answer formatted with %m/%d/%Y is
today.strftime('%m/%d/%Y')
# Q: The concert was scheduled to be on 06/01/1943, but was delayed by one day to today. What is the date 10 days ago in MM/DD/YYYY?
# If the concert was scheduled to be on 06/01/1943, but was delayed by one day to today, then today is one day later.
today = datetime(1943, 6, 1) + relativedelta(days=1)
# 10 days ago,
ten_days_ago = today - relativedelta(days=10)
# The answer formatted with %m/%d/%Y is
ten_days_ago.strftime('%m/%d/%Y')
# Q: It is 4/19/1969 today. What is the date 24 hours later in MM/DD/YYYY?
# It is 4/19/1969 today.
today = datetime(1969, 4, 19)
# 24 hours later,
later = today + relativedelta(hours=24)
# The answer formatted with %m/%d/%Y is
later.strftime('%m/%d/%Y')
# Q: Jane thought today is 3/11/2002, but today is in fact Mar 12, which is 1 day later. What is the date 24 hours later in MM/DD/YYYY?
# If Jane thought today is 3/11/2002, but today is in fact Mar 12, then today is 3/12/2002.
today = datetime(2002, 3, 12)
# 24 hours later,
later = today + relativedelta(hours=24)
# The answer formatted with %m/%d/%Y is
later.strftime('%m/%d/%Y')
# Q: Jane was born on the last day of February in 2001. Today is her 16-year-old birthday. What is the date yesterday in MM/DD/YYYY?
# If Jane was born on the last day of February in 2001 and today is her 16-year-old birthday, then today is 16 years later.
today = datetime(2001, 2, 28) + relativedelta(years=16)
# Yesterday,
yesterday = today - relativedelta(days=1)
# The answer formatted with %m/%d/%Y is
yesterday.strftime('%m/%d/%Y')
# Q: {question}
""".strip() + '\n'
llm_out = llm(DATE_UNDERSTANDING_PROMPT.format(question=question))
print(llm_out)
exec(llm_out)
print(born)

This program will output: 02/27/1998

Origin: blog.csdn.net/sinat_39620217/article/details/130678848