How to use Python to quickly build your own ChatGPT chatbot


Ever since OpenAI launched ChatGPT, the internet hasn't stopped speculating about the future of technology or humanity.

ChatGPT has become a revolutionary product that has the potential to impact almost all areas of human work.

For developers, integrating the OpenAI API represents a new frontier of innovation.

In this article, we'll use Gradio and the OpenAI ChatGPT model to quickly build our own chatbot.

Basic introduction to Gradio

Gradio is an open source tool written in Python.

Gradio provides a convenient way for machine learning developers to share their models.

It provides a simple, user-friendly web interface to share machine learning models with everyone, anytime, anywhere.

Gradio's unique selling point is that it doesn't require developers to write JavaScript, HTML, or CSS to build a web interface.

In order to build web applications, you need to be familiar with Gradio's basic building blocks.

"Gradio allows you to design web applications in two ways: Interface and Block."

Interface

It's a high-level class that lets you build components with a few lines of code.

You can build input/output components for text, images, audio and video.

This has lower design flexibility.

"A simple example of the Gradio interface."

import gradio as gr

def sketch_recognition(img):
    pass  # Implement your sketch recognition model here...

gr.Interface(fn=sketch_recognition, inputs="sketchpad", outputs="label").launch()

This creates a simple web interface with a sketchpad as the input component and a label as the output component. The sketch_recognition function is responsible for producing the result.
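
For the "label" output, the function is expected to return either a class name or a dictionary mapping class names to confidence scores. A minimal placeholder (with made-up class names and no real model) could look like this:

import gradio as gr

def sketch_recognition(img):
    # A real implementation would run the sketch (typically a NumPy array) through a model.
    # The "label" output accepts a dict of class names -> confidence scores.
    return {"cat": 0.7, "dog": 0.2, "bird": 0.1}

gr.Interface(fn=sketch_recognition, inputs="sketchpad", outputs="label").launch()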


Blocks

Gradio Blocks provides a lower-level way to build interfaces.

"With increased flexibility, this allows developers to go deeper into building complex web interfaces."

Blocks has advanced features that "give you the flexibility to place components anywhere on the screen, improved control over data flow, and event handlers for interactive user experiences."

import gradio as gr

def greet(name):
    return "Hello " + name + "!"

with gr.Blocks() as demo:
    name = gr.Textbox(label="Name")
    output = gr.Textbox(label="Output Box")
    greet_btn = gr.Button("Greet")
    greet_btn.click(fn=greet, inputs=name, outputs=output, api_name="greet")

demo.launch()


Get an OpenAI API key

Before building the chat interface, we need access to the OpenAI API.

So, the first thing we need to do is create an OpenAI account and generate an API key.

You can visit https://platform.openai.com/account/api-keys to generate your API key.
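
With the key in hand, it needs to be made available to the openai library. A common pattern (assuming the pre-1.0 openai package used in this article, and that the key is stored in an OPENAI_API_KEY environment variable rather than hard-coded) is:

import os
import openai

# Read the key from an environment variable so it never appears in source code.
openai.api_key = os.getenv("OPENAI_API_KEY")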

Let's look at the request and response structure of the OpenAI API.

Below is an example of a typical request to the ChatGPT API to get a response.

import openai

# openai.api_key must already be set (see the previous section).
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)

The messages parameter is a list of dictionaries, each with a role and its content.

  • The system role provides context that steers the model to behave in a specific way.

  • The user role stores the user's prompts.

  • The assistant role saves the responses from the model.

This message list is what maintains the context of the conversation.

If you have API access, the model parameter can be set to "gpt-3.5-turbo" or "gpt-4".
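
As a rough sketch of how this context accumulates turn by turn (the ask() helper is just for illustration and assumes the API key is already set):

import openai

messages = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(prompt, model="gpt-3.5-turbo"):
    # Add the new user turn, send the full history, then store the reply
    # so the next call still has the whole conversation as context.
    messages.append({"role": "user", "content": prompt})
    response = openai.ChatCompletion.create(model=model, messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("Who won the world series in 2020?"))
print(ask("Where was it played?"))  # the model knows "it" from the previous turn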

Now let's look at our response format.

{
 'id': 'chatcmpl-6p9XYPYSTTRi0xEviKjjilqrWU2Ve',
 'object': 'chat.completion',
 'created': 1677649420,
 'model': 'gpt-3.5-turbo',
 'usage': {'prompt_tokens': 56, 'completion_tokens': 31, 'total_tokens': 87},
 'choices': [
   {
    'message': {
      'role': 'assistant',
      'content': 'The 2020 World Series was played in Arlington, Texas at the Globe Life Field.'},
    'finish_reason': 'stop',
    'index': 0
   }
  ]
}

The response comes back as JSON; the generated reply lives in choices[0].message.content.
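
Assuming the request above was stored in a variable called response (as in the earlier snippet), the reply text and token usage can be pulled out like this:

# Extract the assistant's reply from the first choice.
reply = response.choices[0].message.content
print(reply)

# Token accounting is available under "usage".
print(response.usage.total_tokens)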

Build a ChatGPT chatbot

Application front end

We'll use Gradio's Blocks class. Gradio has a pre-built chatbot component that renders a chat interface.

with gr.Blocks() as demo: 
    chatbot = gr.Chatbot(value=[], elem_id="chatbot").style(height=650)    
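
For context, the Chatbot component's value (and the history object our callbacks will receive later) is a list of (user_message, bot_message) pairs in the Gradio 3.x API used here. Pre-seeding the widget with one exchange would look like this:

chatbot = gr.Chatbot(
    value=[("Hi there", "Hello! How can I help you today?")],  # one prior (user, bot) exchange
    elem_id="chatbot",
).style(height=650)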


Now we need a text field to pass the prompt.

Gradio has Row and Column classes that let you lay out components horizontally and vertically.

We'll add a textbox component that takes text input from the end user.

with gr.Row():
    with gr.Column(scale=0.85):
        txt = gr.Textbox(
            show_label=False,
            placeholder="Enter text and press enter",
        ).style(container=False)

Save and reload the page. You'll see a text box below the chat interface.


  • The gr.Row() container creates a layout block whose child components are placed horizontally in a single row.

  • Inside that container, gr.Column() creates another layout block. Unlike Row, it stacks its children vertically.

  • Inside the column container, we define a textbox component. This will accept any text input from the user. We can configure some parameters to make it more user friendly.

  • The scale parameter on the column controls how much horizontal space it takes up. A value of 0.85 means it occupies 85% of the row's width.

If you wish to add any other components, you can add them using a combination of Row and Column containers.

Let's say we add a radio button to switch between models. This can be done as follows.

with gr.Blocks() as demo:  
    radio = gr.Radio(value='gpt-3.5-turbo', choices=['gpt-3.5-turbo','gpt-4'], label='models')
    chatbot = gr.Chatbot(value=[], elem_id="chatbot").style(height=650)
    with gr.Row():
        with gr.Column(scale=0.70):
            txt = gr.Textbox(
                show_label=False,
                placeholder="Enter text and press enter, or upload an image",
            ).style(container=False) 


So far, we have created the front end of the application.

Application back end

The front end is in place; now all that's left is to make it work.

The first thing we need to do is process the input.

We start by defining a function, add_text(), that formats the incoming message appropriately.

# messages must be initialized once at module level, for example:
# messages = [{"role": "system", "content": "You are a helpful assistant."}]

def add_text(history, text):
    global messages  # messages (a list of role/content dicts) is defined globally
    history = history + [(text, '')]
    messages = messages + [{"role": 'user', 'content': text}]
    return history, ""

Here, history is the chatbot's list of (user, bot) message pairs and text is the user's input.
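
To make the data flow concrete, here is what a single call does to both structures (a toy illustration assuming messages starts out empty):

history = []
messages = []

history, cleared = add_text(history, "Hello!")
# history  -> [("Hello!", "")]    a new chat row with an empty slot for the bot reply
# messages -> [{"role": "user", "content": "Hello!"}]
# cleared  -> ""                  returned so the textbox is emptied after submission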

Next, define a function that returns the response.

def generate_response(history, model):
    global messages

    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0.2,
    )

    response_msg = response.choices[0].message.content
    messages = messages + [{"role": 'assistant', 'content': response_msg}]

    # Stream the reply into the last chat row one character at a time.
    for char in response_msg:
        history[-1][1] += char
        # time.sleep(0.05)  # optional delay to slow the stream (needs "import time")
        yield history

As you can see above, we send the model name, the messages list, and a temperature value to the OpenAI API and get a response back. The final loop then yields the reply one character at a time so the text renders progressively, which improves the user experience.

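
Because generate_response is a generator, Gradio re-renders the chatbot on every yield, which is what produces the typing effect. Outside Gradio the same behaviour looks roughly like this (a toy illustration; it needs a valid API key and the global messages list, and the last history entry is written as a mutable [user, bot] list so it can be updated in place):

history = [["Tell me a joke", ""]]
messages.append({"role": "user", "content": "Tell me a joke"})

for partial in generate_response(history, "gpt-3.5-turbo"):
    print(partial[-1][1])  # the bot reply grows by one character per yield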

Next, let's wire these functions up so they are called when the user submits a prompt.

with gr.Blocks() as demo:
    radio = gr.Radio(value='gpt-3.5-turbo', choices=['gpt-3.5-turbo', 'gpt-4'], label='models')
    chatbot = gr.Chatbot(value=[], elem_id="chatbot").style(height=550)
    with gr.Row():
        with gr.Column(scale=0.90):
            txt = gr.Textbox(
                show_label=False,
                placeholder="Enter text and press enter",
            ).style(container=False)

    txt.submit(add_text, [chatbot, txt], [chatbot, txt], queue=False).then(
        generate_response, inputs=[chatbot, radio], outputs=chatbot)

demo.queue()
demo.launch()

When the user submits text, add_text receives the chatbot value and the prompt as inputs; its outputs update the chatbot component and clear the textbox. Then generate_response is triggered with the chatbot value and the selected model, and it streams the response into the chatbot.

Now, the chat web application is ready.
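
For reference, here are all of the pieces assembled into a single runnable script (a sketch based on the snippets above; it assumes the Gradio 3.x .style() API used throughout this article, the pre-1.0 openai package, and an API key in the OPENAI_API_KEY environment variable):

import os
import openai
import gradio as gr

openai.api_key = os.getenv("OPENAI_API_KEY")

# Conversation context shared across turns.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def add_text(history, text):
    global messages
    history = history + [(text, '')]
    messages = messages + [{"role": 'user', 'content': text}]
    return history, ""

def generate_response(history, model):
    global messages

    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0.2,
    )

    response_msg = response.choices[0].message.content
    messages = messages + [{"role": 'assistant', 'content': response_msg}]

    # Stream the reply character by character into the last chat row.
    for char in response_msg:
        history[-1][1] += char
        yield history

with gr.Blocks() as demo:
    radio = gr.Radio(value='gpt-3.5-turbo', choices=['gpt-3.5-turbo', 'gpt-4'], label='models')
    chatbot = gr.Chatbot(value=[], elem_id="chatbot").style(height=550)
    with gr.Row():
        with gr.Column(scale=0.90):
            txt = gr.Textbox(
                show_label=False,
                placeholder="Enter text and press enter",
            ).style(container=False)

    txt.submit(add_text, [chatbot, txt], [chatbot, txt], queue=False).then(
        generate_response, inputs=[chatbot, radio], outputs=chatbot)

demo.queue()
demo.launch()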

Let's take a look at the final result. (The animated demo can't be reproduced here; see the original article.)

That's all for today. If you found it useful, please like and share it.


Origin: blog.csdn.net/javastart/article/details/131994356