ChatGPT-Free Nanny Level User Guide

 

I. Preface

With everyone's support, our free ChatGPT project website, ChatGPT-Free (https://chatgpt.cytsee.com), has now been running stably (more or less) for a month. I often use it to write code, polish articles, and even serve as my technical consultant, and the experience has been excellent.

But recently I have noticed that many friends around me who use our little site really don't know how to ask questions. So today I will show you how to ask questions, how to ask them effectively, and how to craft a good prompt. Along the way, I will also explain how ChatGPT works and the common errors you may encounter (in the context of this site only).

II. Basic knowledge of ChatGPT

What is GPT?

Before the introduction, I have to mention NLP (Natural Language Processing). NLP studies how to make computers understand human language, that is, how to convert natural language into instructions a computer can process. GPT is one of the most advanced achievements in the field of NLP research.

Back in 2018, NLP was still in the era of word2vec embeddings and deep models customized for each task. Although pre-trained models such as ELMo had appeared, their influence was limited. Against this background, the first-generation GPT pre-trained language model emerged.

The original GPT paper is titled Improving Language Understanding by Generative Pre-Training: it uses a general-purpose generative pre-trained model to improve language understanding. The name GPT comes from Generative Pre-Training. On GPT's technical evolution, model structure, and training methods, I could write a hundred articles; after all, you have all generously supported my free project.

How are the answers generated?

As mentioned above, GPT is a text-generation model: it produces output piece by piece based on the input. Each step generates one token (roughly 1–2 Chinese characters, or part of an English word), chosen according to probability statistics over the large amount of text the model "read" during training. In human terms, you could say GPT doesn't actually understand anything, yet it keeps getting things right, and that is essentially what happens. Even OpenAI itself cannot currently fully explain the emergent behavior of such enormous neural networks, so we shouldn't insist on defining it the way humans understand things; as long as it generates correct text for the question, that is enough.
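The token-by-token loop described above can be sketched with a toy probability table. Everything below (the vocabulary, the probabilities) is invented for illustration; a real model computes these probabilities with a neural network over billions of parameters:

```python
import random

# Toy next-token probability table. A real GPT computes these
# probabilities from the entire preceding context, not just the last token.
NEXT_TOKEN_PROBS = {
    "How": {"are": 0.9, "is": 0.1},
    "are": {"you": 0.95, "we": 0.05},
    "you": {"?": 1.0},
    "?": {"<end>": 1.0},
}

def generate(prompt_token: str, max_tokens: int = 10, seed: int = 0) -> list:
    """Autoregressive generation: repeatedly sample the next token given
    what has been generated so far, until an end token appears."""
    rng = random.Random(seed)
    tokens = [prompt_token]
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS.get(tokens[-1])
        if probs is None:
            break
        choices, weights = zip(*probs.items())
        nxt = rng.choices(choices, weights=weights)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(generate("How"))  # → ['How', 'are', 'you', '?']
```

The key point is that each step only picks a statistically likely continuation; there is no separate "understanding" stage.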

How does AI perceive context?

The AI perceives context through the chat history. Every message you send is accompanied by as much of the prior chat history as possible. So don't believe the claim circulating online that "the more you talk to it, the better it understands you and the better it answers": if the conversation runs too long, context will be lost, or you will get an error for exceeding the length limit. Both the official website and the official API perceive context by resending the "chat history" with each request; those who are able can capture the network packets and verify this themselves.
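A minimal sketch of this "resend the history every time" mechanism, assuming the OpenAI-style message format (`role`/`content` dictionaries). The model call is faked here so the example is self-contained:

```python
# Every request carries the full conversation so far; the model itself is stateless.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
]

def ask(question, fake_model_reply):
    """Append the user's question, build the payload that would be sent to
    the model, then record the reply. fake_model_reply stands in for a
    real API call."""
    history.append({"role": "user", "content": question})
    # This payload is what goes over the wire on EVERY request:
    payload = {"model": "gpt-3.5-turbo", "messages": list(history)}
    history.append({"role": "assistant", "content": fake_model_reply})
    return payload["messages"]

sent1 = ask("What is GPT?", "A generative pre-trained model.")
sent2 = ask("Who made it?", "OpenAI.")
print(len(sent1), len(sent2))  # 2 4 — the second request includes the first exchange
```

Because the second payload contains the whole first exchange, the model can answer "Who made it?" without any server-side memory; this is also why long chats eventually hit the length limit.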

Since gpt-3.5/gpt-3.5-turbo supports only about 4,000 tokens at most (1 token ≈ 1 Chinese character; note, only approximately), it is impossible to attach the entire chat history to every request as a conversation deepens (we don't have that much money to burn). Attentive users may have noticed that our site compresses the context using a memory mode: once the total length of the conversation exceeds a certain number of characters, the chat history is automatically summarized into a short text, which is then sent along with your next question. This keeps the continuous dialogue going without losing the main thread of the context. Of course, some details are inevitably lost; that cannot be changed. Even GPT-4 will still lose context once it exceeds its maximum token limit.
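The summarize-and-compress idea can be sketched as follows. This is not the site's actual implementation: the threshold, the character-based length check, and the `summarize` stand-in are all simplifications for illustration (a real version would count tokens and ask the model itself to write the summary):

```python
MAX_CHARS = 200  # illustrative threshold; real limits are measured in tokens

def summarize(older):
    """Stand-in for a real summarization step: in practice you would send
    the older turns back to the model with a 'condense this' instruction."""
    text = " ".join(m["content"] for m in older)
    return {"role": "system", "content": "Summary of earlier chat: " + text[:80]}

def compress(history):
    """If the total length exceeds the budget, replace everything except
    the most recent exchange with a single summary message."""
    if sum(len(m["content"]) for m in history) <= MAX_CHARS:
        return history
    older, recent = history[:-2], history[-2:]
    return [summarize(older)] + recent

long_history = [
    {"role": "user", "content": "x" * 120},
    {"role": "assistant", "content": "y" * 120},
    {"role": "user", "content": "latest question"},
    {"role": "assistant", "content": "latest answer"},
]
print(len(compress(long_history)))  # 3: one summary plus the last exchange
```

The trade-off is exactly the one described above: the dialogue can continue indefinitely, but details buried in the summarized portion are gone for good.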

What is the principle of use?

To sum up: if you use ChatGPT as a tool, you should put forward an accurate and efficient prompt each time to get the best answer. Asked this way, a short continuous dialogue will also stay on topic without drifting.

If you just want to chat with an AI character, you can gradually tune it over the conversation, but a good prompt is still very important.

On the website, you can choose a preset role to start a conversation, and custom roles are also supported; that part is straightforward, so I won't cover it here. Most preset roles are implemented by setting a fixed context. Observant users will find that some roles set up their context as the user (that is, you) making a role-play request to the AI, while others make the request through the system role. After repeated testing, these two methods show almost no difference in effect; interested readers can experiment themselves, and you are welcome to discuss it with me on the official account. At present, this preset context is still compressed once it exceeds a certain word count, and details are gradually lost; based on your feedback, preserving the preset context is under study in the follow-up improvement plan.
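For comparison, here are the two persona-setting styles mentioned above, written as OpenAI-style message lists (the persona text itself is made up for illustration):

```python
# Style 1: the persona is set through the system role.
via_system = [
    {"role": "system", "content": "You are a seasoned Python tutor. Answer concisely."},
    {"role": "user", "content": "Explain list comprehensions."},
]

# Style 2: the persona is set by a role-play request from the user.
via_user = [
    {"role": "user", "content": "Please role-play as a seasoned Python tutor. Answer concisely."},
    {"role": "user", "content": "Explain list comprehensions."},
]

print(via_system[0]["role"], via_user[0]["role"])  # system user
```

In our testing, gpt-3.5-turbo responds almost identically to both; the only structural difference is which role carries the persona instruction.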

Why is it returning an error?

The common causes of errors are selecting a model that is not currently supported (such as gpt-4), or exceeding the request rate limit (10,000 requests/minute). When an error occurs, the response usually contains JSON with a detailed error description. It is in English, but I believe anyone willing to read it can understand it; after all, you are all elites at the forefront of the times. Baidu Translate works too, but my suggestion is: learn English.
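As a concrete example, an error payload in the OpenAI API's error format can be parsed like this (the message text below is invented; real messages vary):

```python
import json

# Example error payload, shaped like the OpenAI API's error object.
raw = """{
  "error": {
    "message": "The model `gpt-4` does not exist or you do not have access to it.",
    "type": "invalid_request_error",
    "code": "model_not_found"
  }
}"""

err = json.loads(raw)["error"]
# Pull out the fields worth reading before asking for help:
print(f"[{err['type']}/{err['code']}] {err['message']}")
```

The `message` field is the part to read (or translate) first; `type` and `code` tell you whether the problem is on your side (bad request, rate limit) or the server's.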

Some clients may show messages such as "something went wrong". There are several possibilities:

1. There is a problem with your network. Check your gateway settings, or switch to a mobile-phone hotspot to test.

2. The interface's domain name has been blocked. Usually I will notice and replace it promptly, though that takes some time; if you can't get in for a long while, please contact me. You can also try clearing the browser cache and retrying.

Other errors are extremely rare. Feedback is welcome, as are your valuable suggestions! I reply to official-account messages and emails (though not necessarily promptly).

III. How to Ask Good Questions

1. Avoid Ambiguous Words

When asking questions, try to avoid vague words such as "some", "many", or "sometimes". These words can lead the AI to misunderstand the scope and purpose of the question.

2. Use clear and concise language

Use concise, clear language so the AI can grasp the meaning of the question more easily. Prefer simple sentences and common vocabulary.

3. Try to avoid ambiguity and semantic confusion

Try to avoid ambiguity and semantic confusion. If a question is unclear, the AI may give a wrong answer. Ambiguity can be reduced by simplifying the question and adding keywords.

4. Deepen model understanding through examples

You can help the AI understand a question better by providing concrete examples. For instance, instead of just asking "How do I make pizza?", describe the specific style or steps you have in mind.

IV. How to make AI better understand the problem

1. Avoid Abbreviations and Slang

Abbreviations and slang should be avoided as much as possible when asking questions; such words may make the question impossible for the AI to interpret.

2. Avoid Uncommon Words

Use common vocabulary and expressions when asking questions; this makes the question easier for the AI to understand.

3. For complex questions, you can ask questions step by step

For more complex questions, break them down and ask step by step. Each smaller question is easier for the AI to understand and answer accurately.

4. Pay attention to grammar and punctuation

Pay attention to grammar and punctuation when asking questions; well-formed sentences are easier for the AI to parse and answer accurately.

V. How to get better answers

1. Identify the scope and purpose of the problem

Before asking a question, confirm its scope and purpose. A well-scoped question is easier for the AI to understand and answer accurately.

2. Ask specific questions

Ask in a targeted, specific way; specific questions get more specific and accurate answers.

3. Avoid Asking Vague or Irrelevant Questions

Avoid vague or irrelevant questions; they dilute the context and lead to less accurate answers.

4. Further questioning and clarification of answers

After receiving an answer, you can follow up and ask for clarification. This helps the AI better understand what you want and refine its answer.

5. Combining Domain Knowledge and Background Information

When asking questions, include relevant domain knowledge and background information. The extra context makes it easier for the AI to understand the question and give a more accurate answer.

Finally, I wish you all prosperity every day! If you can spend a little money each month to support the site's operation, I will be very grateful!


Origin blog.csdn.net/weixin_42117463/article/details/130499128