How to write better code with large language models

Generating software code is one of the most powerful applications of ChatGPT and other instruction-following large language models (LLMs). Given the right prompts, an LLM can generate code that would otherwise take hours to write.

However, an LLM cannot do everything a programmer does. It cannot break down complex problems, reason logically and structurally, or create multi-layered solutions. It can only process one token at a time, predicting the next piece of code that is likely to follow the user's prompt and its own output so far.

Here are four tips that will help you make the most of ChatGPT's impressive coding capabilities while avoiding its pitfalls:

01

If you can't verify it, don't trust ChatGPT

ChatGPT always answers confidently, even if its answer is wrong

A distinguishing feature of LLMs like ChatGPT is their authoritative voice. They always answer confidently, even when their output is nonsensical. On several occasions, ChatGPT has given me convincing but incorrect answers to my questions.

My rule of thumb with ChatGPT is to only use it on topics that I fully understand and can verify. For example, I wouldn't use it to write an explanation of quantum physics because I don't know enough about the subject. However, ChatGPT helps me write interesting articles on the basics of machine learning because I can fully check and correct its output.

Likewise, when using ChatGPT to generate code, you can only trust it for tasks that you can fully verify. ChatGPT can write code that doesn't work, or worse, code that works but has security issues. I see it as a tool for automating tedious tasks that would otherwise take me a long time to write or require consulting documentation pages or online forums like Stack Overflow several times. For example, you can ask it to write a sorting algorithm, code to start a web server in Python, SQL queries against a given database schema, or data visualization commands with Matplotlib.
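To make this concrete, here is the kind of small, self-contained snippet I would ask ChatGPT for. It is a minimal sketch, assuming Python's standard http.server module; the port number and handler choice are arbitrary, and the point is that the snippet is short enough to read and verify line by line before running it.

```python
# A small, easy-to-verify example of the kind of task worth delegating:
# start a simple web server that serves files from the current directory.
from http.server import HTTPServer, SimpleHTTPRequestHandler

def run(port: int = 8000) -> None:
    """Serve the current directory on the given port (8000 is arbitrary)."""
    server = HTTPServer(("0.0.0.0", port), SimpleHTTPRequestHandler)
    print(f"Serving on http://localhost:{port} ...")
    server.serve_forever()

if __name__ == "__main__":
    run()
```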

02

Iterate one block at a time

Don't expect ChatGPT to successfully write complete programs or complex code blocks for you

LLMs tend to struggle with tasks that require reasoning and step-by-step planning. Therefore, don't expect ChatGPT to successfully write complete programs or complex code blocks for you. However, that doesn't mean ChatGPT can't help with complex programming tasks. If you give it a simple, well-bounded task like the ones mentioned above, its chances of success improve greatly.

Split your task into smaller steps and prompt ChatGPT one step at a time. An approach that works well is to first give ChatGPT an overview of the program logic, step by step, so it understands what you want to build.

This prepares the model for the larger task. Then prompt the LLM for the code, one step at a time. In short, you do the reasoning and ChatGPT does the writing. (Incidentally, this approach of starting with an overview and working through the steps also works well for other tasks, such as writing certain types of articles.)

If you don't have a clear idea of the step-by-step process, you can get help from ChatGPT itself. At the start of a coding session, prompt the LLM to generate the series of steps needed to complete the task. Then correct the overview as needed and start prompting it to generate the code for the first step, as sketched below.
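As an illustration, here is a hypothetical sketch of how a small data-analysis task might look once it has been broken into steps. Each function corresponds to one step you would prompt ChatGPT for separately; the file name sales.csv, the price column, and the function names are all assumptions made up for the example.

```python
# Hypothetical task broken into steps, each small enough to prompt for
# (and verify) on its own.
import csv
from statistics import mean

def load_prices(path: str) -> list[float]:
    """Step 1: read the 'price' column from a CSV file."""
    with open(path, newline="") as f:
        return [float(row["price"]) for row in csv.DictReader(f)]

def summarize(prices: list[float]) -> dict:
    """Step 2: compute basic summary statistics."""
    return {"count": len(prices), "mean": mean(prices),
            "min": min(prices), "max": max(prices)}

def report(summary: dict) -> None:
    """Step 3: print a short, human-readable report."""
    for key, value in summary.items():
        print(f"{key}: {value}")

if __name__ == "__main__":
    report(summarize(load_prices("sales.csv")))
```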

03

Give feedback to ChatGPT

ChatGPT is very context-sensitive and its behavior can change based on the chat history

ChatGPT cannot be expected to deliver clean, secure, and working code at every step. You will make corrections and adjustments as you review its code and enter it into your IDE. When you do, it is good practice to give ChatGPT the corrected code as feedback and, where applicable, an explanation.

One thing to note is that ChatGPT is very context-sensitive and its behavior changes based on the chat history. You can take advantage of this, which is why feeding back corrected code snippets is so helpful.

For example, you can say, "Here's the code I changed: [insert corrected code]. Please apply [insert behavior correction] in later steps." This helps steer ChatGPT in the right direction and keeps it from repeating the same mistakes in its answers to future prompts. (Similarly, I have had success using this feedback method with ChatGPT on other tasks, including writing articles.) A sketch of the kind of correction you might feed back is shown below.
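As a concrete, hypothetical illustration, suppose ChatGPT generated a database query by formatting user input directly into the SQL string. The corrected version below, using a parameterized query with Python's sqlite3 module, is the kind of snippet you would paste back along with a one-line explanation of why you changed it. The table and column names are assumptions for the example.

```python
import sqlite3

# Version ChatGPT might generate: works, but is vulnerable to SQL
# injection because user input is formatted directly into the query.
# def find_user(conn, username):
#     cur = conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
#     return cur.fetchone()

# Corrected version to paste back as feedback: a parameterized query
# lets the database driver escape the input safely.
def find_user(conn: sqlite3.Connection, username: str):
    cur = conn.execute("SELECT * FROM users WHERE name = ?", (username,))
    return cur.fetchone()
```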

You can also use ChatGPT to get feedback on its own code. Try opening a separate chat session, pasting in the code ChatGPT generated, and asking it for improvements or corrections. This sometimes yields interesting results and new directions to explore.

04

Clean up ChatGPT's context

Regularly cleaning up the chat context helps improve the accuracy of the code the model generates

If you're working on a particularly large task, your chat history can become very long, especially if there is a lot of back and forth with ChatGPT. Depending on which model you use, the LLM's context memory may become exhausted. The free version of ChatGPT has a context window of about 4,000 tokens. (For language tasks, 100 tokens correspond to roughly 75 words; for programming tasks, it's usually much less.)
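If you want a rough sense of how much of that window your history is using, you can count tokens yourself. This is a minimal sketch, assuming the third-party tiktoken package and the cl100k_base encoding used by recent OpenAI chat models; the exact tokenizer depends on which model you use.

```python
# Rough estimate of how many tokens your accumulated prompts and code use.
import tiktoken

def count_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    """Return the approximate number of tokens in the given text."""
    encoding = tiktoken.get_encoding(encoding_name)
    return len(encoding.encode(text))

chat_history = "...paste your prompts and generated code here..."
print(f"Approximate tokens used: {count_tokens(chat_history)} / 4000")
```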

A trick that works well is to periodically clean up the context. To do this, start a new chat session and, in the first prompt, give ChatGPT an overview of the task, the steps you have completed so far, the code you have generated so far, and some general guidelines you would like it to follow. Then tell it to continue from the next step. By removing the clutter from previous interactions with the LLM, you provide a clearer context and improve the accuracy of the code the model generates. A sketch of how you might assemble such a prompt is shown below.
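Here is a hypothetical helper for assembling that "context reset" prompt. The template wording, field names, and example values are my own assumptions, not a fixed recipe; adapt them to your task.

```python
# Assemble a fresh-session prompt from an overview, completed steps,
# the code so far, and guidelines to follow.
def build_reset_prompt(overview: str, completed_steps: list[str],
                       code_so_far: str, guidelines: list[str]) -> str:
    steps = "\n".join(f"- {s}" for s in completed_steps)
    rules = "\n".join(f"- {g}" for g in guidelines)
    return (
        f"Task overview:\n{overview}\n\n"
        f"Steps completed so far:\n{steps}\n\n"
        f"Code so far:\n{code_so_far}\n\n"
        f"Guidelines to follow:\n{rules}\n\n"
        "Please continue from the next step."
    )

prompt = build_reset_prompt(
    overview="Build a script that loads sales data and reports summary stats.",
    completed_steps=["Load the CSV file", "Compute summary statistics"],
    code_so_far="<paste the code generated so far>",
    guidelines=["Use only the Python standard library", "Add docstrings"],
)
print(prompt)
```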
