What is Prompt Engineering?

Understanding why large-scale AI models behave the way they do is an art. Even the most accomplished technologists can be baffled by the unexpected capabilities of large language models (LLMs), the fundamental building blocks of AI chatbots like ChatGPT.

So it's no surprise that prompt engineering is a hot job in generative AI. But what exactly does the job involve?

What is prompt engineering?

Professional prompt engineers spend their days studying how artificial intelligence behaves. Using carefully crafted prompts with precise verbs and vocabulary, they push chatbots and other types of generative AI to their limits, uncovering bugs and new problems.

The details of the role vary from organization to organization, but in general, prompt engineers work on improving machine-generated output in a repeatable way. In other words, they try to align AI behavior with human intent.

Why Prompt Engineering Isn't Just for Techies

While great prompt engineers possess a rare combination of discipline and curiosity, they also draw on general skills that are not limited to computer science when developing good prompts.

The rise of prompt engineering is opening up certain aspects of generative AI development to creatives with more diverse skill sets, much of which has to do with no-code innovation. Andrej Karpathy, former director of AI at Tesla, tweeted in January 2023 that "the hottest new programming language is English".

To some extent, good prompt engineers compensate for the limitations of AI: chatbots can be strong on grammar and vocabulary, but they have no first-hand experience of the world. That makes AI development a multidisciplinary endeavor.

However, some experts have questioned the role's long-term value, since future models may produce good output even from clumsier prompts. But there are countless use cases for generative AI, and the quality bar for its output will keep rising. This suggests that prompt engineering as a job (or at least as a function within a job) isn't going away anytime soon.

5 Non-Technical Prompt Engineering Skills

Anyone who interacts with generative AI should be interested in the day-to-day work of prompt engineers for two reasons: (1) it illuminates the capabilities and limitations of the technology, and (2) it shows people how to use the skills they already have to hold better conversations with AI.

Here are five non-technical skills that feed into the multidisciplinary field of prompt engineering and help advance AI technology.

1. Communication

Like a project manager, teacher, or anyone who regularly walks others through how to complete a task, a prompt engineer needs to be good at giving instructions. Most humans need plenty of examples to fully understand instructions, and the same goes for AI.

Edward Tian, who developed GPTZero, an AI-detection tool that helps determine whether a high school essay was written by AI, showed examples to a large language model so that it could write in different voices.

Of course, Tian is a machine learning engineer with deep technical skills, but anyone developing prompts who wants a chatbot to write in a specific way can use this approach, whether they are a seasoned professional or a schoolkid.
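Tian's exact workflow isn't public, but the underlying technique, often called few-shot prompting, is easy to sketch. The snippet below is a minimal illustration using the OpenAI Python SDK: two example exchanges establish a voice, and the final request asks the model to continue in it. The model name, system message, and example blurbs are all illustrative assumptions.

```python
# A minimal few-shot prompting sketch using the OpenAI Python SDK.
# The model name, system message, and example blurbs are illustrative
# assumptions, not Tian's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "You write short product blurbs in a dry, deadpan voice."},
    # Example 1: demonstrate the target voice.
    {"role": "user", "content": "Write a blurb for a stapler."},
    {"role": "assistant", "content": "It staples. It has done so for years. It will continue."},
    # Example 2: a second sample reinforces the pattern.
    {"role": "user", "content": "Write a blurb for a desk lamp."},
    {"role": "assistant", "content": "Light when you want it. Darkness when you don't."},
    # The real request: the model now has two samples of the voice to imitate.
    {"role": "user", "content": "Write a blurb for a coffee mug."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```

The same pattern works in any chat interface: paste in a couple of examples of the style you want before making the real request.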

2. Subject matter expertise

Many prompt engineers are responsible for tuning chatbots for specific use cases, such as healthcare research.

That's why job postings for prompt engineering roles that require specific industry expertise are suddenly popping up. For example, UK law firm Mishcon de Reya LLP posted a vacancy for a GPT Legal Prompt Engineer, looking for candidates who "have an in-depth knowledge of the practice of law".

Subject matter expertise, whether in healthcare, law, marketing, or woodworking, is useful for crafting strong prompts. The details make the difference, and real-world experience counts for a lot when talking to AI.

3. Language

For a prompt to succeed, it needs to convey intent. That's why people who are good at using verbs, vocabulary, and tenses to express an overall goal can improve AI performance.

When Anna Bernstein started working at Copy.ai, she found it useful to think of prompts as a kind of magic spell: one wrong word can produce a very different result than intended. "As a poet, the role ... taps into my obsessive nature with approaching language. It's a really weird intersection of my literary background and my analytical mind," she told Business Insider.

AI prompts are written in prose rather than a programming language, which means people should unleash their inner language enthusiast when developing them.

4. Critical Thinking

Generative AI is good at synthesizing large amounts of information, but it can hallucinate (that's a real technical term). AI hallucinations occur when a chatbot was poorly trained or designed, or when it runs up against insufficient data. When a chatbot hallucinates, it simply spits out false information (in a fairly authoritative, convincing way).

Prompt engineers find these weaknesses and then train the bot to do better. For example, Riley Goodside, a prompt engineer at the artificial intelligence startup Scale AI, got a wrong answer when he asked a chatbot: "Which NFL team won the Super Bowl the year Justin Bieber was born?" He then asked the chatbot to lay out a series of step-by-step deductions before arriving at an answer. Eventually, it corrected its mistake.
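Goodside's exact prompts aren't reproduced here, but the general pattern, asking the model to lay out its deductions before committing to an answer, can be sketched in a few lines. The version below uses the OpenAI Python SDK; the model name and the prompt wording are assumptions, not his actual setup.

```python
# A minimal sketch of step-by-step (chain-of-thought style) prompting
# with the OpenAI Python SDK. The model name and prompt wording are
# assumptions, not Goodside's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Which NFL team won the Super Bowl the year Justin Bieber was born?"

# Naive prompt: the model may jump straight to a (possibly wrong) answer.
direct = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)
print("Direct answer:", direct.choices[0].message.content)

# Revised prompt: ask the model to lay out its deductions first, which
# makes intermediate mistakes visible and easier for it to correct.
guided = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            question
            + " First state the year Justin Bieber was born, then identify the Super Bowl"
              " played that year, then name the winning team."
        ),
    }],
)
print("Step-by-step answer:", guided.choices[0].message.content)
```

Spelling out the intermediate steps (the birth year, that year's Super Bowl, its winner) is what surfaces the error and gives the model a chance to correct it.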

This underscores that an appropriate level of familiarity with the subject matter is key: it's probably not a good idea to have a chatbot generate something you can't reliably fact-check.

5. Creativity

Trying new things is the definition of creativity, and it's also the essence of good prompt engineering. Anthropic's job posting says the company is looking for a prompt engineer who is, among other qualifications, a "creative hacker."

Yes, linguistic precision is important, but good prompting also takes some experimentation. The larger the model, the greater its complexity and, in turn, the greater the potential for unexpected but potentially surprising results.

By trying various prompts and then refining those instructions based on the results, generative AI users can increase the likelihood of arriving at something truly unique.


Origin blog.csdn.net/wwlsm_zql/article/details/131609384