What happens when TS meets AI?


Artificial intelligence is developing rapidly, and large language models are becoming more and more powerful. Using AI tools at work can greatly improve efficiency: type a few characters, press the Tab key, and the code is completed intelligently.

Beyond code completion, we can also let AI implement entire functions for us and return the JSON data we need.

Let's look at an example first:

// index.ts
interface Height {
  meters: number;
  feet: number;
}

interface Mountain {
  name: string;
  height: Height;
}

// @ts-ignore
// @magic
async function getHighestMountain(): Promise<Mountain> {
  // Return the highest mountain
}

(async () => {
  console.log(await getHighestMountain());
})();

In the above code, we define an asynchronous function getHighestMountain that obtains information about the highest peak in the world; its return value follows the data structure defined by the Mountain interface. There is no concrete implementation inside the function body: we only describe what the function should do through a comment.

After compiling and executing the above code, the console will output the following results:

{ name: 'Mount Everest', height: { meters: 8848, feet: 29029 } }

The highest mountain in the world is indeed Mount Everest, the main peak of the Himalayas, with an altitude of 8,848.86 meters. Isn't it amazing?

Next, I will reveal the secret of the getHighestMountain function.

To understand what happens inside the getHighestMountain asynchronous function, let's take a look at the compiled JS code:

const { fetchCompletion } = require("@jumploops/magic");

// @ts-ignore
// @magic
function getHighestMountain() {
    return __awaiter(this, void 0, void 0, function* () {
        return yield fetchCompletion("{\n  // Return the highest mountain\n}", {
            schema: "{\"type\":\"object\",\"properties\":{\"name\":{\"type\":\"string\"},\"height\":{\"$ref\":\"#/definitions/Height\"}},\"required\":[\"height\",\"name\"],\"definitions\":{\"Height\":{\"type\":\"object\",\"properties\":{\"meters\":{\"type\":\"number\"},\"feet\":{\"type\":\"number\"}},\"required\":[\"feet\",\"meters\"]}},\"$schema\":\"http://json-schema.org/draft-07/schema#\"}"
        });
    });
}

As can be seen from the compiled code, the getHighestMountain function now simply calls the fetchCompletion function from the @jumploops/magic library.

Looking at the arguments passed to it, the first is the body of the original TS function, containing only the comment that describes the task. The second is an object with a schema property, whose value is the JSON Schema generated from the Mountain interface.
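
For readability, here is the same JSON Schema, pretty-printed from the escaped string in the compiled output above:

{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "height": { "$ref": "#/definitions/Height" }
  },
  "required": ["height", "name"],
  "definitions": {
    "Height": {
      "type": "object",
      "properties": {
        "meters": { "type": "number" },
        "feet": { "type": "number" }
      },
      "required": ["feet", "meters"]
    }
  },
  "$schema": "http://json-schema.org/draft-07/schema#"
}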

Next, let's focus on the fetchCompletion function in the @jumploops/magic library. It is defined in the fetchCompletion.ts file, and its internal processing flow consists of three steps:

  • Assemble the prompt for the Chat Completions API;

  • Call the Chat Completions API to obtain the response;

  • Parse the response and validate the resulting object against the JSON Schema.

// fetchCompletion.ts
export async function fetchCompletion(
  existingFunction: string, 
  { schema }: { schema: any }) {
  let completion;

  // (1)
  const prompt = `
    You are a robotic assistant. Your only language is code. You only respond with valid JSON. Nothing but JSON. 
 For example, if you're planning to return:
      { "list": [ { "name": "Alice" }, { "name": "Bob" }, { "name": "Carol"}] } 
    Instead just return:
      [ { "name": "Alice" }, { "name": "Bob" }, { "name": "Carol"}]
    ...

    Prompt: ${existingFunction.replace('{', '')
     .replace('}', '').replace('//', '').replace('\n', '')}

    JSON Schema: 
    \`\`\`
      ${JSON.stringify(JSON.parse(schema), null, 2)}
    \`\`\`
  `;


  // (2)
  try {
    completion = await openai.createChatCompletion({
      model: process.env.OPENAI_MODEL ?
       process.env.OPENAI_MODEL : 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: prompt }],
    });
  } catch (err) {
    console.error(err);
    return;
  }

  const response = JSON.parse(completion.data.choices[0].message.content);

  // (3)
  if (!validateAPIResponse(response, JSON.parse(schema))) {
    throw new Error("Invalid JSON response from LLM");
  }

  return JSON.parse(completion.data.choices[0].message.content);
}

In the prompt (step 1), we assign a role to the AI and provide a few examples to guide it toward returning valid JSON.

To call the Chat Completions API (step 2), the library directly uses the createChatCompletion method provided by the openai package.

In step (3), after the response has been parsed, the validateAPIResponse function is called to verify the response object. Its implementation is fairly simple: it uses the ajv library internally to validate the object against the JSON Schema.

import Ajv from "ajv";
import ajvFormats from "ajv-formats";

export function validateAPIResponse(
  apiResponse: any, schema: object): boolean {
  const ajvInstance = new Ajv();
  ajvFormats(ajvInstance);
  const validate = ajvInstance.compile(schema);
  const isValid = validate(apiResponse);

  if (!isValid) {
    console.log("Validation errors:", validate.errors);
  }

  return isValid;
}
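
To see it in action, here is a small usage sketch written for this article (not part of the library) that validates the Mountain response from the first example against the schema generated by the transformer:

// Usage sketch for this article: validate the Mountain response from the
// first example against the generated JSON Schema.
const mountainSchema = {
  type: "object",
  properties: {
    name: { type: "string" },
    height: { $ref: "#/definitions/Height" },
  },
  required: ["height", "name"],
  definitions: {
    Height: {
      type: "object",
      properties: { meters: { type: "number" }, feet: { type: "number" } },
      required: ["feet", "meters"],
    },
  },
};

const response = { name: "Mount Everest", height: { meters: 8848, feet: 29029 } };
console.log(validateAPIResponse(response, mountainSchema)); // true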

Next, let's analyze how the TS code is compiled into JS code that calls the fetchCompletion function.

Internally, @jumploops/magic uses the ttypescript library, which allows us to configure custom transformers in the tsconfig.json file.

Inside the transformer, the API provided by the TypeScript compiler is used to parse and manipulate the AST and generate the desired code. The main processing flow of the transformer can also be divided into three steps (a simplified sketch follows the list below):

  • Scan the source code for AI functions carrying the // @magic annotation;

  • Generate the corresponding JSON Schema object from the return type of the AI function;

  • Extract the comment from the AI function body and generate the code that calls the fetchCompletion function.
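
To make these steps a little more concrete, here is a highly simplified sketch of such a transformer. It is not the library's actual transformer.ts: it only detects functions carrying the // @magic annotation and reads their return types, with comments marking where the real transformer generates the JSON Schema and rewrites the function body:

// transformer-sketch.ts (illustrative only, not the library's transformer.ts)
import * as ts from "typescript";

// ttypescript passes the Program to the transformer factory configured in
// tsconfig.json ("plugins": [{ "transform": "..." }]).
export default function magicTransformer(program: ts.Program) {
  const checker = program.getTypeChecker();
  return (ctx: ts.TransformationContext) => (sourceFile: ts.SourceFile) => {
    const visitor = (node: ts.Node): ts.Node => {
      if (ts.isFunctionDeclaration(node) && hasMagicComment(node, sourceFile)) {
        const signature = checker.getSignatureFromDeclaration(node);
        if (signature) {
          // Step 2 would convert this type (e.g. Promise<Mountain>) to JSON Schema.
          const returnType = checker.getReturnTypeOfSignature(signature);
          console.log(node.name?.text, "returns", checker.typeToString(returnType));
          // Step 3 would replace the function body with a fetchCompletion(...) call.
        }
      }
      return ts.visitEachChild(node, visitor, ctx);
    };
    return ts.visitNode(sourceFile, visitor) as ts.SourceFile;
  };
}

// Step 1: detect the // @magic annotation in the function's leading comments.
function hasMagicComment(node: ts.Node, sourceFile: ts.SourceFile): boolean {
  const text = sourceFile.getFullText();
  const ranges = ts.getLeadingCommentRanges(text, node.getFullStart()) ?? [];
  return ranges.some((r) => text.slice(r.pos, r.end).includes("@magic"));
}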

The focus of this article is not on how to parse and manipulate the AST objects produced by the TypeScript compiler. If you are interested, you can read the transformer.ts file in the @jumploops/magic project. If you want to try the AI function yourself, you can refer to the package.json and tsconfig.json configurations used in this article's example.

package.json

{
  "name": "magic",
  "scripts": {
    "start": "ttsc && cross-env OPENAI_API_KEY=sk-*** node src/index.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "@jumploops/magic": "^0.0.6",
    "cross-env": "^7.0.3",
    "ts-patch": "^3.0.0",
    "ttypescript": "^1.5.15",
    "typescript": "4.8.2"
  }
}

tsconfig.json

{
  "compilerOptions": {
    "target": "es2016",
    "module": "commonjs",
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true,
    "strict": true,
    "skipLibCheck": true,
    "plugins": [{ "transform": "@jumploops/magic" }]
  },
  "include": ["src/**/*.ts"],
  "exclude": [ "node_modules"],
}

Note that the Chat Completions API does not always return valid JSON in the format we expect, so in practice you will need to add appropriate exception-handling logic.
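
For example, a thin wrapper along these lines could retry the call a few times. This helper is written for this article and is not provided by the library; it only assumes the fetchCompletion export shown in the compiled code above:

// Illustrative wrapper, not part of @jumploops/magic: retry the call when the
// model's answer cannot be parsed or does not match the schema.
import { fetchCompletion } from "@jumploops/magic";

async function fetchCompletionWithRetry(
  functionBody: string,
  schema: string,
  maxAttempts = 3
) {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fetchCompletion(functionBody, { schema });
    } catch (err) {
      lastError = err;
      console.warn(`Attempt ${attempt} failed, retrying...`, err);
    }
  }
  throw lastError;
}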

Currently, the @jumploops/magic library only provides simple examples and does not yet support function parameters. For that, you can read the documentation on AI Functions in the Marvin library.
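
Purely as a hypothetical illustration of that limitation, a parameterized AI function might one day look like this (not supported by @jumploops/magic today):

// Hypothetical sketch: @jumploops/magic does not currently support parameters,
// so the transformer would not handle this function today.
// @ts-ignore
// @magic
async function getMountainByRank(rank: number): Promise<Mountain> {
  // Return the mountain with the given height rank
}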

If a large language model can reliably output structured data in the format we require, then we can do a lot of things with it.

Currently, many low-code and RPA (Robotic Process Automation) platforms can already obtain the corresponding JSON Schema objects for their components and tasks.

With a solution like @jumploops/magic's, we can make low-code or RPA platforms smarter, for example by quickly creating form pages or issuing various tasks from natural language.
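
As a hypothetical sketch (the FormField and FormPage interfaces below are invented for this article), the same pattern could ask the model for a form description that a low-code platform then renders:

// Hypothetical example for this article: let the LLM produce a form
// description that a low-code platform could render.
interface FormField {
  label: string;
  type: "text" | "number" | "date";
  required: boolean;
}

interface FormPage {
  title: string;
  fields: FormField[];
}

// @ts-ignore
// @magic
async function createLeaveRequestForm(): Promise<FormPage> {
  // Return a form page for submitting a leave request
}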

Finally, let's summarize how the @jumploops/magic library works: it uses a TypeScript transformer to find functions annotated with // @magic, reads each function's return type, converts that type into a JSON Schema object, and replaces the function body with a call to fetchCompletion that carries the original comment and the schema; at runtime, fetchCompletion calls the Chat Completions API and validates the response against the JSON Schema.

This is the end of today’s article. I hope it will be helpful to you.
