Using Laf to launch the "Art Lion" AI painting mini program within a week

"Art Lion AI Painting" (hereinafter referred to as "Art Lion") is an attempt by our small team. It is positioned as a picture generation tool that everyone can use, full of creativity, and able to understand Chinese and Chinese culture.

After refining the image model and working through the core problems, we began building an MVP (minimum viable product). The MVP had to meet a few requirements:

  • Fast implementation and a short development cycle
  • A lightweight model and a focused product
  • Low cost, with minimal investment of manpower and material resources

These goals were no small challenge for us. Thanks to Laf, it took only a week from development to the launch of the first version; the mini program has streamlined features and clear goals; and the main service cost (Hangzhou + Singapore) stays within 100 yuan/month (with room for further optimization).

Below, we walk through the whole process of building the "Art Lion" MVP and share our thoughts on the Laf platform and its technology.

Original link: https://forum.laf.run/d/984

Technology selection process

Function as a service is how we currently think about and understand the Laf paradigm. It is also the main reason "Art Lion" chose Laf for its server side.

I first came into contact with a "similar" BaaS platform, LeanCloud, in 2015 and used it twice in practice: the first time to store content crawled from Xiaohongshu (Little Red Book), the second time to manage chemical reagent numbers for a friend at the Chinese Academy of Sciences.

LeanCloud leans toward the database-as-a-service paradigm: creating, reading, updating, and deleting data, plus a rich set of data-centric query and aggregation methods, are the core of this paradigm.

There is no good or bad paradigm; it depends on the scenario, complexity, and scale of the problem to be solved. For database-centric systems with simple authentication, such as a CRM or even an ERP, LeanCloud adapts and matches very well.

We also tried and studied Dingou Cloud (now offline), which focused on real-time capabilities, and ByteDance's Light Service, which was online for a while before being taken offline. We also followed tutorials to try Firebase.

In the actual usage scenarios of "Art Lion" and other AI applications, database operations are only one part. Complex content-generation logic, multi-platform resource scheduling, and relatively intensive computation are the focus of such scenarios.

Although the mini program itself has some computing power on the client, for versatility it is better to treat it as an enhanced browser/server (B/S) structure. We need to separate out a large amount of generation logic and computation and move it to the server, where multi-platform coordination is handled; at the same time, to keep the distribution package small, we are extra cautious whenever a third-party SDK has to be bundled.

Therefore, database as a service does not solve the problems we face well, and may not be the preferred paradigm for AIGC applications.

Tencent Cloud Function (Serverless)

Next, we tried Tencent Cloud Functions, which met part of our needs.

However, for an application with relatively intensive individual computations, a large number of resource calls, and unstable traffic, cold starts are terrible. Once the service cools down, it takes a long time to be pulled up again. When concurrency and traffic are low, cloud functions cool down frequently, which hurts the user experience and can be fatal. This is probably a headache for every independent developer going from 0 to 1.

Tencent Cloud Functions are also not deeply integrated with storage buckets and databases. Configuring a cloud database and storage bucket separately adds unnecessary time and expense at the MVP stage.

WeChat cloud development

WeChat cloud development can be activated with one click through the WeChat developer tools. Before "Art Lion" migrated to Laf, we made a lot of attempts in WeChat cloud development.

We abandoned WeChat cloud development mainly for the following reasons:

  1. The Node version of the base library is too old, and support for third-party packages is poor;
  2. The IDE development experience is poor: writing code feels like watching a PPT, and local debugging is not convenient or intuitive enough;
  3. The billing system is odd: although we are long-time Tencent Cloud users, the mini program could not enable services under our existing account; we had to open a new account scoped to the mini program, which makes payment and ongoing management inconvenient;
  4. Poor support for cross-region and cross-service use;
  5. Cold-start delays are still noticeable.

Quickly obtaining the openid through a built-in authentication mechanism is the main advantage WeChat cloud development claims.

After migrating "Art Lion" to Laf, and with the help of community tutorials, the same functionality could be implemented within "3 minutes".

Write a WeChat applet from 0 to connect with Laf cloud development to obtain user openid - Sealos Developer Community
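The rough idea, as a sketch (the tutorial above has the full version; the environment variable names here are our own placeholders): a cloud function receives the temporary code from wx.login() and exchanges it for an openid via WeChat's jscode2session API.

import cloud from '@lafjs/cloud'

export default async function (ctx: FunctionContext) {
  // The mini program calls wx.login() and POSTs the temporary code here
  const { code } = ctx.body

  // Exchange the code for the user's openid via WeChat's jscode2session API
  const res = await cloud.fetch({
    url: 'https://api.weixin.qq.com/sns/jscode2session',
    method: 'get',
    params: {
      appid: process.env.WECHAT_APPID,      // placeholder env variable names
      secret: process.env.WECHAT_SECRET,
      js_code: code,
      grant_type: 'authorization_code'
    }
  })

  // Return only the openid; keep session_key on the server side
  return { openid: res.data.openid }
}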

Supabase, Vercel, Cloudflare

Vercel and Cloudflare are two platforms I particularly like, so I mention them here. The main reason we did not use them is that they cannot be integrated well into the domestic (mainland China) network environment.

In the end we chose Laf. In using Laf, we get a user experience and development efficiency almost equivalent to that of Workers and Edge Functions.

Why choose Laf?

Functions as a service

What attracts us most about Laf is the way applications are built through cloud functions.

To a certain extent, function as a service can accommodate (subsume) the database-as-a-service paradigm.

Traditional server-side development organizes code with Routers and Controllers, which is convenient for management and maintenance. From another perspective, though, it reduces the development efficiency of an MVP product to a certain extent and gets in the way of implementing new ideas.

"Art Lion" decided to add a daily check-in function half an hour before the first version went online.

With cloud functions, we focus more on new features, instead of getting stuck deciding which Router and which Controller the check-in feature should live under. By defining a single task_user_daily_check_in function, we implemented it quickly.
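To give a sense of the shape of such a function, here is a minimal sketch of task_user_daily_check_in, assuming a check_ins collection and an openid sent from the client (the collection and field names are illustrative, not our actual schema):

import cloud from '@lafjs/cloud'

const db = cloud.database()

export default async function (ctx: FunctionContext) {
  const { openid } = ctx.body
  const today = new Date().toISOString().slice(0, 10) // e.g. "2024-01-01"

  // Has this user already checked in today? (collection/field names are illustrative)
  const { total } = await db.collection('check_ins')
    .where({ openid, date: today })
    .count()

  if (total > 0) return { ok: false, msg: 'already checked in today' }

  await db.collection('check_ins').add({ openid, date: today, createdAt: Date.now() })
  return { ok: true }
}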

Cloud functions calling other cloud functions is another feature I like. We have defined several "internal" cloud functions (with all HTTP methods turned off) that serve other cloud functions, for example obtaining a third-party service's access token and persisting it locally. On one hand this decouples and reuses code; on the other hand it greatly improves security.

For example, this function, which obtains a Baidu Cloud access token, can only be called internally once its HTTP request methods are disabled.

import cloud from '@lafjs/cloud'

export default async function (ctx: FunctionContext) {
  // Request an access token from Baidu AI Cloud's OAuth endpoint
  const ret = await cloud.fetch({
    url: "https://aip.baidubce.com/oauth/2.0/token",
    method: "get",
    params: {
      'grant_type': 'client_credentials',
      'client_id': process.env.BAE_CLIENT_ID,
      'client_secret': process.env.BAE_CLIENT_SECRET
    },
  });
  // Persist the token in the shared cache so other cloud functions can reuse it
  cloud.shared.set('bae_access_token', ret.data.access_token)
}
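Other cloud functions can then read the shared token from cloud.shared without knowing how it was obtained. A minimal sketch of such a consumer (the Baidu endpoint below is a placeholder, not the one "Art Lion" actually calls):

import cloud from '@lafjs/cloud'

export default async function (ctx: FunctionContext) {
  // Read the access token persisted by the internal function above
  const token = cloud.shared.get('bae_access_token')
  if (!token) return { ok: false, msg: 'access token not ready yet' }

  // Call a Baidu Cloud API with the shared token
  // (placeholder endpoint; substitute the real API path)
  const res = await cloud.fetch({
    url: 'https://aip.baidubce.com/some/api/path',
    method: 'post',
    params: { access_token: token },
    data: { text: ctx.body?.text }
  })
  return res.data
}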

Of course, once there are many cloud functions, the only way to control their ordering is to rename them, and the directory structure starts to look messy. If cloud functions could be grouped and managed by tags (virtual folders), it would be clearer and more intuitive for me.

Technology stack friendly

Laf uses TypeScript (JavaScript) for development, the same technology stack as web and mini program clients. This greatly reduces the difficulty of full-stack development, and much of the code can be reused on both ends.

Setting aside the debate over "the best programming language in the world" and looking only at team, cost, and development efficiency, a consistent technology stack is a good choice for MVP development.
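As a hypothetical example of that reuse, a tiny prompt-normalizing helper written in TypeScript can be bundled in the mini program and dropped into a Laf cloud function unchanged:

// Hypothetical shared helper (not our actual code): trims the prompt,
// collapses whitespace, and caps its length. The same file can be imported
// by the mini program client and by a Laf cloud function.
export function normalizePrompt(input: string, maxLen = 200): string {
  return input.trim().replace(/\s+/g, ' ').slice(0, maxLen)
}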

Good third-party support

Laf has good support for third-party NPM packages. We have used third-party libraries such as moment to handle time data, and third-party services such as Qiniu to process image resources; a large number of commonly used libraries are already built in.

Like Lego bricks, cloud functions can be continuously composed and enriched from different directions, and projects with many third-party dependencies have lower migration costs.

In actual testing, all kinds of third-party NPM packages download in seconds, version control is precise, and no "magic" (proxying) is required.
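For instance, after installing moment from the Laf console, it can be used in a cloud function right away; a trivial sketch:

import moment from 'moment'

export default async function (ctx: FunctionContext) {
  // Format the current time as Beijing time (UTC+8)
  return { now: moment().utcOffset(8).format('YYYY-MM-DD HH:mm:ss') }
}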

Out-of-the-box database

Laf has good built-in support for MongoDB and provides an out-of-the-box database. Sufficient and easy to use.

In addition to a large number of encapsulated query and aggregation methods, Laf also exposes the native Mongo interface for direct interaction.
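A typical encapsulated query looks like this (assuming a users collection keyed by openid, which is an illustrative schema):

import cloud from '@lafjs/cloud'

const db = cloud.database()

export default async function (ctx: FunctionContext) {
  // Encapsulated query API: look up a user document by openid
  const { data: users } = await db.collection('users')
    .where({ openid: ctx.body?.openid })
    .limit(1)
    .get()

  return users[0] ?? null
}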

And through the native Mongo interface we can, for example, create indexes by interacting with the database directly:

import cloud from "@lafjs/cloud";
const db = cloud.database()

export async function main(ctx: FunctionContext) {
  let res = await cloud.mongo.db.collection('users').createIndex({ "openid": 1})
  const resIndexs = await cloud.mongo.db.collection("users").indexes()
  return resIndexs
};

Convenient and intuitive online debugging

In Laf, most debugging work can be done through the console and logs; more complex cases can be handled with the built-in minimalist, Postman-like interface debugger.

This minimalist yet capable style runs through Laf's features (including the database design mentioned above) and its UI, and is another important reason we favor Laf.

In actual use, apart from the "Storage" feature, we use every functional area and panel frequently. Focused on goals, focused on efficiency, with little redundancy.

Deployed in Hangzhou and Singapore

During the development of "Art Lion", we purchased and deployed Laf on both laf.run (Hangzhou) and laf.dev (Singapore). Integrating multiple services and developing across regions has been a wonderful experience.

For image generation, "Art Lion" uses Tencent Cloud GPUs in Singapore; image storage and optimization are done in China, with Qiniu providing the final persistence and CDN services.

Communication between the Hangzhou and Singapore Laf deployments is smooth, availability is high, and both integrate well with other third-party platforms.
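As a sketch of how the two regions talk to each other: it is simply an HTTPS call from one Laf app's cloud function to the other's endpoint (the app domain and function name below are placeholders, not our real ones):

import cloud from '@lafjs/cloud'

export default async function (ctx: FunctionContext) {
  // A Hangzhou (laf.run) function asking the Singapore (laf.dev) app to generate an image.
  // The domain and function name are placeholders.
  const res = await cloud.fetch({
    url: 'https://your-app-id.laf.dev/generate_image',
    method: 'post',
    data: { prompt: ctx.body?.prompt },
    timeout: 60000
  })
  return res.data
}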

One-click elastic scaling

Thanks to sibling projects such as Sealos and to Laf's deep work in cloud computing, application specifications and elastic scaling can be configured in Laf with one click.

Although countless products have disappeared silently, and many (including "Art Lion") will struggle to ever trigger elastic-scaling conditions unless they are DDoSed, one can always dream.

Growing fast in a short period of time demands professional operations and rapid horizontal scaling, which is hard for a small team to achieve both technically and financially. Laf resolves this worry well and gives us a great sense of security. I will say more on this in the sections below.

Open source, open

Laf, unlike other BaaS I've used, is an open source project.

I had the honor of talking with @maslow before. Boss Ma is a profound and far-sighted leader.

If a project belongs to only one company, its fate is roughly tied to the fate of that company. But if a project belongs to the community, it can keep developing and improving even if the company disappears.

The decision to open source gives Laf a longer life cycle and greater reliability. Users no longer have to worry about the platform suddenly going offline the way "Light Service" did, and can adopt and use the platform and its technology with more confidence.

That said, I would not try to self-deploy Laf unless absolutely necessary, for a bunch of reasons of my own:

  1. Reliability and stability save more money in the end. With self-deployment, reliability is only as good as you make it, and many people's professionalism and operations skills are not enough to do this well;
  2. The project maintainers understand the project better and can solve unexpected problems better and faster;
  3. Some Laf features depend on a cluster, and buying and maintaining a cluster costs more;
  4. Paying for and contributing code to open source projects is a good way to encourage and support the developers.

Good user operations and reasonable pricing

While using Laf I ran into a failure where I could not create an index on a collection. After @白夜 helped me contact @maslow, the problem was solved quickly. Most questions can be answered, and code snippets found, in the documentation and community.

We asked for a "recycle bin" feature, and after a Laf iteration the recycle bin arrived as hoped. As a user, it is a real joy to have your ideas listened to, respected, and implemented.

Earlier, I did not understand Laf's configuration well and had set the specs too high in the console. Boss Ma told me the actual resource consumption and the recommended values. Every exchange lets us feel the charm of a technical master and his consideration for users.

All in all, it is cheap and good, far more cost-effective than a mid-spec cloud server. At the same time, I no longer have to worry about possible traffic spikes; I believe Laf's professional operations will solve whatever problems "Art Lion" runs into in the future.

Extra chapter

We are big fans of Midjourney and SD (Stable Diffusion). The name "Art Lion" was suggested by GPT-4, and the mini program's logo and some image resources were generated by "Art Lion" itself.

In the process of using Mid and SD, we found two problems:

  1. Accurate prompts have to be written in English, and specific English vocabulary (for style, lens, perspective, and so on) has to be memorized and accumulated;
  2. Prompts need to be arranged in a structured way, and the order of keywords also affects image quality; at the same time, the creator needs a certain professional grounding in art and photography.

Very few people around us can use Mid or SD well: one barrier is language, the other is professional knowledge.

We used the DALL·E 2 model as the starting point for training. Yes, the DALL·E 2 with the terrible pictures. But among current painting models, DALL·E is the one that understands natural language best.

Prompts are text, and the tags that describe what appears in a picture are also text. Language and text are the most important medium of information exchange, and large language models are the cornerstone of this AI wave.

Over more than three months, we improved "Art Lion"'s understanding of natural language, especially Chinese, to make it more flexible and better at understanding you. There are still many imperfections, but the process brought us many surprises.

Our team will keep using Laf in future projects. Using Laf is a joy; I hope more and more people get to feel its charm, and that Laf's tomorrow is even better.
