Bookinfo - rewriting the classic Istio demo with CloudWeGo

CloudWeGo Study Group (CSG) is a study group initiated by the CloudWeGo community. It runs 30-day source-code reading and learning activities that help new members integrate into the community, interact with community Committers, and study the major framework projects of CloudWeGo. The fourth phase of CSG, an interpretation of CloudWeGo business practice cases, has now officially launched!

Four live sharing sessions are planned for this event, on the following themes:

  • Bookinfo - rewriting the classic Istio demo with CloudWeGo
  • Open Payment Platform - implementing an API gateway with CloudWeGo
  • EasyNote - getting started with the CloudWeGo ecosystem
  • BookShop - from e-commerce basics to CloudWeGo basics

This article is the content shared by Hu Wen, a ByteDance infrastructure R&D engineer, in the first live broadcast of the fourth phase of CSG.

Replay link: https://meetings.feishu.cn/s/1ipd8ih057fnm?src_type=2

1. Guest introduction

The content of this session is divided into the following four parts:

We use Hertz and Kitex to rewrite the classic Bookinfo project, walk through the engineering design and technology selection, and then show, step by step, how to use full-link swim lanes to implement scenarios such as grayscale release.

  1. Introduction to the Engineering Design
  2. Introduction to the Technology Selection
  3. Introduction to Full-Link Swim Lanes
  4. Proxyless and ServiceMesh

Good evening everyone, and thank you very much for joining my sharing session. The topic of this session is Bookinfo: how to rewrite one of the most classic Istio demo applications on top of the CloudWeGo technology stack. The session is divided into four chapters:

  • The first chapter briefly introduces the engineering design of the project;
  • The second chapter introduces our thinking on technology selection for the project;
  • The third chapter covers the core capability of the project, which is also the core goal of this demo: using the CloudWeGo technology stack to demonstrate how to build a full-link swim lane;
  • The fourth chapter extends into a technical discussion: some thoughts on the Proxyless and ServiceMesh modes.

2. Introduction to engineering design

Project Introduction

First, let me introduce the project. Bookinfo is a classic demo application provided officially by Istio, whose purpose is to demonstrate Istio's various features. Our purpose is similar: we rewrite this demo on top of the CloudWeGo technology stack and, combining the frameworks CloudWeGo provides with the surrounding ecosystem, demonstrate how to meet the needs of microservice scenarios. The project address is below; if you are interested, take a look.

Project address: https://github.com/cloudwego/biz-demo/tree/main/bookinfo

Architecture design

The overall architecture is consistent with the original Bookinfo. In the figure below, from top to bottom, there is first a control plane layer at the top, which directly reuses Istio's control plane, Istiod. From left to right, on the left is an ingress gateway, and on the right is the decomposition of the main Bookinfo microservices. The leftmost service is Productpage, which receives external HTTP request traffic, renders the page, and aggregates the corresponding downstream interfaces. Behind it the application is split into several services: Reviews, which calls the Ratings service to obtain the ratings attached to book reviews, and Details, which displays the detailed information of a book.

The figure also sketches how traffic flows. Since Bookinfo mainly demonstrates full-link swim lanes, there are two kinds of traffic: dyed (marked) traffic and normal traffic.

Take the purple flow carrying a dye marker. It first passes through the ingress gateway, where a layer of traffic dyeing is applied uniformly; all subsequent requests then carry the corresponding baggage. For baggage, we currently use OpenTelemetry's context-propagation capability to pass it along transparently and automatically. Downstream services integrate Kitex's xDS SDK, and when the client routes a request, it routes the traffic precisely to a specific service instance version according to the swim lane routing rules. The purple flow, for example, hits exactly Reviews v2, and continues on to hit Ratings v2.

Traffic without a dye marker is a normal request: under our lane rule definition it round-robins between the v1 and v3 versions of Reviews, and it also calls Details. On the Ratings side, normal traffic always requests the v1 version of the service.

As you can see, each of Productpage and Reviews integrates the xDS library and establishes a long-lived connection with Istiod's xDS server on the control plane. Routing and service governance rules can be configured through Istio and pushed dynamically to the service instances over the xDS channel.

Engineering architecture design

Having just introduced the overall architecture design, this is the directory structure of the whole project. The structure follows the recommendations of the popular golang-standards/project-layout project (a simplified sketch follows the descriptions below).

First, from top to bottom, there is a Makefile. The Makefile is the entry point for building the entire project; you can think of it as a uniform wrapper around all the build commands, so each build step can be executed through a packaged target.

The build directory mainly contains the configuration files for building the project's images, such as the Dockerfile.

cmd can be understood as the main entrance of the whole project; each service uses Cobra to split it into different subcommands.

The conf directory stores the configuration files of the project.

The idl directory stores the Thrift IDL definitions used by Kitex. internal holds the packages for business logic; since we do not want to expose these packages by default, we unify them under internal. Inside internal there are a few simple layers: the handler layer uniformly holds the HTTP handlers related to Hertz; the server layer holds the initialization of each of the four services and the logic related to their startup; and the service layer uniformly holds the encapsulated business logic implementations of the four microservices. kitex_gen contains the code automatically generated from the IDL.

manifest contains the configuration files related to deploying the project. We recommend deploying with Helm charts; a Bookinfo Helm chart is included, so you can deploy the application directly with helm install.

pkg can be understood as holding highly encapsulated, highly reusable packages that can be used externally and are not strongly coupled to the business logic; we put them under pkg uniformly.
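
Putting this together, here is a simplified sketch of the layout described above (directory names as discussed in this section; the actual repository may differ slightly):

```
bookinfo/
├── Makefile        # unified entry point for build commands
├── build/          # image build files, e.g. Dockerfile
├── cmd/            # main entrance; Cobra subcommands per service
├── conf/           # project configuration files
├── idl/            # Thrift IDL definitions for Kitex
├── internal/
│   ├── handler/    # Hertz HTTP handlers
│   ├── server/     # service initialization and startup logic
│   └── service/    # business logic of the four microservices
├── kitex_gen/      # code generated from the IDL
├── manifest/       # deployment manifests (Helm charts)
└── pkg/            # reusable packages decoupled from business logic
```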

OK, that is the approximate project structure of Bookinfo.

Makefile specification design

When it comes to engineering, the Makefile is indispensable. As just mentioned, it packages the build instructions for the entire project; it can also be understood as a set of specifications we abstracted internally. We require the Makefile of each project to contain some necessary targets: it must support code checks, run unit tests, build binaries (including cross-platform compiled binaries), build container images, push built images to a remote registry, and conveniently clean up locally built artifacts.

The above are "hard requirements", that is, every project needs to include them. Beyond that, you can add targets such as end-to-end tests according to your own needs.

That is the specification design of the Makefile, shared briefly. Next is a concrete example: as you can see, there is a target for code checking based on lint, one for unit testing, one for building local binary artifacts, and one for building a containerized image.
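
A minimal sketch of such a Makefile, assuming the target names, tools, and image name used here (the project's actual Makefile may differ):

```makefile
IMAGE ?= bookinfo:latest

.PHONY: lint test build docker clean

lint:        ## static code checks (golangci-lint assumed here)
	golangci-lint run ./...

test:        ## unit tests
	go test -race -cover ./...

build:       ## local binary artifact (cross-compile via GOOS/GOARCH)
	CGO_ENABLED=0 go build -o output/bookinfo ./cmd

docker:      ## containerized image build
	docker build -t $(IMAGE) -f build/Dockerfile .

clean:       ## remove local build artifacts
	rm -rf output/
```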

The other part of the engineering work concerns containers. Since this is a cloud-native era and everyone runs their applications in containers, the container's Dockerfile also has corresponding specification requirements: it should be split into the appropriate stages (a Dockerfile sketch follows the list below).

  1. First, multi-stage builds. The upper part of the Dockerfile is the compilation stage: based on the golang base image, it executes the binary build we just encapsulated in the Makefile. Its task is to compile the project into an executable binary.

  2. Second, the principle of image minimization. The runtime image should be very lightweight: it is not based on a base image containing Golang or other build dependencies, but only on a small, lightweight base image. We copy the artifact built in the first stage into this image and prepare the configuration directories; it only needs to hold the executable binary and a configuration directory. This is the canonical design of a multi-stage build: in production we keep the runtime image as streamlined and lightweight as possible, so the base image should contain as little as possible.

  3. Third, image security. Usually the business does not need privileged execution, so we require switching to a non-root user with the USER instruction, to avoid exposing the root user in the production environment.
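
A minimal two-stage Dockerfile sketch following these three points (base images and paths here are illustrative assumptions, not the project's actual file):

```dockerfile
# Stage 1: compilation stage, based on the golang base image
FROM golang:1.20 AS builder
WORKDIR /src
COPY . .
RUN make build                      # reuse the binary build wrapped in the Makefile

# Stage 2: lightweight runtime image
FROM alpine:3.18
WORKDIR /app
COPY --from=builder /src/output/bookinfo /app/bookinfo
COPY --from=builder /src/conf /app/conf
RUN addgroup -S app && adduser -S app -G app
USER app                            # image security: run as a non-root user
ENTRYPOINT ["/app/bookinfo"]
```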

3. Introduction to the technology selection

In terms of technology selection, here is a brief list of the Bookinfo project's technology stack.

  1. First, Kitex and Hertz, the frameworks our services are built on from top to bottom. Productpage provides HTTP services externally, so Hertz is used to write its server side; it also integrates the Kitex client to call the downstream services on the link, for example when it calls Reviews and Details. Since the rest are internal microservices whose interfaces do not need to be exposed externally, they are uniformly built as RPC services using Kitex.
  2. Second, Istio. As mentioned earlier, the project demonstrates how to build full-link swim lanes under a service mesh in Proxyless mode, so we also rely on Istio's control plane. Its responsibility is mainly to serve as the control plane of the service mesh, interact with the xDS module on the data plane, and dynamically deliver xDS configuration.
  3. Third, Wire. The project relies on Google's Wire for dependency injection.
  4. Fourth, OpenTelemetry. Because we want to build full-link swim lanes, we have a strong dependency on Tracing's context-propagation capability, which transparently passes the swim lane's dye marker along the link; full-link tracing, metrics, and logs are also integrated and demonstrated.
  5. Fifth, Kitex-xDS. This is a core dependency of the project; it allows Kitex to connect directly to the service mesh system in a Proxyless manner.
  6. Finally, a simple UI layer written with React and arco-design. That is a brief introduction to the technology stack.

Framework Selection - Hertz, Kitex

The framework selection was just covered, so I will not repeat it: Productpage uses the Hertz server, and the other services use Kitex.

Dependency Injection (Google Wire)

As just mentioned, we rely on Google's Wire for dependency injection. I believe everyone is familiar with it. Dependency injection in Go comes in two main flavors: one is based on reflection at runtime; the other is based on static code analysis and code generation, generating the injection code in advance. Wire is of the second kind.

We chose Google Wire. Here is a simple handler with two dependencies: the Reviews client and the Details client. The handler does not need to care how these two clients are initialized; it only declares them as its dependencies.
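
A minimal sketch of such a handler; the type and field names here are illustrative stand-ins, not the project's actual code:

```go
package handler

import "context"

// Illustrative stand-ins for the Kitex clients generated from the IDL.
type ReviewsClient interface {
	GetReviews(ctx context.Context, productID int64) (string, error)
}
type DetailsClient interface {
	GetDetails(ctx context.Context, productID int64) (string, error)
}

// ProductHandler declares its dependencies but does not care how
// they are initialized; Wire supplies them.
type ProductHandler struct {
	reviews ReviewsClient
	details DetailsClient
}

// NewProductHandler is the constructor Wire calls with
// already-initialized clients.
func NewProductHandler(r ReviewsClient, d DetailsClient) *ProductHandler {
	return &ProductHandler{reviews: r, details: d}
}
```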

Below is the provider of the Reviews client itself. It does not need to care who will use it; it only needs to know how to initialize itself and how to provide its capability externally. As you can see, the two responsibilities are separated.

The following is an example of the provider of the Reviews client itself (sketched below).
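
A provider sketch under the same illustrative names (in the real project this would construct the generated Kitex client; the endpoint here is a placeholder):

```go
package handler

import (
	"context"
	"fmt"
)

// reviewsClientImpl is an illustrative concrete implementation;
// in the real project this would wrap a generated Kitex client.
type reviewsClientImpl struct{ endpoint string }

func (c *reviewsClientImpl) GetReviews(ctx context.Context, productID int64) (string, error) {
	return fmt.Sprintf("reviews of %d from %s", productID, c.endpoint), nil
}

// ProvideReviewsClient initializes the Reviews client without caring
// who consumes it; Wire wires it into NewProductHandler.
func ProvideReviewsClient() (ReviewsClient, error) {
	return &reviewsClientImpl{endpoint: "reviews:8080"}, nil
}
```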

The injection starts here, using wire.Build; the providers and dependency declarations are declared uniformly in one place.
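
A minimal injector sketch with the same illustrative names (ProvideDetailsClient is assumed analogous to the Reviews provider); the wireinject build tag keeps this file out of normal builds:

```go
//go:build wireinject

package handler

import "github.com/google/wire"

// InitializeProductHandler declares how the handler is assembled.
// The wire tool reads this wire.Build call and generates the real code.
func InitializeProductHandler() (*ProductHandler, error) {
	wire.Build(ProvideReviewsClient, ProvideDetailsClient, NewProductHandler)
	return nil, nil
}
```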

When we run go generate, the wire tool automatically generates the dependency injection code for us: initializing the Reviews client, initializing the Details client, and passing the clients as parameters to the handler. Without dependency injection, we would have to write this code by hand repeatedly. Bookinfo is relatively small, so that might be acceptable; but in a larger project, such hand-written wiring raises ordering problems, and the repetitive work is neither simple nor elegant.
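
The generated wire_gen.go then looks roughly like this (the general shape of Wire's output, not the project's literal file):

```go
// Code generated by Wire. DO NOT EDIT. (illustrative shape)

package handler

func InitializeProductHandler() (*ProductHandler, error) {
	reviewsClient, err := ProvideReviewsClient()
	if err != nil {
		return nil, err
	}
	detailsClient, err := ProvideDetailsClient()
	if err != nil {
		return nil, err
	}
	return NewProductHandler(reviewsClient, detailsClient), nil
}
```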

Observability - OpenTelemetry

The other piece is observability. Our selection is OpenTelemetry. Briefly, OpenTelemetry is an open-source standard protocol for observability: it provides a unified specification and API definition, an SDK implemented against that specification, and a set of data collectors that can, for example, gather data from the cloud or the infrastructure. The data mainly covers Metrics, Traces, and Logs, converged uniformly into its own protocol format. That is a brief introduction to OpenTelemetry.

On the other hand, Kitex and Hertz have natively integrated OpenTelemetry through two repositories, kitex-contrib/obs-opentelemetry and hertz-contrib/obs-opentelemetry; if you are interested, take a look at the concrete implementations. With these libraries, when we use Kitex or Hertz we can easily integrate OpenTelemetry directly into the business.

The integration code, listed briefly below, is actually quite simple: initialize the corresponding OpenTelemetry provider and inject the corresponding tracing suite into the server. This automatically gives us the Tracing, Metrics, and Logs integrations.
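
A sketch of that integration on the Kitex server side, based on the kitex-contrib/obs-opentelemetry examples; the service name, exporter endpoint, and the IDL-generated reviewsservice/ReviewsImpl names are placeholders:

```go
package main

import (
	"context"

	"github.com/cloudwego/kitex/server"
	"github.com/kitex-contrib/obs-opentelemetry/provider"
	"github.com/kitex-contrib/obs-opentelemetry/tracing"
)

func main() {
	serviceName := "reviews" // placeholder

	// Initialize the OpenTelemetry provider (collector endpoint is a placeholder).
	p := provider.NewOpenTelemetryProvider(
		provider.WithServiceName(serviceName),
		provider.WithExportEndpoint("otel-collector:4317"),
		provider.WithInsecure(),
	)
	defer p.Shutdown(context.Background())

	// Inject the tracing suite into the Kitex server.
	svr := reviewsservice.NewServer(
		new(ReviewsImpl),
		server.WithSuite(tracing.NewServerSuite()),
	)
	if err := svr.Run(); err != nil {
		panic(err)
	}
}
```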

That is observability; let me briefly demonstrate the effect after integration.

One thing worth mentioning about Kitex and Hertz: the underlying Tracer implementation is quite elegant, exposing internal details of the framework in the form of stats events. When the upper layer integrates with OpenTelemetry, it can easily expose these to users as span events and present them accordingly, for example the establishment and closing of a client connection and its read and write events. We can observe them in the trace, and generate metrics topologies or call chains from them.

Service Governance - Introduction to xDS

Our service governance selection is based on Istio's service mesh mechanism. To talk about the service mesh, I first need to introduce the concept of xDS. Simply understood, xDS is the collective name for a group of discovery services, hence the "x". LDS, RDS, CDS, EDS, and so on are discovery services for automatically configuring different kinds of resources; all of them can be delivered in real time from the control plane, Istiod, to our data plane.

With the xDS mechanism as a unified abstraction, things become easier. If Kitex wants to connect, one approach is to interact with Istiod directly, based on the xDS protocol, and obtain the desired governance rule configuration in real time.

Service Governance - Kitex xDS

Kitex now natively supports the xDS API, so we can directly enable the corresponding xDS module in code, let the service run in Proxyless mode, and have it managed uniformly by the service mesh. For the specific internal implementation details, there are documents you can look at if you are interested. In terms of usage, you can see the code snippet below. We first initialize the xDS manager, which stores the xDS configuration and watches configuration changes in real time; it has a built-in xDS client responsible for interacting with the control plane Istiod. The Kitex client then integrates an xDS routing module; this lays the groundwork for building the full-link swim lane later, namely how to implement traffic routing based on Proxyless.
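
A sketch of that usage, based on the kitex-contrib/xds README at the time; the target service name and the IDL-generated ratingsservice client package are placeholders:

```go
package main

import (
	"github.com/cloudwego/kitex/client"
	kitexxds "github.com/cloudwego/kitex/pkg/xds"
	"github.com/kitex-contrib/xds"
	"github.com/kitex-contrib/xds/xdssuite"
)

func main() {
	// Initialize the xDS manager: it watches and stores xDS
	// configuration via a built-in client connected to Istiod.
	if err := xds.Init(); err != nil {
		panic(err)
	}

	// The Kitex client integrates the xDS routing middleware and resolver,
	// so lane routing rules delivered from Istiod take effect on each call.
	c, err := ratingsservice.NewClient(
		"ratings", // placeholder service name
		client.WithXDSSuite(kitexxds.ClientSuite{
			RouterMiddleware: xdssuite.NewXDSRouterMiddleware(),
			Resolver:         xdssuite.NewXDSResolver(),
		}),
	)
	if err != nil {
		panic(err)
	}
	_ = c // use the client in business logic
}
```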

Of course, the OpenTelemetry capability must be enabled at the same time.

Based on xDS, Kitex can be managed uniformly by the service mesh, and the benefit of unified management is that we can use Istio's native APIs to define governance rules. For example, following conventional Istio usage, each instance is tagged with a version label, and a DestinationRule is defined to group the instances: the v1 pods go uniformly into the v1 pool, the v2 pods into the v2 pool, and likewise v3 into another pool. It is a simple grouping of service instances.
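
A sketch of such a DestinationRule for Ratings, assuming the conventional version labels (the project's actual manifests may differ):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  subsets:              # one pool per version label
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```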

Service Governance - Defining Traffic Routing Rules

Once the groups exist, traffic routing rules can be defined over them. There are two routing rules here. The first: if the request headers carry the corresponding Baggage, the traffic is routed precisely to the pool of the v2 version of Ratings. If not, it is sent by default to all versions of Ratings in a round-robin. In other words, the traffic routing rules can be defined with Istio's native API; this is the benefit of integrating with xDS.
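
A VirtualService sketch expressing those two rules; the baggage header match below is an illustrative assumption about how the dye marker is encoded:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
    - ratings
  http:
    - match:                         # rule 1: dyed traffic
        - headers:
            baggage:
              regex: ".*env=dev.*"   # illustrative dye marker
      route:
        - destination:
            host: ratings
            subset: v2
    - route:                         # rule 2: default, all versions round-robin
        - destination:
            host: ratings
```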

4. Introduction to full-link swim lanes

With technology selection covered, we come to today's more important topic: the full-link swim lane. The main purpose of this demo is to demonstrate how to realize it.

First, the lane design: it is simply divided into two lanes. In fact, how many lanes there are depends on the lane routing rules; what we demonstrate is a baseline lane and a branch lane. As you can see, in the baseline lane, Productpage v1 round-robins between Reviews v1 and v3, and Ratings requests land on v1. If the traffic carries the corresponding baggage, it is routed precisely to the Reviews v2 version and accordingly enters the Ratings v2 version.

Swim Lane Design - Traditional Traffic Routing Approach

When we receive the requirement for such a swim lane, there are several ways to realize it. The traditional way is the easiest to think of: Istio natively supports matching on certain headers, on which routing rules can be defined.

Some businesses identify traffic directly by business attributes; that is, the business gives you a business attribute key and asks you to configure a routing rule for it, for example a rule matching uid=100.

Once in a while this is no problem, but when the next business side comes along and needs uid=101 configured, the limitations gradually show; it is not universal enough. Because a specific business attribute key is used as the matching rule for traffic routing, that key also has to be handled in the SDK or the business code, which is correspondingly intrusive: the business has to be helped to transparently pass the business attribute key in the request headers. For example, when service a requests service b, uid=100 must be carried in the header so that Istio's routing capability can recognize it. On the other hand, if the business side matches on uid=100 today and changes the rule the next day, the routing rules change frequently, and the VirtualService configured for each service may need frequent updates. Moreover, the number of rules to maintain can explode with the number of services, becoming very bloated. These are the disadvantages of implementing this in the traditional way Istio provides by default.

Swim Lane Design - Traffic Coloring

Here we introduce the concept of unified coloring (dyeing). With coloring, changes to a rule such as the uid=100 just mentioned only need to converge at the ingress gateway. Downstream services do not need to know what the specific business header key is; they only need to know whether the request carries Baggage. The specific coloring conditions do not need to appear in the business code: if today I color by uid and tomorrow by some key in a cookie, the business side does not have to change along with me; it only needs to integrate the Baggage SDK, which propagates Baggage transparently and automatically.

All the changing parts converge on the operations side: only the coloring rules and routing rules need to be configured, with no change to business code. The coloring gateway layer usually has two coloring methods. One is coloring by condition: matching on certain cookies or headers and applying the dye marker. The other is coloring by ratio, for example simply dyeing 10% of the traffic. Compared with the traditional method, this dye-marker approach has many advantages; above all, it is decoupled from the business and the business code. How to implement Baggage, and how Baggage itself is propagated transparently, then becomes a problem for the infrastructure team to consider and solve.

Swim Lane Design - Full-Link Transparent Propagation of the Dye Marker (Baggage)

Our selection is to implement Baggage on top of the distributed tracing mechanism. Baggage is in fact a concept incubated in distributed tracing; it is designed to let the whole link transparently propagate custom business KV attributes, such as an AccountID. So all the business side needs to do is integrate our OpenTelemetry tracing.
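
As a sketch of what gateway-side coloring can look like with the OpenTelemetry Go SDK (the key and value here are illustrative, not the project's actual marker):

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel/baggage"
)

// colorContext attaches an illustrative dye marker to the context.
// With OpenTelemetry propagation configured, it is carried in the
// `baggage` header across every hop of the link automatically.
func colorContext(ctx context.Context) context.Context {
	member, _ := baggage.NewMember("env", "dev") // illustrative marker
	bag, _ := baggage.New(member)
	return baggage.ContextWithBaggage(ctx, bag)
}
```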

Swim Lane Design - Lane Effect (Baseline Lane)

Next is a simple demonstration of Bookinfo. If the ingress traffic carries no uid, the request falls into the baseline lane by default. The baseline lane runs two different versions of the service instance, one that returns book reviews with ratings and one that returns them without. In the other case, the ingress request already carries the corresponding uid identifier and is routed precisely into the branch lane, whose rating service returns a score of 5.

5. Proxyless and ServiceMesh

That was the brief introduction. From the full-link scenario, the project can be extended once more: thinking about the relationship between ServiceMesh and Proxyless, and how Kitex xDS realizes the full-link traffic routing capability. Let me also briefly introduce the internal implementation details.

First, the Kitex Proxyless mode. You can see that the overall architecture is still divided into a control plane and a data plane, but there is no traditional Envoy or pilot-agent sidecar: it is a clean business application integrating the corresponding xDS module. The xDS module contains an xDS client whose job is to communicate with Istiod; based on the ADS protocol it obtains changes to governance rules in real time. The traffic routing rules just configured are one kind of governance rule; dynamic circuit breaking and timeouts may be supported in the future, and those too can be sensed and delivered in real time. We sync them into our xDS resource manager, which maintains the governance rules in real time.

On the request path, a request is sent to one of our upstream services. The upstream service can be divided by version into different instance groups. As shown in the figure, a service is divided into two instance groups by version: the first two endpoints belong to v1 and are placed in subcluster a; the other two instances, c and d, belong to v2 and are placed in subcluster b. The traffic lane just demonstrated is essentially about picking the subcluster: whether the request should hit the v1 pool or the v2 pool.

The general flow is as follows. The Kitex client initiates a request, which passes through the corresponding middleware, starting with the xDS routing middleware. Its job is to apply the routing rules configured on the control plane, which we sync in real time; once it has the governance rules, it knows whether the request should be routed to subcluster a or subcluster b. After it decides, say, on subcluster a, the request passes through a series of further middleware, which can integrate Istio-style governance logic such as circuit breaking, timeouts, and dynamic load balancing.

The resolve middleware then does the following: since a pool has been selected and the pool contains many service instances, it decides which instance the traffic goes to. It performs load balancing and precisely picks one service instance. After resolution we have an exact endpoint and can send the request, which lands precisely on endpoint a. That is a complete Proxyless traffic routing pass realizing a full-link lane, and a brief share of the underlying implementation principle.

Standard Sidecar mode

That was the ServiceMesh form in Proxyless mode. The familiar ServiceMesh form is the standard sidecar: a Proxy, implemented on Envoy, is injected next to each service.

What it does: first, in the container, whether via iptables or eBPF, inbound traffic is dynamically hijacked and redirected to the Proxy. After the Proxy finishes processing, the request is redirected to the business container; outbound traffic sent by the business container is likewise first redirected to the Proxy.

In effect, both inbound and outbound traffic pass through the proxy, which amounts to adding an extensible middleware layer outside the business process and business logic. All the common middle-layer logic, such as metrics, tracing, governance, and security, can converge into this Proxy. This is very similar to the idea of Middleware, except that Middleware lives in the same process as the business logic. The control plane is still the standard Istiod speaking standard xDS. This is the standard sidecar mode that the industry is familiar with and widely uses.

eBPF Kernel Mesh Mode (Cilium)

There is also a third, relatively new mode: a kernel mesh implemented on eBPF. In this so-called kernel mesh, traffic first passes through eBPF programs in the kernel.

Briefly, on what eBPF can do: it lets us attach hooks to certain kernel functions and dynamically insert logic we want to execute. For example, L3/L4 traffic proxying can be implemented with eBPF, so we no longer need Envoy for L3/L4 load balancing; it can be done directly in the kernel. Of course, some things cannot be done in the kernel, and those fall back to a user-space Envoy.

This is the eBPF-based kernel mesh currently implemented by the Cilium community. First, the eBPF-native layer is responsible for L3/L4 traffic, canary or topology-aware routing, multi-cluster support, and parts of security and observability. As you can see, many capabilities can sink into the kernel and no longer need to be done in user-space Envoy. Of course, some capabilities are strongly bound to user space, such as L7 load balancing, which strongly depends on the framework or protocol the user brings; implementing it in the kernel is difficult, or the cost of doing so would be relatively high, so it still tends to fall back to user space, along with L7 rate limiting and TLS termination. This is the kernel-based mesh mode.

The motivation is mainly performance, because the sidecar mode inevitably adds some latency, even though the gap keeps shrinking as Istio and Envoy themselves iterate. So the question arises: what exactly is ServiceMesh? By now there seem to be all kinds of modes and combinations.

This is indeed a question that deserves everyone's consideration.

ServiceMesh - independent of mode, a standard abstraction of inter-service communication infrastructure

My personal thinking is that ServiceMesh does not actually care about the details of how a specific mode is implemented; it is an abstraction of a layer of communication infrastructure. In the earliest days, if our business code wanted observability, traffic routing, security, or resilience, it had to implement them in the business code itself, or in the Middleware of the business framework.

The framework already gives us a layer of decoupling, so the business does not write these concerns into its business functions; they are abstracted into the Middleware the framework provides. As I understand it, this is a separation of business code from infrastructure. The core of Mesh is to sink the general, standard infrastructure capabilities, whether into Middleware, into a sidecar, or even into the kernel mesh. Their purpose and original intention, as I understand it, are the same; these are just different technical means, different selections for different scenarios. The essence and the goal are the same. This is my personal understanding of ServiceMesh.

ServiceMesh - building a service governance platform on unified standards

Based on this understanding, we can do some interesting things. With a unified standard and a unified abstraction, we can unify the control plane and the service governance rule model across heterogeneous frameworks and platforms, whose self-describing governance rule models may differ. The control planes, too, can converge together instead of each framework or system using a different one; otherwise the operations staff and users above will feel the fragmentation, including in how these governance rules are configured.

As for how to deliver rules dynamically, xDS is a relatively popular, standard, general protocol in the community. As just mentioned, heterogeneous data planes come in many kinds; what we have to do is be compatible with them and fit them into the ecology of the whole system: the standard Proxy mode, the Proxyless mode, and the kernel mesh mode just mentioned can all be unified into one service mesh system. The above is my personal understanding of ServiceMesh. Finally, here are some links related to the project; interested students can follow them to GitHub to see the corresponding projects.


Project address

  • GitHub:https://github.com/cloudwego

  • Official website: www.cloudwego.io

  • [CSG Phase 4] Interpretation of CloudWeGo business practice cases begins

Activity link: https://github.com/cloudwego/community/issues/58
