An in-depth analysis of the openGemini time series database: the query engine framework

This article analyzes the openGemini query engine framework at the source-code level and, using an aggregate function as an example, walks through the internal structure of one of its operators (StreamAggregateTransform). It should be helpful to anyone developing new operators.

openGemini query engine framework

The framework of openGemini's query engine is shown in the figure. It is divided into two parts: the query compilation system and the query execution system.

Compilation system

HTTP Interface: listens for client requests. openGemini exposes a RESTful HTTP interface to the outside world. The client sends a query statement such as

SELECT count(water_level) 
FROM h2o_feet
WHERE time > now()-1h

which is converted internally into an HTTP request and sent to the server (ts-sql) via the "/query" endpoint:

curl -i -XPOST "http://localhost:8086/query" --data-urlencode "q=SELECT count(water_level) FROM h2o_feet WHERE time > now()-1h"

The entry point is in open_src/influx/httpd/handler.go, where the h.serveQuery function handles the request. The core code is as follows:

func NewHandler(...) {
	...
	h.AddRoutes([]Route{
		...
		Route{
			"query", // Query serving route.
			"GET", "/query", true, true, h.serveQuery,
		},
		Route{
			"query", // Query serving route.
			"POST", "/query", true, true, h.serveQuery,
		},
		...
	}...)
	...
}

Parser And Compile: performs validity checks, lexical analysis, and syntax analysis on the query statement, then compiles it into an abstract syntax tree (AST). If a newly added function has not been registered, the compileFunction function catches it here and reports an error such as ERR: undefined function xxx. The call stack is as follows:

 0  0x0000000000ecbe20 in github.com/openGemini/openGemini/open_src/influx/query.(*compiledField).compileFunction
    at ./open_src/influx/query/compile.go:454
 1  0x0000000000ecb3cb in github.com/openGemini/openGemini/open_src/influx/query.(*compiledField).compileExpr
    at ./open_src/influx/query/compile.go:369
 2  0x0000000000eca26b in github.com/openGemini/openGemini/open_src/influx/query.(*compiledStatement).compileFields
    at ./open_src/influx/query/compile.go:272
 3  0x0000000000ec9dc7 in github.com/openGemini/openGemini/open_src/influx/query.(*compiledStatement).compile
    at ./open_src/influx/query/compile.go:212
 4  0x0000000000ec9465 in github.com/openGemini/openGemini/open_src/influx/query.Compile
    at ./open_src/influx/query/compile.go:129
 5  0x0000000000edaae5 in github.com/openGemini/openGemini/open_src/influx/query.Prepare
    at ./open_src/influx/query/select.go:125
 6  0x00000000011f28f0 in github.com/openGemini/openGemini/engine/executor.Select
    at ./engine/executor/select.go:49

Logical Plan Builder: generates the corresponding logical plan from the abstract syntax tree, based on the logical operators and logical algebra designed for the time series database. This amounts to assembling independent logical operators (such as LogicalAggregate, LogicalLimit, LogicalJoin) so that together they fulfill what the query statement asks for. Each logical operator corresponds to a physical operator (which can be understood as the functional entity that does the real computation). All aggregate functions, such as count, min, max, and mode, share the logical operator LogicalAggregate; its corresponding physical operator is StreamAggregateTransform, which in turn implements the concrete behavior of count, min, max, and so on.

Query plans are tied to logical operators in engine/executor/logic_plan.go:

func (b *LogicalPlanBuilderImpl) Aggregate() LogicalPlanBuilder {
	last := b.stack.Pop()
	plan := NewLogicalAggregate(last, b.schema) // the LogicalAggregate node is created here
	b.stack.Push(plan)
	return b
}
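
The builder keeps a stack of partially built plan subtrees: Aggregate() pops the current subtree, wraps it in a LogicalAggregate node, and pushes the result back, so later builder calls can stack further operators on top.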

The call stack is as follows:

 0  0x00000000011a8069 in github.com/openGemini/openGemini/engine/executor.NewLogicalAggregate
    at ./engine/executor/logic_plan.go:273
 1  0x00000000011b5d55 in github.com/openGemini/openGemini/engine/executor.(*LogicalPlanBuilderImpl).Aggregate
    at ./engine/executor/logic_plan.go:2235
 2  0x00000000011f57c5 in github.com/openGemini/openGemini/engine/executor.buildAggNode
    at ./engine/executor/select.go:343
 3  0x00000000011f5a35 in github.com/openGemini/openGemini/engine/executor.buildNodes
    at ./engine/executor/select.go:355
 4  0x00000000011f5fa8 in github.com/openGemini/openGemini/engine/executor.buildQueryPlan
    at ./engine/executor/select.go:424
 5  0x00000000011f6134 in github.com/openGemini/openGemini/engine/executor.buildExtendedPlan
    at ./engine/executor/select.go:431
 6  0x00000000011f32b5 in github.com/openGemini/openGemini/engine/executor.(*preparedStatement).BuildLogicalPlan
    at ./engine/executor/select.go:140
 7  0x00000000011f37cc in github.com/openGemini/openGemini/engine/executor.(*preparedStatement).Select
    at ./engine/executor/select.go:173
 8  0x00000000011f2a0f in github.com/openGemini/openGemini/engine/executor.Select
    at ./engine/executor/select.go:63

Logical operators are tied to physical operators in engine/executor/agg_transform.go:

var _ = RegistryTransformCreator(&LogicalAggregate{}, &StreamAggregateTransformCreator{})

This is a registration executed at system startup that binds LogicalAggregate to StreamAggregateTransform. Other logical and physical operators are bound in the same way, for example:

var _ = RegistryTransformCreator(&LogicalGroupBy{}, &GroupByTransformCreator{})
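
The var _ = idiom forces RegistryTransformCreator to run during package initialization, so the logical-to-physical mapping is in place before any query executes.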

Optimizer: a heuristic optimizer that performs rule-based optimization and restructuring of logical plans according to the designed algebraic rules. It is not involved when adding a new aggregation operator, so we skip it here.

Execution system

DAG Builder: generates a physical plan from the optimized logical plan. This can be understood simply as replacing each logical operator with its corresponding physical operator (Transformer). The dependencies between physical operators are represented by a DAG (directed acyclic graph).

Transformers: the collection of all physical execution operators. Each Transformer has inputs and outputs. Its input can be raw data or the output of other Transformers; its output is the Transformer's computation result, which either feeds the next Transformer or is returned to the client as result data.

Pipeline Executor: executes the subtasks in the physical plan in pipeline mode, using the DAG Scheduler to schedule tasks and preserve the dependencies between them.

DAG Scheduler: schedules the physical plan (DAG) using a work-stealing algorithm to maximize task concurrency at any moment.

Traditional execution plans run serially. openGemini introduces the DAG so that the physical operators in a query plan can run in parallel as much as possible, improving query efficiency. The same technique is used in ClickHouse and Flink.

As shown in the figure, a physical plan generated by the DAG Builder consists of six Transformers: A, B, C, D, E, and F. B and C depend on A but not on each other, so once A finishes, B and C can execute in parallel. The DAG Scheduler's job is to schedule the execution order and concurrency of all Transformers efficiently, which is a complex problem. openGemini solves it elegantly with Go's goroutines and channels: it traverses the DAG and creates a one-way channel for each edge as the data transmission channel between the two Transformers on that edge. For example, Transformer A writes its results to the channel and Transformer B reads from it; if the channel is empty, B blocks. In this way the scheduler can start all Transformers concurrently, and downstream Transformers are automatically parked in a blocked state by the channel semantics. As soon as data appears in a channel, the Go runtime immediately schedules the waiting Transformer to run.
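
To make this concrete, here is a minimal, self-contained sketch (not openGemini code) of the pattern: each DAG edge becomes a channel, each Transformer becomes a goroutine, and a downstream goroutine simply blocks on its input channel until data arrives.

package main

import (
	"fmt"
	"sync"
)

func main() {
	aToB := make(chan int) // edge A -> B
	aToC := make(chan int) // edge A -> C
	var wg sync.WaitGroup
	wg.Add(3)

	// Transformer A: produces results and writes them to both outgoing edges.
	go func() {
		defer wg.Done()
		for i := 0; i < 3; i++ {
			aToB <- i
			aToC <- i
		}
		close(aToB)
		close(aToC)
	}()

	// Transformers B and C: blocked until A's output arrives, then run in parallel.
	go func() {
		defer wg.Done()
		for v := range aToB {
			fmt.Println("B got", v)
		}
	}()
	go func() {
		defer wg.Done()
		for v := range aToC {
			fmt.Println("C got", v)
		}
	}()

	wg.Wait()
}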

The physical plan is executed mainly by the PipelineExecutor object. The core code is in engine/executor/pipeline_executor.go, and the core method is Execute():

func (exec *PipelineExecutor) Execute(ctx context.Context) error {
	...
	// Start one goroutine per physical operator; the operators run concurrently
	// and synchronize through the channels that connect them.
	for _, p := range exec.processors {
		go func(processor Processor) {
			err := exec.work(processor)
			if err != nil {
				once.Do(func() {
					processorErr = err
					statistics.ExecutorStat.ExecFailed.Increase()
					exec.Crash() // abort the whole pipeline on the first error
				})
			}
			processor.FinishSpan()
			wg.Done()
		}(p)
	}
	...
}

func (exec *PipelineExecutor) work(processor Processor) error {
	...
	err := processor.Work(exec.context)
	if err != nil {
		...
	}
	return err
}

Processor in this code is an interface, similar to a base class: every Transformer implements its Work method, which is the entry point of each Transformer. Calling processor.Work() is equivalent to starting the Transformer.
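
As a rough sketch of the contract, based only on the methods this article uses (the real interface in engine/executor has more methods, and the Ports return type here is illustrative):

type Processor interface {
	Work(ctx context.Context) error // entry point: runs the Transformer until its inputs are exhausted
	GetInputs() Ports               // incoming edges (ports fed by upstream Transformers)
	GetOutputs() Ports              // outgoing edges (ports feeding downstream Transformers)
}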

The channels between Transformers are wired up when the PipelineExecutor is initialized:

func (exec *PipelineExecutor) init() {
	exec.processors = make(Processors, 0, len(exec.dag.mapVertexToInfo))
	for vertex, info := range exec.dag.mapVertexToInfo {
		// For every incoming edge, connect the upstream Transformer's output
		// port to this Transformer's corresponding input port.
		for i, edge := range info.backwardEdges {
			_ = Connect(edge.from.transform.GetOutputs()[0], edge.to.transform.GetInputs()[i])
		}
		exec.processors = append(exec.processors, vertex.transform)
	}
}

Analysis of the working principle of StreamAggregateTransform

Here we take the aggregation operator as an example and analyze its internal implementation, to help you understand how to develop a physical operator or add new functions inside an existing one.

StreamAggregateTransform carries the concrete implementation of all aggregate functions in openGemini. Let's first look at its structure:

type StreamAggregateTransform struct {
	BaseProcessor

	init                 bool               // whether data processing has started; checked before the Transform exits so the cache can be flushed
	sameInterval         bool
	prevSameInterval     bool
	prevChunkIntervalLen int
	bufChunkNum          int                // maximum cache capacity
	proRes               *processorResults  // cache of the previous round's processing results
	iteratorParam        *IteratorParams    // iterator parameters
	chunkPool            *CircularChunkPool // pool holding result chunks
	newChunk             Chunk              // chunk taken from the pool: trans.newChunk = trans.chunkPool.GetChunk()
	nextChunkCh          chan struct{}      // event notification: tells the data-receiving goroutine to keep buffering data
	reduceChunkCh        chan struct{}      // event notification: tells the data-processing goroutine to take data from the buffer
	bufChunk             []Chunk            // data buffer: pending data read from Inputs is cached here
	Inputs               ChunkPorts         // the Transformer's input ports
	Outputs              ChunkPorts         // the Transformer's output ports
	opt                  *query.ProcessorOptions
	aggLogger            *logger.Logger
	postProcess          func(Chunk)

	span        *tracing.Span // used for statistics, e.g. the Explain command
	computeSpan *tracing.Span // used for statistics, e.g. the Explain command

	errs errno.Errs
}

StreamAggregateTransform inner workings

(Figure: internal workflow of StreamAggregateTransform)

As shown in the figure, StreamAggregateTransform creates a processorResults object during initialization. Depending on the aggregate function, processorResults registers a set of iterators (such as FloatColIntegerIterator and StringColIntegerIterator) and data processing methods (such as FloatCountReduce and IntegerCountMerge). Every iterator has a Next() method, which is the concrete implementation of the aggregate function and calls the pre-registered processing methods.

The entry point of StreamAggregateTransform is the Work() function, which starts the running and reduce goroutines concurrently: running fetches data, while reduce calls the registered iterators to process the buffered data and return the aggregated results.

Source code analysis of how StreamAggregateTransform works

StreamAggregateTransform is initialized in engine/executor/agg_transform.go. The core code is as follows:

func NewStreamAggregateTransform(
	inRowDataType, outRowDataType []hybridqp.RowDataType, exprOpt []hybridqp.ExprOptions, opt *query.ProcessorOptions, isSubQuery bool,
) (*StreamAggregateTransform, error) {
	...
	trans := &StreamAggregateTransform{
		opt:           opt,
		bufChunkNum:   AggBufChunkNum,
		Inputs:        make(ChunkPorts, 0, len(inRowDataType)),
		Outputs:       make(ChunkPorts, 0, len(outRowDataType)),
		bufChunk:      make([]Chunk, 0, AggBufChunkNum), // the capacity of bufChunk is fixed here
		nextChunkCh:   make(chan struct{}),
		reduceChunkCh: make(chan struct{}),
		iteratorParam: &IteratorParams{},
		aggLogger:     logger.NewLogger(errno.ModuleQueryEngine),
		chunkPool:     NewCircularChunkPool(CircularChunkNum, NewChunkBuilder(outRowDataType[0])),
	}
	...
	// trans.proRes is initialized here; NewProcessors() registers all the iterators.
	// A new aggregate function must implement and register its iterator here.
	trans.proRes, err = NewProcessors(inRowDataType[0], outRowDataType[0], exprOpt, opt, isSubQuery)
	...
}

The core code of Work() is as follows:

func (trans *StreamAggregateTransform) Work(ctx context.Context) error {
	...
	for i := range trans.Inputs {
		go trans.runnable(i, ctx, errs) // one running goroutine per Input; in practice there is only one Input, the rest is reserved
	}

	go trans.reduce(ctx, errs) // start one reduce goroutine

	return errs.Err()
}

The core code of the running and reduce goroutines is as follows:


func (trans *StreamAggregateTransform) runnable(in int, ctx context.Context, errs *errno.Errs) {
	...
	trans.running(ctx, in)
}

// the running goroutine
func (trans *StreamAggregateTransform) running(ctx context.Context, in int) {
	for {
		select {
		case c, ok := <-trans.Inputs[in].State:
			...
			trans.init = true
			trans.appendChunk(c) // put the chunk into the buffer
			if len(trans.bufChunk) == trans.bufChunkNum { // when the buffer is full, notify reduce to process the data
				trans.reduceChunkCh <- struct{}{}
			}
			<-trans.nextChunkCh // after buffering a chunk, wait for reduce's signal before receiving more
			...
		case <-ctx.Done():
			return
		}
	}
}

// the reduce goroutine
func (trans *StreamAggregateTransform) reduce(_ context.Context, errs *errno.Errs) {
	...
	reduceStart := func() {
		<-trans.reduceChunkCh
	}

	nextStart := func() {
		trans.nextChunkCh <- struct{}{}
	}

	trans.newChunk = trans.chunkPool.GetChunk() // allocate a chunk from the pool
	for {
		nextStart()   // tell running to keep buffering data
		reduceStart() // consume the event from the channel; blocks here until running signals

		c := trans.nextChunk() // take one chunk (the unit of processing; a chunk holds many rows) from the buffer
		...
		tracing.SpanElapsed(trans.computeSpan, func() {
			trans.compute(c) // process the chunk in compute(), which ultimately dispatches to a concrete iterator
			...
		})
	}
}

The running goroutine reads data from the Input, buffers it, and notifies reduce to process it. Reduce calls the concrete iterators to process the data. The two goroutines signal each other through the two one-way channels nextChunkCh and reduceChunkCh.

Source code analysis of iterator and processing method registration of aggregate functions

NewProcessors() is implemented in engine/executor/call_processor.go and registers the iterators and data processing methods of every aggregate function. A new aggregate function must implement its iterator and be registered here. The core code is as follows:

func NewProcessors(inRowDataType, outRowDataType hybridqp.RowDataType, exprOpt []hybridqp.ExprOptions, opt *query.ProcessorOptions, isSubQuery bool) (*processorResults, error) {
	...
	for i := range exprOpt {
		...
		switch expr := exprOpt[i].Expr.(type) {
		case *influxql.Call:
			...
			name := exprOpt[i].Expr.(*influxql.Call).Name
			switch name {
			case "count":
				// the count aggregate goes through here; its iterators are registered in NewCountRoutineImpl
				routine, err = NewCountRoutineImpl(inRowDataType, outRowDataType, exprOpt[i], isSingleCall)
				coProcessor.AppendRoutine(routine)
			case "sum":
				routine, err = NewSumRoutineImpl(inRowDataType, outRowDataType, exprOpt[i], isSingleCall)
				coProcessor.AppendRoutine(routine)
			case "first":
				...
			case "last":
				...
			case "min":
				...
			case "max":
				...
			case "percentile":
				...
			case "percentile_approx", "ogsketch_percentile", "ogsketch_merge", "ogsketch_insert":
				...
			case "median":
				...
			case "mode":
				...
			case "top":
				...
			case "bottom":
				...
			case "distinct":
				...
			case "difference", "non_negative_difference":
				...
			case "derivative", "non_negative_derivative":
				...
			case "elapsed":
				...
			case "moving_average":
				...
			case "cumulative_sum":
				...
			case "integral":
				...
			case "rate", "irate":
				...
			case "absent":
				...
			case "stddev":
				...
			case "sample":
				...
			default:
				return nil, errors.New("unsupported aggregation operator of call processor")
			}
		...
	}
	...
}
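
This switch is exactly where a new aggregate function would plug in. A minimal sketch for a hypothetical aggregate named count2, assuming a NewCount2RoutineImpl written after the pattern of NewCountRoutineImpl shown next:

case "count2": // hypothetical: a new case added inside the switch above
	routine, err = NewCount2RoutineImpl(inRowDataType, outRowDataType, exprOpt[i], isSingleCall)
	coProcessor.AppendRoutine(routine)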

The implementation of the NewCountRoutineImpl method is in engine/executor/call_processor.go. The core code is as follows:

func NewCountRoutineImpl(inRowDataType, outRowDataType hybridqp.RowDataType, opt hybridqp.ExprOptions, isSingleCall bool) (Routine, error) {
	...
	dataType := inRowDataType.Field(inOrdinal).Expr.(*influxql.VarRef).Type
	switch dataType {
	case influxql.Integer: // the aggregated field is an integer
		return NewRoutineImpl(
			NewIntegerColIntegerIterator(IntegerCountReduce, IntegerCountMerge, isSingleCall, inOrdinal, outOrdinal, nil, nil), inOrdinal, outOrdinal), nil
	case influxql.Float: // the aggregated field is a float
		return NewRoutineImpl(
			NewFloatColIntegerIterator(FloatCountReduce, IntegerCountMerge, isSingleCall, inOrdinal, outOrdinal, nil, nil), inOrdinal, outOrdinal), nil
	case influxql.String: // the aggregated field is a string
		return NewRoutineImpl(
			NewStringColIntegerIterator(StringCountReduce, IntegerCountMerge, isSingleCall, inOrdinal, outOrdinal, nil, nil), inOrdinal, outOrdinal), nil
	case influxql.Boolean: // the aggregated field is a boolean
		return NewRoutineImpl(
			NewBooleanColIntegerIterator(BooleanCountReduce, IntegerCountMerge, isSingleCall, inOrdinal, outOrdinal, nil, nil), inOrdinal, outOrdinal), nil
	}
}
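
Note the iterator naming convention: FloatColIntegerIterator reads a float input column and writes an integer output column, since count yields an integer whatever the input field type. Accordingly, the Reduce function matches the input type (FloatCountReduce) while the Merge function matches the output type (IntegerCountMerge).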

Because different data types require different processing, a matching iterator must be chosen for each. For example, given the following data:

h2o_feet,location=coyote_creek water_level=8.120,description="between 6 and 9 feet" 1566000000000000000
h2o_feet,location=coyote_creek water_level=8.005,description="between 6 and 9 feet" 1566000360000000000
h2o_feet,location=coyote_creek water_level=7.887,description="between 6 and 9 feet" 1566000720000000000
h2o_feet,location=coyote_creek water_level=7.762,description="between 6 and 9 feet" 1566001080000000000
h2o_feet,location=coyote_creek water_level=7.635,description="between 6 and 9 feet" 1566001440000000000

and the following queries:

1. SELECT count(water_level) FROM h2o_feet
2. SELECT count(description) FROM h2o_feet

Since water_level is a float, count(water_level) uses FloatColIntegerIterator; since description is a string, count(description) uses StringColIntegerIterator.

Every iterator must provide a Reduce and a Merge method. What exactly do they do? Take FloatCountReduce and IntegerCountMerge as examples:

/*
c: the chunk containing the data to process
ordinal: selects the data column within the chunk
start: the start position of a data group within the chunk, e.g. when the query groups by time before aggregating
end: the end position of a data group within the chunk. If the query has no grouping, all data in the chunk belongs to one group
*/
func FloatCountReduce(c Chunk, ordinal, start, end int) (int, int64, bool) {
	var count int64
	// ordinal selects which column of the chunk to compute on. If the column
	// contains no null values, count is simply end-start.
	if c.Column(ordinal).NilCount() == 0 {
		// fast path
		count = int64(end - start)
		return start, count, count == 0
	}

	// If the column contains null values, count only the non-null entries.
	vs, ve := c.Column(ordinal).GetRangeValueIndexV2(start, end)
	count = int64(ve - vs)
	return start, count, count == 0
}

// The merge method accumulates: it adds the count from the previous chunk to the
// count computed for the current one. A chunk holds a fixed amount of data, so the
// data to aggregate may be split across many chunks.
func IntegerCountMerge(prevPoint, currPoint *IntegerPoint) {
	...
	prevPoint.value += currPoint.value
}
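
To see how the two cooperate, here is a worked example, assuming the five rows above arrive split across two chunks with no GROUP BY, so all rows form a single group:

// chunk 1 holds rows 0..2, chunk 2 holds rows 3..4 (hypothetical split)
// FloatCountReduce(chunk1, 0, 0, 3) -> count = 3, cached in prevPoint
// FloatCountReduce(chunk2, 0, 0, 2) -> count = 2, held in currPoint
// IntegerCountMerge(prevPoint, currPoint) -> prevPoint.value = 3 + 2 = 5, the final count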

Chunk data structure analysis

A Chunk can be viewed as a table. It is the smallest unit of data transmission and processing between Transformers, and an important carrier of vectorization.

The Chunk structure is defined as follows:

type ChunkImpl struct {
	rowDataType   hybridqp.RowDataType // schema of the rows (column names and types)
	name          string               // measurement name, e.g. "h2o_feet"
	tags          []ChunkTags          // tag set of each series in the chunk
	tagIndex      []int                // row index at which each series starts
	time          []int64              // timestamp of each row
	intervalIndex []int                // row index at which each time window (interval) starts
	columns       []Column             // field columns (the actual data)
	dims          []Column
	*record.Record
}

For example, using the same data:

h2o_feet,location=coyote_creek water_level=8.120,description="between 6 and 9 feet" 1566000000000000000
h2o_feet,location=coyote_creek water_level=8.005,description="between 6 and 9 feet" 1566000360000000000
h2o_feet,location=coyote_creek water_level=7.887,description="between 6 and 9 feet" 1566000720000000000
h2o_feet,location=coyote_creek water_level=7.762,description="between 6 and 9 feet" 1566001080000000000
h2o_feet,location=coyote_creek water_level=7.635,description="between 6 and 9 feet" 1566001440000000000

run the following queries:

> SHOW TAG KEYS
name: h2o_feet
tagKey
------
location

> SELECT count(water_level) FROM h2o_feet WHERE time >= 1566000000000000000 AND time <= 1566001440000000000 GROUP By time(12m)
name: h2o_feet
time                count
----                -----
1566000000000000000 2
1566000720000000000 2
1566001440000000000 1

Let's take one of the Chunks received by StreamAggregateTransform and look at its internal composition.
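
The original article shows this as a figure. As a rough textual reconstruction, assuming all five rows arrive in a single chunk for the GROUP BY time(12m) query above, its fields would look roughly like this (illustrative values only):

// Hypothetical contents of the chunk:
ChunkImpl{
	name:     "h2o_feet",
	tags:     []ChunkTags{ /* location=coyote_creek */ },
	tagIndex: []int{0}, // a single series starting at row 0
	time: []int64{
		1566000000000000000, 1566000360000000000, // window 1: rows 0-1
		1566000720000000000, 1566001080000000000, // window 2: rows 2-3
		1566001440000000000,                      // window 3: row 4
	},
	intervalIndex: []int{0, 2, 4}, // three 12m windows start at rows 0, 2, 4
	columns:       []Column{ /* one float column: 8.120, 8.005, 7.887, 7.762, 7.635 */ },
}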

Iterator Next method analysis

Taking FloatColIntegerIterator as an example, in engine/executor/agg_iterator.gen.go, the core code of the Next() method is as follows:

func (r *FloatColIntegerIterator) Next(ie *IteratorEndpoint, p *IteratorParams) {
	// inChunk holds the pending data, i.e. the Chunk received by StreamAggregateTransform
	// is finally processed here; outChunk holds the result data.
	inChunk, outChunk := ie.InputPoint.Chunk, ie.OutputPoint.Chunk
	// inOrdinal is a position index, initially 0, used to walk the chunk's columns
	// (in our example the chunk has only one column). If the column is empty,
	// the corresponding output positions must be filled with nil.
	if inChunk.Column(r.inOrdinal).IsEmpty() && r.prevPoint.isNil {
		var addIntervalLen int
		if p.sameInterval {
			addIntervalLen = inChunk.IntervalLen() - 1
		} else {
			addIntervalLen = inChunk.IntervalLen()
		}
		if addIntervalLen > 0 {
			outChunk.Column(r.outOrdinal).AppendManyNil(addIntervalLen)
		}
		return
	}

	var end int
	firstIndex, lastIndex := 0, len(inChunk.IntervalIndex())-1
	// [start,end) delimits one group of data, a subset of the chunk's rows.
	for i, start := range inChunk.IntervalIndex() {
		if i < lastIndex {
			end = inChunk.IntervalIndex()[i+1]
		} else {
			end = inChunk.NumberOfRows()
		}
		// fn is the FloatCountReduce method
		index, value, isNil := r.fn(inChunk, r.inOrdinal, start, end)
		if isNil && ((i > firstIndex && i < lastIndex) ||
			(firstIndex == lastIndex && r.prevPoint.isNil && !p.sameInterval) ||
			(firstIndex != lastIndex && i == firstIndex && r.prevPoint.isNil) ||
			(firstIndex != lastIndex && i == lastIndex && !p.sameInterval)) {
			outChunk.Column(r.outOrdinal).AppendNil()
			continue
		}
		// Three cases are handled here:
		// 1. The group spans two chunks and the first rows of this chunk belong to the
		//    same group as the last rows of the previous chunk: the current result must
		//    be merged with the previous one via fv (here IntegerCountMerge).
		// 2. The group spans two chunks and the last rows of this chunk belong to the
		//    same group as the first rows of the next chunk: the current result must be cached.
		// 3. The group does not span chunks: the result is final.
		if i == firstIndex && !r.prevPoint.isNil {
			r.processFirstWindow(inChunk, outChunk, isNil, p.sameInterval,
				firstIndex == lastIndex, index, value)
		} else if i == lastIndex && p.sameInterval {
			r.processLastWindow(inChunk, index, isNil, value)
		} else if !isNil {
			r.processMiddleWindow(inChunk, outChunk, index, value)
		}
	}
}

Summary

This article walked through part of openGemini's core code, analyzed the query engine framework in depth, and explained in detail how StreamAggregateTransform works. I hope it helps you read the source code and use the database.


openGemini official website: http://www.openGemini.org

openGemini open source address: https://github.com/openGemini

openGemini public account: welcome to follow us~

We sincerely invite you to join the openGemini community to build, govern, and share its future together!
