Golang interview notes ("eight-part essay") compilation

Table of contents

Why was the Go language invented?

Background of the Go language: Go was created by three engineers at Google. In their work there they had to handle enormous numbers of user requests under very high concurrency, but the C++ they were using at the time felt complicated and unwieldy; after decades of evolution, C++ had grown increasingly bloated. So they set out to design a new language that would solve the main problems they ran into in their daily work, and the Go language was born.

  • Ken Thompson: The most senior and most accomplished of the three. He invented the B language (the predecessor of C) and co-created the Unix operating system, and later won the Turing Award.
  • Rob Pike: The lead of the Go project, responsible for pushing the whole effort forward. He co-invented the UTF-8 encoding with Ken Thompson, and his wife designed the Go gopher mascot.
  • Robert Griesemer: He worked on the V8 JavaScript engine and on the Java HotSpot virtual machine.

Accordingly, Go's typical application scenario mirrors Google's own: server-side applications that must handle high concurrency with low latency.

One principle behind the design of the language is simplicity: not just simple syntax, but a simple and efficient experience at every stage of use. This is the design philosophy of the Go language.

  • Typical application scenarios: high concurrency and low latency on the server side
  • Design philosophy: simplicity
  • Go language features:
    • Strongly typed, statically typed language (var age int)
    • Simple syntax
    • Garbage collection
    • Fast compilation (go build)
    • Simple dependency management (go mod)
    • Excellent concurrent processing capabilities (goroutine)

Process, thread, coroutine

  1. Process:
    • A process is the smallest unit for resource allocation by the operating system and has independent memory space and system resources.
    • Processes are usually independent of each other and cannot directly access each other's memory.
    • Process startup, termination and switching overhead are relatively large.
    • Example: In an operating system, each running application is usually an independent process. For example, browsers, music players, and text editors can all run as independent processes.
  2. Thread:
    • Threads are execution units within a process and share the memory space and system resources of the process.
    • Threads can communicate with each other relatively easily because they share the same memory.
    • Thread starting, termination and switching overhead are small.
    • For example: In a multi-threaded program, there can be one thread for graphical interface processing and another for user file downloading. They can share data without complex communication mechanisms.
  3. Coroutine:
    • Coroutines are lightweight threads that implement cooperative multitasking at the code level.
    • Coroutines are usually user-level rather than operating system-level. They are controlled by developers and are more suitable for handling I/O-intensive tasks and high concurrency.
    • Coroutines can be paused and resumed at any time without thread context switching.
    • For example: For example, a web crawler can use coroutines to handle the download of multiple pages so that it can switch to other tasks while waiting for network responses without wasting time waiting.
  • The differences:
    • A process is an operating-system-level unit with its own memory space, and is expensive to start and switch. A thread is an execution unit within a process that shares the process's memory, with low startup and switching overhead. A coroutine is an even more lightweight unit controlled by the developer, well suited to high-concurrency and I/O-intensive tasks.
    • Processes usually cannot share data directly, while threads can; coroutines can be suspended and resumed at any time without explicit data sharing and synchronization.
    • Processes are typically used for parallel computing on multi-core CPUs, threads for multitasking within a program, and coroutines for asynchronous programming and high-concurrency applications.

Goroutine scheduling principle

The essence of goroutine scheduling is to put Goroutine on the CPU for execution according to a certain algorithm.
In Go, coroutines (goroutines) are lightweight user-level threads that do not map one-to-one onto operating system threads. Go uses a dedicated scheduler that is responsible for managing goroutines and assigning them to operating system threads for concurrent execution.

  1. Goroutines are Go's units of concurrent execution. They are user-level threads and are very lightweight.
  2. The Go scheduler is responsible for managing goroutine execution. It multiplexes many goroutines onto a small number of operating system threads (roughly GOMAXPROCS of them actively running), which are the execution units that actually run on the CPU.
  3. An operating system thread (OS thread) is the underlying OS-level execution unit. The Go scheduler schedules goroutines onto OS threads, and the OS runs those threads on the CPU, so the goroutines get executed.

Supplement: Go's scheduler has automatic scalability and can create and destroy operating system threads as needed to adapt to the concurrency needs of the application. This allows Go to efficiently handle a large number of coroutines without wasting too many system resources.

Summary: Go's goroutines and scheduler make concurrent programming more approachable and more efficient, because they hide much of the complexity of OS-level thread management and let developers focus on application logic.

Goroutine switching timing

  1. When a select operation blocks
  2. When blocked on I/O
  3. When blocked on a channel
  4. When the programmer explicitly yields (e.g. runtime.Gosched())
  5. When waiting for a lock
  6. When making a system call

Context structural principle

Context is a common concurrency-control technique in Golang application development. It can control a group of goroutines organized as a tree, where the goroutines share contexts derived from the same root. Context is concurrency-safe and is mainly used to coordinate collaboration and cancellation among multiple goroutines.

type Context interface {
	Deadline() (deadline time.Time, ok bool)
	Done() <-chan struct{}
	Err() error
	Value(key interface{}) interface{}
}
  • "Deadline" method: returns the configured deadline. The return value deadline is the time at which the Context will automatically initiate cancellation; the return value ok reports whether a deadline was set.
  • "Done" method: Returns a read-only channel of type struct{}. If this chan can be read, it means that the cancellation signal has been sent, and you can perform cleanup operations, then exit the coroutine and release resources.
  • "Err" method: Returns the reason why the Context was canceled.
  • "Value" method: Get the value bound on the Context, which is a key-value pair. Get the corresponding value through key.

Several objects that implement the context interface:

  • context.Background() is similar to context.TODO(). The returned Context generally serves as the root object; it cannot be cancelled and carries no values. To actually use Context features, you derive new contexts from it.
  • context.WithCancel() returns a child context together with a cancel function. The child context exits in two situations: when cancel is called, or when the parent context passed as a parameter exits, in which case the child and all contexts derived from it exit too.
  • context.WithTimeout() specifies a timeout after which the child context exits. So the child context has three ways to exit: the parent exits, the timeout fires, or cancel is called explicitly.
  • context.WithDeadline() is similar to context.WithTimeout(), but its parameter specifies an absolute expiration time rather than a duration.
  • context.WithValue() returns a child context that carries a key-value pair.

How Context works

The Context mechanism makes it possible to pass control signals and request-scoped values between goroutines and to cancel operations safely. This is useful for managing concurrent operations and the lifecycle of resources. Context is built on the Go language's concurrency features and goroutines, together with scheduler support.

  1. Create Context: You can use functions such as context.WithCancel, context.WithTimeout, context.WithDeadline, etc. to create a new context.Context object. This new Context contains a cancellation function, deadline, and other information that controls the behavior of the coroutine.
  2. Pass Context: You can pass this Context object to your functions or coroutines so that they can access the Context when needed. Typically, you'll accept a Context object as a function parameter.
  3. Cancel Context: If you need to cancel an operation under certain circumstances, you can call the Context's cancel function. This sends a cancellation signal, notifying all goroutines using this Context to stop their work. Internally this is implemented with a channel.
  4. Listening to Context: Goroutines usually listen for the Context's cancellation signal in a loop. Once the cancellation signal is received, they stop their work and exit.
  5. Chained Context: You can use the context.WithValue function to add additional key-value pairs to the Context, passing request-scoped values between goroutines.
  6. Deadlines and timeouts: If you create a Context using context.WithTimeout or context.WithDeadline, it will automatically cancel when the set deadline or timeout expires. This is useful for performing timeout operations.

Context usage scenarios

  1. RPC call
  2. Pipeline
  3. Timeout request
  4. Passing data between the request handlers of an HTTP server
  5. Cancel operations: You can use Context to cancel long-running operations. For example, in HTTP request processing, if the client interrupts the connection or times out, you can cancel related operations.
  6. Deadline and timeout: Context can set deadline or timeout to limit the execution time of the operation. This is useful for performing time-bound tasks, such as waiting for an operation to complete within a certain amount of time.
  7. Concurrency control: Context allows control of concurrency, and a Context can be shared among multiple coroutines to coordinate their behavior under certain conditions. For example, you can use a Context to limit the number of simultaneous database connections or HTTP requests.
  8. Request-scoped value passing: Request-scoped values such as user authentication information, language preferences, etc. can be stored in the Context. These values can be passed to different functions and services throughout request processing.
  9. Propagate tracing information: In a microservices architecture, you can use Context to pass tracing information, log identifiers, and other contextual information to track requests as they pass between different services.
  10. Prevent leaks: Context is inheritable, and child coroutines can be derived from the context of the parent coroutine. This is useful to prevent resource leaks, because when the parent coroutine cancels, all child coroutines will also be cancelled.
  11. Testing and mocking: When writing unit tests or mocks, you can use a Context to simulate cancellation, timeouts, or other conditions to test whether your code handles these situations correctly.
  12. HTTP requests and handling: When handling HTTP requests, it is common to associate a Context with each request to handle request cancellations, timeouts, and pass request scope information.

Golang’s memory allocation mechanism


  1. Heap and stack: In Go, memory is divided into two main regions: the heap and the stack.
    * Heap: The heap stores dynamically allocated memory. In Go, heap memory is reclaimed automatically by the garbage collector rather than freed by hand; values created with new or make may end up allocated on the heap.
    * Stack: The stack stores a function's local variables and call information. Stack memory is managed automatically by the compiler and does not need to be released manually.
  2. Automatic memory management: A hallmark of the Go language is automatic memory management: you do not need to free most memory by hand. When a variable or data structure is no longer referenced, Go's garbage collector reclaims it, preventing memory leaks.

race problem

Race conditions are problems that can occur when multiple coroutines access shared data.

When using go build, go run, go test commands, add the -race flag to check whether there is resource competition in the code.

Race Condition: Race condition is a problem in concurrent programming that occurs when multiple goroutines try to access and modify shared data at the same time. This can lead to unpredictable behavior and data inconsistencies. Race conditions often exist because of a lack of proper synchronization mechanisms.

Example: Suppose two goroutines try to increment a variable at the same time. In the example below, both goroutines modify the count variable concurrently. Without proper synchronization the final result is nondeterministic and may well not be the expected 2000. This is a race condition.

package main

import (
	"fmt"
	"sync"
)

var count int
var wg sync.WaitGroup

func increment() {
	for i := 0; i < 1000; i++ {
		count++
	}
	wg.Done()
}

func main() {
	wg.Add(2)
	go increment()
	go increment()
	wg.Wait()
	fmt.Println("Count:", count)
}

Solving race conditions:

  1. Use mutex locks (Mutex): Mutex locks are the most commonly used method to solve race conditions. It allows only one coroutine to access the critical section (shared data) at the same time, thus avoiding race conditions. In Go, you can use sync.Mutex to create a mutex.

    var mutex sync.Mutex
    
    func increment() {
        mutex.Lock() // only one goroutine at a time may enter the critical section
        count++
        mutex.Unlock()
    }
    
  2. Using channels: Channels can be used to safely transfer data between coroutines to avoid race conditions. By sending and receiving data, you ensure that only one coroutine can modify the shared data.

    var ch = make(chan int, 1) // buffered channel of capacity 1, used as a semaphore
    
    func increment() {
        ch <- 1 // acquire the slot; blocks while another goroutine holds it
        count++
        <-ch // release the slot
    }
    
  3. Using atomic operations: Go provides support for atomic operations, such as the functions in the sync/atomic package, which can perform atomic operations without the need for mutex locks.

    import "sync/atomic"
    
    var count int32
    
    func increment() {
        atomic.AddInt32(&count, 1)
    }
    

memory escape

A memory escape is when the compiler allocates a variable on the heap instead of the stack.

You can use the option -gcflags=-m when compiling to check the situation of variable escape

Memory Escape: Memory escape occurs when the compiler cannot determine the lifetime of a variable, and it may be allocated on the heap instead of the stack. This can cause performance issues because memory allocation and deallocation on the heap is expensive.

Example: In the following example, the lifetime of the variable x extends beyond the function's scope, so the compiler allocates it on the heap. Here x is heap-allocated because its lifetime cannot be determined at compile time. This leads to extra heap allocations and additional garbage-collection work, which can hurt performance.

package main

func create() *int {
	x := 10
	return &x // returning the address of a local variable forces x to escape
}

func main() {
	y := create()
	// The compiler cannot determine when x's lifetime ends,
	// so the data y points to is allocated on the heap.
	_ = y
}

To solve the memory escape problem:

  1. Avoid using pointers: If possible, avoid using pointers. In Go, local variables are usually allocated on the stack instead of the heap, so try not to return pointers to local variables to avoid memory escapes.
  2. Use value semantics: Use value semantics instead of reference semantics. In Go, value types are usually allocated on the stack, while reference types (e.g. slices, maps) may be allocated on the heap. Use value semantics whenever possible.
  3. Avoid unnecessary pointers: If an object does not need to be accessed outside the function, do not encapsulate it in a pointer. In Go, this will reduce the chance of memory escapes.
  4. Use appropriate data structures: Use appropriate data structures and algorithms to avoid memory escapes. Choose data structures and algorithms to minimize heap allocations.

golang memory alignment mechanism

To let the CPU access each field faster, the Go compiler aligns struct data for you. Data alignment means that a value's memory address is an integer multiple of its size (in bytes), so that the CPU can read it from memory in a single access. The compiler achieves alignment by inserting padding bytes between struct fields.

Sizes and alignment requirements may differ across hardware platforms. Each platform's compiler has its own default "alignment coefficient": typically 4 on 32-bit systems and 8 on 64-bit systems. Different types may also have different alignment coefficients; Go's unsafe.Alignof function returns the alignment coefficient of a given type. Alignment coefficients are always powers of two (2^n) and do not exceed 8.

Alignment principles:

  • The offset of a struct member must be an integer multiple of the minimum of the member's size and the alignment coefficient.
  • The size of the entire struct must be an integer multiple of the minimum of the largest field size and the alignment coefficient.
  • struct{} (the empty struct) adds no alignment requirement of its own and does not change the alignment of other fields. However, when it is the last field of a struct, padding must be added after it (based on the minimum of the largest field size and the compiler's default alignment coefficient), so that a pointer to that field never points past the end of the struct.
type T2 struct {
	i8  int8
	i64 int64
	i32 int32
}

type T3 struct {
	i8  int8
	i32 int32
	i64 int64
}


package main

import (
	"fmt"
	"unsafe"
)

type C struct {
	a struct{}
	b int64
	c int64
}

type D struct {
	a int64
	b struct{}
	c int64
}

type E struct {
	a int64
	b int64
	c struct{} // empty struct as the last field requires tail padding
}

type F struct {
	a int32
	b int32
	c struct{}
}

func main() {
	// Use Go's unsafe.Sizeof function to obtain the size of each type
	fmt.Println(unsafe.Sizeof(C{})) // 16
	fmt.Println(unsafe.Sizeof(D{})) // 16
	fmt.Println(unsafe.Sizeof(E{})) // 24
	fmt.Println(unsafe.Sizeof(F{})) // 12
}

What is the difference between new and make in golang?

In Go language, new and make are two built-in functions for allocating memory and creating different types of objects. Their main difference is their purpose and return value type.

  • new: Use new to create a pointer to a zero value, for value types.
    • Pointers used to create value types such as integers, floating point numbers, structures, etc. It assigns a zero value and returns a pointer to the newly allocated memory.
    • The return value of new is a pointer to zero value, which does not initialize the memory, so you get a zero-valued object.
      Example:
      var i *int
      i = new(int) // create a pointer to an int; the value i points to is 0
      
  • make: Use make to create initialized reference type objects, such as slices, maps, and channels.
    • Used to create instances of reference types such as slices, maps, channels, etc. It allocates and initializes memory and returns the initialized instance.
    • The return value of make is an initialized reference type object.
      Example:
      slice := make([]int, 5) // create a slice of 5 ints, each element 0
      

The implementation principle of golang's slice

Slice in Go is a flexible data structure built on top of arrays. A slice is essentially a small struct that wraps an array: it references a portion of the array and can be resized dynamically. The main implementation points of slices are:

  1. Underlying array: A slice contains a pointer to the underlying array, as well as the length and capacity of the slice. The underlying array is the data storage source of the slice, and the operation of slicing is actually an operation on the underlying array.
  2. Length and Capacity: The length of a slice is the number of elements it contains, while the capacity is the number of elements that can be contained in the underlying array. When a slice is created, its capacity is usually the same as its length, but the capacity may change as the slice is manipulated.
  3. Dynamic resizing: Slices can be resized dynamically. When you append elements to a slice, if there is insufficient capacity, Go creates a new underlying array for the slice, copies the old data into the new array, and adds the new elements. This enables the slice to grow automatically.
  4. Reference subarray: A slice can reference any part of the underlying array by setting the slice's starting index and length. This allows you to create slice views without duplicating data.
  5. Zero-valued slice: An uninitialized slice has the value nil and does not reference any underlying array.

Overall, slicing is a very convenient data structure that allows you to efficiently handle variable-length data while avoiding manual memory management. Slicing is a very powerful tool in situations where dynamic resizing is required because it automatically handles the copying and management of the underlying array.

type slice struct {
	array unsafe.Pointer
	len   int
	cap   int
}
  • slice: 24 bytes
  • array: pointer to the underlying array, occupying 8 bytes
  • len: length of slice, occupies 8 bytes
  • cap: Capacity of the slice, cap is always greater than or equal to len, occupying 8 bytes
  • Initializing a slice calls runtime.makeslice. The makeslice function's main job is to compute the memory the slice needs and then call mallocgc to allocate it. Required memory = element size × slice capacity.

The difference between array and slice in golang

  1. Different lengths
    • An array's length must be specified at initialization and is fixed.
    • A slice's length is not fixed; elements can be appended, which may grow the slice's capacity.
  2. Different behavior as function parameters
    • Arrays are value types. Assigning one array to another performs a deep copy; passing an array to a function copies the entire array's data, which costs extra memory, and modifications to elements inside the function do not affect the original array.
    • Slices are reference-like types. Assigning one slice to another performs a shallow copy; passing a slice to a function copies only the slice header (pointer, len, cap), so the underlying array is shared and no extra memory is used, and modifying elements inside the function does modify the original data.
  3. Length computation
    • For both arrays and slices, len() is O(1): an array's length is part of its type and known at compile time, and a slice stores its length in the len field of its header, so no traversal is needed in either case.

Golang’s map implementation principle

A map in golang is a pointer (8 bytes) that points to an hmap structure. Under the hood, the map is a hash table that resolves collisions by chaining: each bucket holds several key-value pairs and links to overflow buckets.
Features of map:

  1. Keys cannot be repeated
  2. Keys must be hashable/comparable (e.g. int, bool, float, string, arrays; slices, maps and functions cannot be used as keys)
  3. Unordered

Implementation details:

  1. Underlying Hash Table: Map in Go uses a hash table to store key-value pairs. A hash table is an array, and each element is called a bucket. Each bucket can store multiple key-value pairs.
  2. Hash function: map uses a hash function to map keys to indexes in a hash table. The goal of a hash function is to spread the keys evenly across the hash table so that the corresponding bucket can be quickly located during a lookup.
  3. Collision handling: Because hash functions are imperfect, collisions occur, that is, multiple keys map to the same bucket. To handle collisions, each bucket holds several key-value pairs (8 in Go's implementation) and chains to overflow buckets when it fills up.
  4. Automatic expansion: Go’s map has automatic expansion function. When the number of key-value pairs in the map approaches the upper limit of the hash table capacity, the map will automatically expand the size of the hash table to maintain performance.
  5. Hash table size: The number of buckets is kept at a power of 2 (2^n), so that the bucket index can be computed with bit operations instead of a modulo. This improves hash table performance.
  6. Unordered: map does not guarantee the order of key-value pairs. Traversing the elements in the map may not be in insertion order, but in hash order by key.
  7. Type of value: The value in map can be any data type, including built-in types, custom types, slices, structures, etc. Keys must be of a type that can be compared for equality, such as integers, strings, pointers, etc.
  8. Zero value: The zero value of an uninitialized map is nil, indicating an empty map.

Summary: Map in Go uses a hash table to implement key-value storage. It grows automatically and supports fast lookup and insertion, but it does not guarantee element order. In a multi-goroutine environment, map is not thread-safe and requires appropriate synchronization to avoid race conditions. If a thread-safe map is required, the sync.Map type can be used.

Why is golang's map unordered?

  1. Hash table storage method: Map uses a hash table internally to store key-value pairs. A hash table is usually an array of buckets, and each bucket may contain multiple key-value pairs. The storage method of the hash table determines that the storage location of its elements is not determined according to the order of insertion.
  2. Hash collision: In a hash table, multiple keys may be mapped to the same bucket, which is called a hash collision. When a hash collision occurs, map uses a linked list or other data structure to store these key-value pairs with the same hash value. Due to the existence of hash collisions, the order of elements in the hash table is no longer predictable.
  3. Performance Optimization: In order to maintain high performance, the internal implementation of map will dynamically expand and contract the number of buckets. This means that the order of the buckets may be rearranged internally, affecting the order of the elements.
  4. Unordered specification: The Go language specification clearly states that map is unordered, which means that programmers should not rely on the order of elements in the map. This also gives compilers and implementers greater flexibility to optimize map performance, since the ordering of elements does not need to be maintained.

Although maps are unordered, if you need to access the elements in a map in a specific order, you can do so by storing the keys in a slice and sorting the slice.

The search principle of golang's map

  1. Hash function: When trying to find a value in a map, Go uses a hash function to map the key you provide to the hash table inside the map. The hash function converts the key into an integer, which is called a hash code (hash code).
  2. Bucket selection: The hash table is internally composed of a series of buckets, and each bucket can hold multiple key-value pairs. The hash table will select a specific bucket based on the hash code.
  3. Lookup operation: Once the hash table determines the bucket to look for, it looks for key-value pairs with the same hash code in this bucket. This typically involves traversing the elements in the bucket to find a matching key.
  4. Comparison of keys: Within the bucket, the hash table compares the key looked up with the key stored in the bucket. This is achieved through key equality judgment. If a matching key is found, the hash table returns the value associated with that key.
  5. Return results: If a matching key is found, the map search operation returns the value associated with the key. If no matching key is found, the search operation returns a zero value of the value type.

Why is the load factor of golang's map 6.5?

What is load factor?

  • The load factor is the core indicator of how full the hash table currently is, i.e. the average number of elements stored per bucket.
  • Load factor = number of elements stored in the hash table / number of buckets

The choice of load factor is a trade-off that involves a trade-off between performance and memory usage. Here are some of the reasons behind it:

  1. Reduce collisions: A smaller load factor can reduce the occurrence of hash collisions. A hash collision is when two different keys map to the same bucket. By keeping the load factor relatively small, you can reduce the occurrence of collisions and improve the efficiency of search operations.
  2. Reduced memory usage: A larger load factor can reduce memory usage because the number of buckets in the hash table is relatively small. This is important for memory-constrained environments or applications that require a large number of maps.
  3. Performance balance: The load factor is chosen to strike a balance between performance and memory. A smaller load factor may give better performance but needs more memory, while a larger load factor reduces memory usage but causes more hash collisions, reducing performance.

Weighing these trade-offs against benchmark results and discussion, the Go team picked a moderate value and hard-coded the map's load factor threshold as 6.5. That is why 6.5 was selected.
This means that in the Go language, when the number of elements stored in a map reaches 6.5 × the number of buckets, growth is triggered.

How to expand golang's map

Expansion timing: When inserting a new Key into the map, condition detection will be performed. If the following two conditions are met, expansion will be triggered.
Expansion conditions:

  1. Load factor exceeded: number of map elements > 6.5 × number of buckets
  2. Too many overflow buckets
    • When the total number of buckets < 2^15, if the total number of overflow buckets >= the total number of buckets, it is considered that there are too many overflow buckets
    • When the total number of buckets >= 2^15, it is directly compared with 2^15. When the total number of overflow buckets >= 2^15, it is considered that there are too many overflow buckets.

Condition 2 is really a supplement to condition 1: when the load factor is still low, map lookups and insertions can nonetheless be very inefficient, and condition 1 cannot detect this situation.

On the surface the load factor is small, that is, the map holds few elements, but the number of buckets actually allocated is large, including many overflow buckets. For example, repeatedly adding and deleting keys keeps increasing the number of overflow buckets while the load factor stays below the threshold of condition 1, so expansion is never triggered to relieve the situation. Bucket utilization drops, values are stored sparsely, and lookup and insertion become very slow; this is why the second expansion condition exists.

Transfer mechanism:

  1. Double expansion: for condition 1, a new bucket array is created at twice the original size, and the old bucket data is migrated to the new buckets.
  2. Equal-size expansion: for condition 2, the capacity is not increased and the number of buckets stays the same; a relocation similar to double expansion is performed, rearranging the loosely stored key-value pairs so that keys in the same bucket are packed more tightly. This saves space, improves bucket utilization, and speeds up access. This method is called equal-size expansion.

golang's sync.Map

sync.Map is a thread-safe key-value storage data structure provided in the Go language standard library, which is used to safely store and retrieve key-value pairs in a concurrent environment. sync.Map was introduced in Go 1.9.

The main features and principles of sync.Map are as follows:

  1. Thread safety: sync.Map is thread-safe and key-value pairs can be read and modified in multiple Goroutines at the same time without additional lock operations.
  2. Mostly lock-free reads: sync.Map does not take a lock on every access. It maintains a read-only map (read) that can be accessed without locking, plus a mutex-protected dirty map for writes; most reads never touch the lock, which reduces contention and improves performance.
  3. Atomic operations: sync.Map uses atomic operations to ensure concurrency safety, which means that in a concurrent environment, multiple Goroutines can safely access and modify sync.Map data without requiring explicit locking.
  4. No initialization required: Unlike ordinary map, sync.Map does not need to be initialized. You can directly create a new sync.Map and start using it.
  5. Load and Store operations: sync.Map provides Load and Store methods for loading and storing key-value pairs, as well as other methods for access and modification operations. These operations are thread-safe.
  6. No need to copy: with an ordinary map shared between multiple Goroutines, you usually have to protect it with a lock or give each Goroutine an independent copy. sync.Map does not require these extra synchronization steps.

Note: sync.Map is not suitable for all scenarios, and its performance is usually not as good as a pure map, especially when there is only a single Goroutine access. Therefore, you should consider using sync.Map only when you need to share data between multiple Goroutines, or when high concurrency performance is required.

golang's sync.Map supports concurrent reads and writes by trading space for time: it maintains two redundant data structures, read and dirty

type Map struct {
	mu     Mutex                  // mutex protecting dirty
	read   atomic.Value           // lock-free reads; holds m map[interface{}]*entry plus an amended bool marking whether the read-only map is missing data
	dirty  map[interface{}]*entry // reads and writes under the lock
	misses int                    // number of lock-free read misses
}

type entry struct {
	p unsafe.Pointer
}

The value of each key-value pair is stored as an unsafe.Pointer and linked through the entry.p pointer.

There are three states entry.p can be in:

1. Live: points to the element normally; the key-entry pair has not been deleted.
2. Soft-deleted: points to nil. The key-entry pair still exists in the underlying maps of read and dirty, but it is logically deleted and can no longer be found by the user.
3. Hard-deleted: points to the fixed global sentinel expunged; the key-entry pair no longer exists in the dirty map.

The read map's m is a subset of dirty; amended being true means the read map is missing data. In that case the lookup falls through to dirty and misses is incremented. When misses reaches a threshold (the size of dirty), dirty is promoted to become the new read map.

Compare golang's sync.Map and original map + lock to achieve concurrency

Compared with the original map+RWLock way of implementing concurrency, the impact of locking on performance is reduced. It does some optimizations:

  1. The read map can be accessed without locking, and it is consulted first. If operating on the read map alone satisfies the request, there is no need to touch the write map (dirty), so in read-heavy scenarios the frequency of lock contention is far smaller than with map+RWLock.
  2. Advantages: Suitable for scenarios where there is a lot of reading and a little writing
  3. Disadvantages: Scenarios with a lot of writing will cause the read map cache to become invalid, requiring locking, resulting in more conflicts and a sharp decline in performance.

Does golang handle nil slices and empty slices the same?

slice1 := make([]int, 0)
slice2 := []int{}
var slice3 []int
slice4 := new([]int)

if slice1 == nil {
	fmt.Println("slice1 is nil")
}
if slice2 == nil {
	fmt.Println("slice2 is nil")
}
if slice3 == nil {
	fmt.Println("slice3 is nil")
}
if *slice4 == nil {
	fmt.Println("*slice4 is nil")
}

output

slice3 is nil
*slice4 is nil

explain:

  1. slice1 is an empty slice created by the make function. It has space allocated by the underlying array, but its length is 0. Therefore, slice1 is not nil, it is a non-nil slice.
  2. slice2 is an empty slice created using the slice literal. It is an empty slice, but not a nil slice. Like slice1, it is not nil.
  3. slice3 is a slice that is declared but has no underlying array allocated. This is a nil slice because it has no underlying array allocation. Therefore, the condition if slice3 == nil holds.
  4. slice4 is a pointer to a nil slice. slice4 itself is a valid non-nil pointer, but new([]int) allocates a slice header set to its zero value, which is a nil slice. Therefore the condition if *slice4 == nil holds, because *slice4 is a nil slice.

Summarize:

  • A nil slice means that the slice itself is nil, that is, the underlying array has not been allocated.
  • An empty slice means that the slice is not nil, but has length 0, and it has an underlying array allocation.

What is golang's memory model?

The memory model of the Go language is a set of rules and guarantees for concurrent programs, specifying when multiple Goroutines can safely access and modify shared memory. Go's memory model has the following main features:

  1. Happens-before ordering: the Go memory model defines a happens-before relation established by synchronization operations. Programs that are free of data races behave as if operations executed in a sequentially consistent order; without synchronization, one Goroutine's operations may be observed in a different order by another, so cross-Goroutine ordering must be established explicitly.
  2. Synchronization primitives: Go provides synchronization primitives such as the mutex (sync.Mutex), read-write lock (sync.RWMutex), and channel (chan) to synchronize data access between Goroutines. These primitives let developers explicitly define critical sections and avoid race conditions and data races.
  3. Atomic operations: Go provides atomic operation functions, such as atomic.AddInt32 and atomic.LoadInt64, for performing atomic operations across Goroutines. They allow shared variables to be updated safely without locks.
  4. Channel communication: Go's channel is a built-in mechanism for communication between Goroutines. Channels provide a synchronization method to ensure that sending and receiving operations are performed in a certain order. Channel communication is an important tool for concurrent programming in Go, helping to coordinate operations between different Goroutines.
  5. Memory barriers: The Go language compiler and runtime system automatically insert memory barriers to ensure that memory operations occur in the expected order. This helps avoid unexpected behavior caused by compiler and processor optimizations.

Why do too many small objects cause GC pressure in golang's memory model?

In the memory management of Go language, small objects will increase the pressure of garbage collection (GC), mainly for the following reasons:

  1. Memory allocation and reclamation cost: allocating and reclaiming many small objects is expensive. The garbage collector must scan and reclaim them frequently, which results in more GC work and pauses; for small objects, the per-object overhead is relatively large.
  2. Fragmentation: frequent allocation and reclamation of small objects can lead to heap fragmentation. When the heap contains a large number of small free fragments, allocating large objects may become more complex.
  3. GC cycles: the garbage collector triggers collection based on heap growth. If the heap holds a large number of small objects, the collector may need to run more often, because the allocation churn of small objects raises the collection frequency.

In order to reduce the pressure of small objects on GC, you can consider the following strategies:

  1. Object Pool: By maintaining an object pool, allocated objects can be reused instead of frequently creating and destroying objects.
  2. Use appropriate data structures to avoid unnecessary small objects: consider whether larger data structures can be used, or multiple small objects merged into one larger object, to reduce the object count.
  3. Reduce unnecessary copying: Avoid unnecessary data copying operations, especially in large-scale data processing.
  4. Life cycle management: Carefully manage the life cycle of objects to ensure that they can be released in time when they are no longer needed.
  5. Use pointers: When you need to transfer large objects, you can use pointers instead of value transfers to reduce object copying.
  6. Performance Analysis and Optimization: Use performance analysis tools to identify small objects that are frequently allocated and recycled in your application, and then optimize accordingly.
  7. Adjust GC parameters: According to the actual situation of the application, you can adjust the parameters of GC, such as the threshold and frequency of GC, to balance performance and memory overhead.

Implementation principle of Channel in golang

I think Channel is essentially a thread-safe queue used for communication between coroutines.
Internally, it ensures the atomicity of queue operations through a lock.

Is Channel synchronous or asynchronous?

Whether a channel is synchronous depends on buffering: an unbuffered channel is synchronous (send and receive must meet), while a buffered channel behaves asynchronously until its buffer is full. A channel has three states:

  1. nil, uninitialized state, only declared, or manually assigned to nil
  2. active, normal channel, readable or writable
  3. closed, closed. Do not mistakenly assume that a closed channel becomes nil; receives on a closed channel return immediately with the element type's zero value.

Channel deadlock scenario

[GitHub application example]

  1. Unbuffered channel: send with no matching receive
    func deadlock1() {
    	ch := make(chan int)
    	ch <- 3 // blocks forever: there is no receiver, so the next line is never reached
    }
    
  2. Unbuffered channel: the receive comes after the send in the same goroutine
    func deadlock2() {
    	ch := make(chan int)
    	ch <- 3 // blocks forever, so the receive below is never reached
    	num := <-ch
    	fmt.Println("num=", num)
    }
    
  3. Buffered channel: writes exceed the buffer size
    func deadlock3() {
    	ch := make(chan int, 3)
    	ch <- 3
    	ch <- 4
    	ch <- 5
    	ch <- 6 // blocks forever: the buffer is full
    }
    
  4. Reading from an empty channel
    func deadlock4() {
    	ch := make(chan int)
    	fmt.Println(<-ch) // blocks forever: nothing is ever sent
    }
    
  5. Multiple goroutines waiting on each other
    func deadlock5() {
    	ch1 := make(chan int)
    	ch2 := make(chan int) // each side waits for the other, causing deadlock
    	go func() {
    		for {
    			select {
    			case num := <-ch1:
    				fmt.Println("num=", num)
    				ch2 <- 100
    			}
    		}
    	}()

    	for {
    		select {
    		case num := <-ch2:
    			fmt.Println("num=", num)
    			ch1 <- 300
    		}
    	}
    }
    

What are the atomic operations in golang?

In the Go language, atomic operations are used to ensure that multiple Goroutines can safely read and write shared variables to avoid data races and concurrency issues. The Go language provides some atomic operation functions, the most commonly used of which are the functions in the sync/atomic package.

Use cases:

  • When we want to modify a variable concurrently and safely, in addition to using the officially provided mutex, we can also use the atomic operation of the sync/atomic package. It can ensure that the reading or modification of variables will not be affected by other coroutines.
  • The atomic operations provided by the atomic package ensure that only one goroutine operates on a variable at any time. Using atomic well can avoid a large number of lock operations in a program.

Common operations:

  • Increase or decrease Add
  • Load Load
  • Compare and swap CompareAndSwap
  • Swap
  • Store

Atomic operations work on an address: you pass the address of an addressable variable to the function, not its value. These operations are introduced below:

  • Add and subtract operations: This type of operation is prefixed with Add. Atomically adds the specified integer value to another integer value.

    func AddInt32(addr *int32, delta int32) (new int32)
    func AddInt64(addr *int64, delta int64) (new int64)
    func AddUint32(addr *uint32, delta uint32) (new uint32)
    func AddUint64(addr *uint64, delta uint64) (new uint64)
    func AddUintptr(addr *uintptr, delta uintptr) (new uintptr)
    
    func add(addr *int64, delta int64) {
    	atomic.AddInt64(addr, delta) // atomic add
    	fmt.Println("add opts: ", *addr)
    }
    
  • Load operations: Such operations are prefixed with Load. Atomically loads the specified 32-bit or 64-bit integer value.

    func LoadInt32(addr *int32) (val int32)
    func LoadInt64(addr *int64) (val int64)
    func LoadPointer(addr *unsafe.Pointer) (val unsafe.Pointer)
    func LoadUint32(addr *uint32) (val uint32)
    func LoadUint64(addr *uint64) (val uint64)
    func LoadUintptr(addr *uintptr) (val uintptr)
    // Special type: Value, often used for configuration changes
    func (v *Value) Load() (x interface{})
    
  • Compare and swap: The prefix of this type of operation is CompareAndSwap, which is referred to as CAS and can be used to implement optimistic locking. Atomically compares the specified integer value to the expected value and swaps them if they are equal.

    func CompareAndSwapInt32(addr *int32, old, new int32) (swapped bool)
    func CompareAndSwapInt64(addr *int64, old, new int64) (swapped bool)
    func CompareAndSwapPointer(addr *unsafe.Pointer, old, new unsafe.Pointer) (swapped bool)
    func CompareAndSwapUint32(addr *uint32, old, new uint32) (swapped bool)
    func CompareAndSwapUint64(addr *uint64, old, new uint64) (swapped bool)
    func CompareAndSwapUintptr(addr *uintptr, old, new uintptr) (swapped bool)
    

    This operation first ensures that the value of the variable has not been changed before performing the exchange, that is, the value recorded by the parameter old is still maintained. The exchange operation is only performed when this premise is met.
    The approach of CAS is similar to the optimistic locking mechanism common when operating databases.
    Note that when there are a large number of goroutines reading and writing variables, the CAS operation may not be successful. In this case, you can use a for loop to try multiple times.

  • other

    atomic.StoreInt32 / atomic.StoreInt64: atomically store the given 32- or 64-bit integer value.
    atomic.SwapInt32 / atomic.SwapInt64: atomically swap in the given integer value, returning the old value.
    atomic.LoadPointer / atomic.StorePointer: atomically load and store a pointer.
    atomic.CompareAndSwapPointer: atomically compare the given pointer with the expected value and swap them if they are equal.
    atomic.AddUint32 / atomic.AddUint64: atomically add the given unsigned integer value to another unsigned integer.
    

Understand what optimistic locking and pessimistic locking are

  • Pessimistic lock:
    • The basic idea of ​​pessimistic locking is to acquire a lock before accessing shared resources (such as database records, memory data) to prevent other threads from modifying the resource at the same time.
    • When a thread acquires a pessimistic lock, other threads must wait until the lock is released. This results in lower concurrency because only one thread can access the shared resource.
    • Pessimistic locks are usually implemented using mutexes or database row-level locks. These locks ensure resource exclusivity but can lead to performance bottlenecks and deadlock problems.
  • Optimistic locking:
    • The basic idea of ​​optimistic locking is to not block other threads before accessing a shared resource, but to try to perform the operation first, and then when the operation is completed, check whether other threads have modified the resource at the same time.
    • The implementation of optimistic locking usually relies on version numbers or timestamps. Each resource is associated with a version number or timestamp, and when a thread attempts to modify the resource, it checks whether the resource's version number or timestamp is still the value it read when it started the operation.
    • If the version number or timestamp matches, the operation continues; otherwise, the operation fails and is retried.
    • Optimistic locking is usually used to handle scenarios where there are many reads and few writes to improve concurrency and reduce lock competition.

Optimistic locking suits read-heavy, write-light workloads and can improve concurrency, but conflicts must be handled.
Pessimistic locking suits write-heavy workloads or cases that require exclusive ownership of a resource, but it may cause performance bottlenecks.

The difference between atomic operations and locks

  1. Atomic operations are supported by the underlying hardware, and locks are based on atomic operations + semaphores. If the same function is implemented, the former is usually more efficient
  2. Atomic operations are mutually exclusive operations of a single instruction; mutex locks/read-write locks are a data structure that can complete mutually exclusive operations in critical sections (multiple instructions) and expand the scope of atomic operations.
  3. Atomic operations are lock-free operations and belong to optimistic locking; when talking about locks, they generally belong to pessimistic locking.
  4. Atomic operations exist at various instruction/language levels, such as "atomic operations at the machine instruction level", "atomic operations at the assembly instruction level", "atomic operations at the Go language level", etc.
  5. Locks also exist in various instruction/language levels, such as "machine instruction level locks", "assembly instruction level locks", "Go language level locks", etc.

Implementation principle of goroutine

Goroutine can be understood as the Go language's coroutine (a lightweight thread). It is the foundation of Go's high-concurrency support: a user-mode thread managed by the Go runtime rather than the operating system.

Underlying data structure:

type g struct {
	goid    int64   // unique goroutine ID
	sched   gobuf   // saves g's context when the goroutine is switched out
	stack   stack   // stack
	gopc    uintptr // pc of the go statement that created this goroutine
	startpc uintptr // pc of the goroutine function
	// ...
}
type gobuf struct {
	sp  uintptr  // stack pointer
	pc  uintptr  // program counter
	g   guintptr // back-pointer to the goroutine
	ret uintptr  // saved system call return value
	// ...
}
type stack struct {
	lo uintptr // low bound of the stack's memory addresses
	hi uintptr // high bound of the stack's memory addresses
}

The status flow of goroutine:


  1. Create: the go keyword calls the runtime function runtime.newproc() to create a goroutine; after the call, the goroutine is set to the runnable state.

    • The created goroutine gets its own new stack space, and its stack address and program counter are kept in G's sched field.
    • After each G is created, it is first put into the local run queue; if the local queue is full, it goes into the global queue.
  2. Run: a goroutine by itself is just a data structure; what actually makes goroutines run is the scheduler. Go implements a user-mode scheduler (the GMP model) that makes full use of multi-core machines and lets multiple goroutines run at the same time. Goroutines are designed to be very lightweight, so scheduling and context switching are relatively cheap.
    Scheduling points:

    1. A new coroutine is started and the coroutine is executed.
    2. Blocking system calls, such as file io and network io
    3. Blocking operations such as channel and mutex
    4. time.sleep
    5. After garbage collection
    6. Actively call runtime.Gosched()
    7. Running for too long or system call taking too long, etc.

    When an M starts executing a G from its P's local queue, that goroutine is set to the running state.

    If an M finishes executing all the Gs in its local queue, it then fetches Gs from the global queue. Note that every fetch from the global queue requires taking a lock, to prevent the same task from being taken more than once. Number of Gs taken from the global queue: N = min(len(GRQ)/GOMAXPROCS + 1, len(GRQ)/2) (load-balanced according to GOMAXPROCS).

    If the global queue has also been drained and the current M has no more Gs to execute, it steals tasks from other Ps' local queues. This is the work-stealing mechanism: it takes half of the victim's tasks each time, rounding down. For example, if another P has 3 tasks, half of that is one task. Number of Gs stolen from another P's local queue: N = len(LRQ)/2 (an even split).

    When the global queue is empty and M cannot steal tasks from other Ps, it enters a spinning state, waiting for new Gs to arrive. At most GOMAXPROCS Ms will spin at any time; having too many spinning Ms would waste CPU resources.

  3. Block: channel reads and writes, waiting for locks, waiting on network data, system calls, and so on can block; the runtime function runtime.gopark() is then called, which gives up the CPU time slice so the scheduler can run other waiting tasks, and execution resumes from this point at some later time. After the call, the goroutine is set to the waiting state.

  4. Wake up: a goroutine in the waiting state is woken by a call to runtime.goready(). The woken goroutine is put back into the run queue of the P bound to some M, waiting to be scheduled. After the call, the goroutine is set to the runnable state.

  5. Exit: when a goroutine finishes executing, the runtime function runtime.Goexit() is called, and the goroutine is set to the dead state.

Leakage of goroutine

Reason for leak:

  1. Read and write operations such as Channel/mutex in goroutine are always blocked.
  2. The business logic within the goroutine enters an infinite loop, and resources cannot be released.
  3. The business logic within the goroutine enters a long wait, and there are constantly new goroutines entering the wait.

How to check the number of goroutines? How to limit the number of goroutines?

During development, if goroutines are created without control, the whole service may collapse: exhausting system resources can crash the program, or excessive CPU usage can overload the system.

In golang, GOMAXPROCS controls how many OS threads can simultaneously execute runnable goroutines; it is not a count of goroutines. To check the number of goroutines, use runtime.NumGoroutine().

In Go, you can use the runtime package to view the number of currently running goroutines and limit the number of goroutines. Here are some related functions and methods:

  1. View the number of currently running goroutines: Use the runtime.NumGoroutine() function to view the number of currently running goroutines. This function returns an integer representing the number of currently active goroutines. For example:

    numGoroutines := runtime.NumGoroutine()
    fmt.Printf("number of currently active goroutines: %d\n", numGoroutines)
    
  2. Limit the number of goroutines: The Go language itself does not provide a built-in mechanism to limit the number of goroutines. However, it is possible to implement limits on the number of goroutines yourself using channels and coroutines. Here is an example:

    package main
    import (
        "fmt"
        "sync"
    )
    func worker(id int, jobs <-chan int, results chan<- int) {
        for job := range jobs {
            // simulate work
            fmt.Printf("Worker %d starts Job %d\n", id, job)
            // actual processing
            results <- job * 2
            fmt.Printf("Worker %d finishes Job %d\n", id, job)
        }
    }
    func main() {
        numJobs := 10
        numWorkers := 3
    
        jobs := make(chan int, numJobs)
        results := make(chan int, numJobs)
    
        var wg sync.WaitGroup
    
        // start the worker goroutines
        for i := 1; i <= numWorkers; i++ {
            wg.Add(1)
            go func(workerID int) {
                defer wg.Done()
                worker(workerID, jobs, results)
            }(i)
        }
    
        // submit the jobs
        for i := 1; i <= numJobs; i++ {
            jobs <- i
        }
        close(jobs)
    
        // collect the results
        go func() {
            wg.Wait()
            close(results)
        }()
    
        // process the results
        for result := range results {
            fmt.Printf("got result: %d\n", result)
        }
    }
    

    This is achieved with two channels, jobs and results, and a wait group wg.

    1. jobs channel: This channel is used to pass tasks (jobs) that need to be processed to goroutine. In the example, we create 10 tasks (1 to 10) and send them to the jobs channel.

    2. results channel: This channel is used to transmit the results of processing tasks. Each goroutine sends the results of task processing to the results channel.

    3. wg waiting group: sync.WaitGroup is used to wait for all goroutines to complete their work. We created a WaitGroup variable wg in the main function. Each started goroutine will call wg.Done() after completing its work to notify wg that it has completed. Finally, we call wg.Wait() in a separate coroutine to wait for all goroutines to complete. This ensures that all goroutines are waited for to complete before the program exits.

      This design pattern allows us to limit the number of goroutines executing concurrently. In the example, we start 3 worker goroutines, which receive tasks from the jobs channel, process the tasks and send the results to the results channel. Only when a goroutine completes its task can it receive the next task from the jobs channel. This limits the number of goroutines that can be active at the same time.

What is the difference between goroutine and thread?

  1. A thread can have multiple coroutines
  2. Threads are preemptively scheduled by the operating system, while coroutines are scheduled cooperatively in user space.
  3. A coroutine can retain the state of its last call; when it re-enters, it resumes from where it left off.
  4. Coroutines require threads to host and run, so coroutines cannot replace threads. "Threads are divided CPU resources, and coroutines are organized code processes."

Can golang's struct be compared?

In Go, structures (structs) can be compared, but with some restrictions. Structure comparison is done field by field, subject to the following rules:

  1. The structures being compared must be of the same type, with the same field types and field order.
  2. Structures can be compared with the == operator. Two structures are equal if all their fields are equal; otherwise they are unequal.
  3. If a structure contains fields of incomparable types (such as slices, maps, or functions), the structure itself is not comparable, and using == on it is a compile-time error.
  4. If a structure contains pointer fields, == compares the pointer values (the addresses), not the data they point to.
package main
import "fmt"

type Person struct {
    Name string
    Age  int
}

func main() {
    p1 := Person{"Alice", 30}
    p2 := Person{"Bob", 25}
    p3 := Person{"Alice", 30}

    // struct comparison
    fmt.Println("p1 == p2:", p1 == p2) // false
    fmt.Println("p1 == p3:", p1 == p3) // true
}

How to expand golang's slice

Before Go 1.18:

  1. If the new capacity cap is greater than twice the old capacity old.cap, the final capacity newcap is the new capacity cap.
  2. If the new capacity cap is less than twice the old capacity old.cap, and the old capacity is less than 1024 elements, the final capacity newcap is twice the old capacity old.cap.
  3. If the new capacity cap is less than twice the old capacity old.cap, but the old capacity is at least 1024 elements, the final capacity newcap grows in a loop by a factor of 1.25 from old.cap until it is greater than or equal to the requested capacity cap.
  4. If the calculated final capacity newcap overflows, the final capacity newcap is the new capacity cap.

After Go 1.18:
Go 1.18 introduces a smarter expansion strategy to reduce the number of memory allocations and copies. The specific strategies are as follows:

  • If the slice's capacity is less than 256 elements, each expansion doubles the capacity.
  • If the slice's capacity is 256 elements or more, each growth step adds (old.cap + 3*256) / 4, which works out to roughly 1.25 times the old capacity plus 192, giving a smooth transition from 2x growth to 1.25x growth.

This updated strategy better balances memory and performance. Therefore, the expansion strategy has undergone some changes after Go 1.18. It is no longer fixed at 2 times or 1.25 times, but is adjusted according to the size of the capacity.

Why does memory leak occur in Go function? How to detect a leak?

Memory leaks usually occur in Go under the following circumstances:

  1. A large number of goroutines are created, but they are not terminated in time, resulting in the memory occupied by them not being released.
  2. Long-lived references: objects reachable from global variables, caches, or forgotten timers keep memory alive, so the garbage collector cannot reclaim it. (Go's tracing collector does handle reference cycles; what it can never reclaim is memory that is still reachable.)
  3. Continuously allocate memory without freeing it, for example, by continually adding elements to a slice or map without removing or cleaning up elements that are no longer used.

For memory leak detection, Go provides some tools to help detect and analyze memory leaks:

  1. pprof: Go's standard library provides the net/http/pprof package, which can be used to generate memory and CPU analysis reports to help diagnose and locate memory leaks.
  2. Gops: Gops is a tool for inspecting and manipulating running Go processes. It provides commands such as viewing memory allocations and the status of garbage collection to help identify memory leaks.

These tools can help you find memory leaks and analyze which parts of the code are causing the leaks so you can fix the problem promptly.

Is it possible that two nils are not equal in golang?

They may not be equal. In Go, an interface value consists of two parts: a type (T) and a value (V). An interface value equals nil only when both its type part and its value part are nil; this is the zero value of an interface type.

When two interface values are compared, their type parts T are compared first, and then their value parts V.
When an interface value is compared with a non-interface value, the non-interface value is first converted to an interface value and then compared.

package main

import "fmt"

func main() {
	var p *int
	var i interface{} = p
	fmt.Println(i == p)   // true: p is converted to an interface with the same (type, value) pair as i
	fmt.Println(p == nil) // true: the pointer p really is nil
	fmt.Println(i == nil) // false: i's value part is a nil pointer, but its type part is *int, so the interface value as a whole is not nil
}

Memory alignment in Go language

The CPU does not read and write memory byte by byte; it reads memory block by block. The block size may be 2, 4, 8, or 16 bytes, and so on. We call this block size the memory access granularity; it is related to the machine word length.

Alignment rules:

  1. The alignment requirement of each struct member is determined by its type: the member's alignment value is the smaller of the compiler's default alignment and the size of the member's type.
    • For example, with a default alignment of 4 bytes, a member whose type is 2 bytes gets an alignment value of 2 bytes. This ensures members are laid out according to their type's natural alignment.
  2. The alignment value of the struct itself is determined by the largest size among its member types.
    • If the largest member size is less than the compiler's default alignment, the struct's alignment value is the size of that largest member.
    • If the largest member size is greater than or equal to the default alignment, the struct's alignment value is the compiler's default alignment.
  3. The default alignment is platform dependent: typically 4 bytes on a 32-bit system and 8 bytes on a 64-bit system.

Alignment rules ensure that a structure and its member variables are arranged in memory in an appropriate manner so that the CPU can access the data efficiently. Alignment rules may vary for different compilers and target platforms.

Can two Interfaces be compared?

  1. Determine whether the types are the same:reflect.TypeOf(a).Kind() == reflect.TypeOf(b).Kind()
  2. Determine whether two Interface{} are equal:reflect.DeepEqual(a, b)
  3. Assign one Interface{} to another Interface{}:reflect.ValueOf(&a).Elem().Set(reflect.ValueOf(b))

What is the difference between %v, %+v, and %#v when printing in golang?

%v: outputs only the values
%+v: outputs each field name followed by its value
%#v: outputs the Go-syntax representation: the package and type name, then each field name and value

package main

import "fmt"

type student struct {
	id   int32
	name string
}

func main() {
	a := &student{id: 1, name: "微客鸟窝"}

	fmt.Printf("a=%v \n", a)  // a=&{1 微客鸟窝}
	fmt.Printf("a=%+v \n", a) // a=&{id:1 name:微客鸟窝}
	fmt.Printf("a=%#v \n", a) // a=&main.student{id:1, name:"微客鸟窝"}
}

What is a rune type?

In Go, there are two types of characters:

  1. The uint8 type, also called byte, represents a single ASCII character.
  2. The rune type, an alias for int32, represents a single Unicode code point (a UTF-8 string decodes into runes). Use rune when processing Chinese or other multi-byte characters; it can represent any Unicode character.

Does an empty struct{} take up space? What is it used for?

Empty structure struct{} instances do not occupy any memory space.

Uses:

  1. When using a map as a set, define the value type as the empty struct and use it purely as a placeholder.
  2. A channel that sends no data: the channel is used only to notify a sub-goroutine to perform a task, or to limit goroutine concurrency.
  3. A struct that contains only methods and no fields.

The difference between golang value receiver and pointer receiver

The difference between golang functions and methods is that methods have a receiver.

If the method's receiver is a pointer type, the object itself is modified regardless of whether the caller is a value or a pointer, so the change is visible to the caller.
If the method's receiver is a value type, only a copy of the object is modified regardless of whether the caller is a value or a pointer, so the caller is unaffected.

package main

import "fmt"

type Person struct {
	age int
}

func (p *Person) IncrAge1() {
	p.age += 1
}

func (p Person) IncrAge2() {
	p.age += 1
}

func (p Person) GetAge() int {
	return p.age
}

func main() {
	p := Person{22}
	p.IncrAge1()
	fmt.Println(p.GetAge()) // 23
	p.IncrAge2()
	fmt.Println(p.GetAge()) // 23

	p2 := &Person{age: 22}
	p2.IncrAge1()
	fmt.Println(p2.GetAge()) // 23
	p2.IncrAge2()
	fmt.Println(p2.GetAge()) // 23
}

Common reasons for using pointer types as receivers of methods:

  1. Use pointer types to modify the caller's value.
  2. Using a pointer type avoids copying the value each time the method is called, which is more efficient when the value type is a large structure.

Implementation principle of defer keyword

In Go, every function call frame keeps a linked list of records pointing to deferred functions. When you use the defer keyword to postpone a call, the compiler wraps the call (with its already-evaluated arguments) into a closure and pushes it onto this defer list.

The list behaves like a stack: a defer added later sits at the head of the list, so the last deferred function runs first. When the function returns normally, Go executes the deferred functions in last-in-first-out (LIFO) order, ensuring resources are released and cleaned up.

If a panic occurs in the function, Go immediately stops the normal execution path but keeps the defer list, then runs the deferred functions in LIFO order. This is where recover can handle the panic, restore program state, or perform resource cleanup.

In short, defer wraps the postponed calls into closures and executes them in last-in-first-out order, guaranteeing that specific operations run whether the function returns normally or panics.
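A minimal demonstration of the LIFO order described above; note that defer's arguments are evaluated at the time the defer statement runs, not when the deferred call executes:

```go
package main

import "fmt"

func main() {
	// Each defer is pushed onto the function's defer list;
	// they run in reverse order when main returns.
	for i := 1; i <= 3; i++ {
		defer fmt.Println("deferred:", i) // i is captured now, by value
	}
	fmt.Println("function body done")
}
// Output:
// function body done
// deferred: 3
// deferred: 2
// deferred: 1
```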

  • When a panic is not recovered and propagates to the top-level function of the current goroutine, the program terminates abnormally.
    package main

    import "fmt"

    func F() {
    	defer func() {
    		fmt.Println("b")
    	}()
    	panic("a")
    }

    func main() {
    	defer func() {
    		fmt.Println("c")
    	}()
    	// The panic thrown by F is never recovered, so the program
    	// terminates abnormally and the line after the call never runs.
    	F()
    	fmt.Println("continue executing")
    }
    
  • When the panic is recovered, the top-level function of the current goroutine executes normally.
    package main

    import "fmt"

    func F() {
    	defer func() {
    		if err := recover(); err != nil {
    			fmt.Println("recovered from panic:", err)
    		}
    		fmt.Println("b")
    	}()
    	panic("a")
    }

    func main() {
    	defer func() {
    		fmt.Println("c")
    	}()
    	// The panic thrown by F is recovered inside F, so execution
    	// continues normally after the call.
    	F()
    	fmt.Println("continue executing")
    }
    

The underlying principle of select

select is a control structure in the Go language used to handle concurrent operations. It allows you to choose between multiple channel operations to perform the corresponding operation when data is ready for one of them. The underlying principle of select:

  1. Select contains multiple cases, and each case corresponds to a channel operation (send or receive) or a default operation (executed when no other case meets the conditions).
  2. When select starts executing, it checks each case to see which operation can be performed immediately (i.e. there is data in the channel to receive, or there is space to send data).
  3. If there are multiple cases that can be executed immediately, the Go language runtime system will randomly select one to execute to ensure fairness.
  4. If no case can be executed immediately, select will wait until at least one case satisfies the condition. While waiting, other coroutines can continue executing.
  5. Once a case meets its condition, select performs that case's operation and then continues with the code after the select statement.
  6. If multiple cases meet the conditions at the same time, select will still only execute one, and which case is executed randomly.
  7. If there is a default operation, it will be executed when no other case meets the condition. This is a general backup operation.

The underlying principle of select is to check multiple channel operations and select an operation that can be performed immediately or wait for at least one operation to be ready.
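The steps above can be sketched with a small example. Here only ch1 has data, so its case is chosen deterministically; with several ready cases the runtime would pick one at random:

```go
package main

import "fmt"

func main() {
	ch1 := make(chan string, 1)
	ch2 := make(chan string, 1)
	ch1 <- "from ch1" // only ch1 is ready

	select {
	case msg := <-ch1:
		fmt.Println(msg)
	case msg := <-ch2:
		fmt.Println(msg)
	default:
		// Runs only when no other case is ready.
		fmt.Println("no channel ready")
	}
}
```

Dropping the default case would make the select block until one of the channels becomes ready.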

gRPC

gRPC is a high-performance, open-source RPC framework developed by Google, with first-class Go support. The goal of an RPC framework is to make remote service calls simple and transparent: the framework hides the underlying transport (TCP or UDP), the serialization format (XML/JSON/binary), and the communication details, so a service caller can invoke a remote service as if it were a local interface, without caring about the underlying communication or call process.

reflection

Reflection is the ability of a program to access, inspect, and modify its own state or behavior at runtime. Normally, when a program is compiled, variables are reduced to memory addresses and variable names are not written into the executable, so a running program cannot obtain information about itself.
Languages that support reflection embed reflection information about variables, such as field names, type information, and struct layout, into the executable at compile time, and provide an API for the program to access it. This way, type information can be queried, and values modified, while the program is running.

Two disadvantages of reflection:

  1. The code is harder to read and maintain, and type errors the compiler would normally catch surface only as panics at runtime, in production.
  2. Performance is poor: reflection is one to two orders of magnitude slower than regular code.

The two most important concepts of reflection in golang are Type and Value.

  • Type is used to obtain type-related information, just like you can see the label of a box and know what item it contains. (such as the length of slice, members of struct, number of parameters of function)
  • Value is used to get and modify the value of the original data, just like you can open a box and view and modify the contents. (For example, modifying elements in slices and maps, modifying member variables of struct).

Comparison of string concatenation methods in golang

For string concatenation, strings.Builder or bytes.Buffer is recommended. Both perform better because they are buffers implemented on top of []byte, which avoids unnecessary memory allocation and copying; both types also provide additional string-manipulation methods.

  • Using + to concatenate strings: because Go strings are immutable, every + allocates a new block of memory to hold the result of joining the two strings.

    • Strings are essentially immutable in Go, so each use of the + operator creates a new string and copies the contents of both operands into the new memory location.
    • The + operator is roughly equivalent to appending bytes to a slice with append and then converting the slice to a string.
  • Using fmt.Sprintf to concatenate strings: relies mainly on reflection.

    • fmt.Sprintf joins values of various data types according to a format string and returns a new string.
    • Internally, fmt.Sprintf uses reflection to convert each value to a string and then splices them together.
    • Reflection lets fmt.Sprintf handle many different data types, but it also introduces performance overhead, because types are inspected and converted at runtime.
  • Using strings.Builder (available since Go 1.10):

    • strings.Builder is a string buffer used to build strings incrementally.

    • It provides append methods such as WriteString and Write, and a String method to obtain the final string.

    • With strings.Builder, strings can be appended repeatedly without large intermediate allocations: the Builder maintains an internal []byte
      and grows it dynamically to fit the appended content.

    • Example:

      var builder strings.Builder
      builder.WriteString("hello, ")
      builder.WriteString("world!")
      result := builder.String()	// obtain the final string
      
    • The addr field is used for copy checking, and buf is a byte slice that stores the string content. WriteString simply appends data to the slice buf.

    • The []byte grows by doubling, like append. For example, starting from size 0, the first write of a 10-byte string allocates 16 bytes (the smallest power of two larger than 10). If a second 10-byte write does not fit, 32 bytes are allocated; if a third write still fits, no new memory is requested, and so on.
      The String method converts the []byte to string; to avoid a memory copy, it uses an unsafe pointer conversion instead of a normal conversion.

      type Builder struct {
      	addr *Builder // of receiver, to detect copies by value
      	buf  []byte   // stores the string content being built
      }

      func (b *Builder) WriteString(s string) (int, error) {
      	b.copyCheck()
      	b.buf = append(b.buf, s...)
      	return len(s), nil
      }

      func (b *Builder) String() string {
      	return *(*string)(unsafe.Pointer(&b.buf))
      }
      
  • Use bytes.Buffer:

    • bytes.Buffer is similar to strings.Builder: it is also a buffer used to build sequences of bytes.
    • It provides similar append methods, such as WriteString and Write.
    • Unlike strings.Builder, bytes.Buffer can handle arbitrary binary data, not just strings.
    • Example:
      var buffer bytes.Buffer
      buffer.WriteString("hello, ")
      buffer.WriteString("world!")
      result := buffer.String()	// obtain the final string
      
    • Both strings.Builder and bytes.Buffer are backed by a []byte array, but strings.Builder is roughly 10% faster than bytes.Buffer. An important difference is that when bytes.Buffer is converted to a string, new space is allocated to hold the resulting string, whereas strings.Builder converts its underlying []byte directly to a string and returns it.

Common character sets

  • ASCII: uses one byte (8 bits) per character, covering English letters, digits, and some special symbols. The first bit is 0, so 128 characters in total can be represented, encoded with a 7-bit binary number.
  • UTF-8: an encoding of the Unicode character set. It is variable-length, divided into four length classes: 1, 2, 3, and 4 bytes. English letters, digits, and the like occupy 1 byte (8 bits), making it compatible with standard ASCII encoding, while Chinese characters occupy 3 bytes.
  • UTF-16: Uses the 16-bit (2-byte) basic encoding unit. Can represent all characters of Unicode.
  • UTF-32: Use a fixed 32-bit (4-byte) encoding unit. Each character is represented by 32 bits. Can represent all characters of Unicode.

The difference between string and []byte

  • Immutability of string type: The string type is designed to be immutable in Go, which means that once a string is created, its contents cannot be changed. This immutability is very useful for concurrent operations because the string can be shared without worrying about other goroutines modifying the string content.
  • Underlying representation of string: the string type is essentially a struct, defined as follows:
    The underlying representation is a pointer to a byte array plus an integer giving the string's length. The byte array holds the actual bytes of the string. stringStruct is very similar to a slice header: the str pointer points to the first byte of the array, and len is the array length.
    type stringStruct struct {
        str unsafe.Pointer
        len int
    }
    
  • The difference between string and [] byte: The main difference is immutability. If you need to perform a modification operation on a string, you should first convert the string to []byte, make the required changes, and then convert it to string again.
    • string is immutable
    • []byte can be modified
    str := "hello"
    // convert the string to []byte
    bytes := []byte(str)
    // modify the []byte
    bytes[0] = 'H' // the modification succeeds
    // convert the []byte back to string
    str = string(bytes)
    
    In this process, due to the immutability of strings, a new string is actually created.
  • Why does the string type wrap the byte array again?
    Because in Go, as in many other languages, the string type is designed to be immutable. The benefit of this design is that in concurrent scenarios the same string can be used many times without locking, giving efficient sharing without worrying about safety issues.

HTTP and RPC comparison

HTTP: is an application layer protocol commonly used to transmit hypertext documents (such as web pages) between clients and servers. Text-based, human-readable protocol.
RPC: It is a remote procedure call protocol used to implement function calls and data exchange between different computers or processes. Services and data types are usually defined in a programming language specific way.

Similarities:

  • Network communication: They are all protocols used to implement network communication, allowing data exchange between different computers or processes.
  • Remote call: Both support remote calls, allowing the client to request operations on the server.
  • Scalability: Both can be used to build distributed systems and microservice architectures to achieve application scalability.

Differences:

  • Protocol type: HTTP is a text-based application layer protocol, while RPC typically uses a binary protocol.
  • Communication method: HTTP usually uses a request-response model, while RPC allows remote procedure calls, which are more similar to local function calls.
  • Data format: HTTP data is typically text-based, while RPC uses a more compact binary data format.
  • Cross-language: HTTP can communicate between different programming languages, while RPC usually requires the use of special IDL files to define services and data structures, generate server and client code, and support multiple programming languages.

gRPC vs. RPC

RPC: a remote procedure call mechanism that lets a client program call functions or methods on a remote server as if they were local function calls.
gRPC: a high-performance, cross-language remote procedure call framework that uses Protocol Buffers as its interface description language and binary serialization format.

Similarities:

  • Remote call: both support remote calls, allowing clients to invoke remote services or functions.
  • Communication paradigm: both belong to the remote procedure call paradigm, allowing function calls between different computers or processes.
  • Cross-language: both support multiple programming languages, allowing applications written in different languages to communicate with each other.

Differences:

  • Communication protocol: gRPC uses HTTP/2 as its transport, while traditional RPC frameworks may use different transport-layer protocols, including TCP and UDP, depending on the implementation.
  • Serialization protocol: gRPC uses Protocol Buffers, an efficient binary format, as its default serialization, while traditional RPC may use other data formats, including XML-RPC, JSON-RPC, and so on.
  • IDL: gRPC uses Protocol Buffers to define interfaces and data structures, providing strong typing and automatic code generation; traditional RPC usually requires programmers to define their own interfaces and data structures.
  • Performance and efficiency: gRPC is optimized for performance, offering features such as bidirectional streaming, header compression, and multiplexing, making it more efficient than traditional RPC.
  • Code generation: gRPC automatically generates client and server code from the service definition file, greatly simplifying development.
  • Security: gRPC provides security features such as TLS encryption and authentication to keep communication secure.

Use of sync.Pool

The essential purpose of sync.Pool is to increase the reuse of temporary objects and reduce GC pressure.
Elements stored in a sync.Pool have the following characteristics:

  1. Elements in the Pool may be released at any time; the release strategy is managed entirely by the runtime.
  2. An object returned by Get may have been freshly created or previously cached; the caller cannot tell the difference.
  3. There is no way to query the number of elements currently in the Pool.

Data structure of sync.Pool:

type Pool struct {
	noCopy noCopy

	local     unsafe.Pointer // local fixed-size per-P pool, actual type is [P]poolLocal
	localSize uintptr        // size of the local array

	victim     unsafe.Pointer // local from previous cycle
	victimSize uintptr        // size of victims array

	// New optionally specifies a function to generate
	// a value when Get would otherwise return nil.
	// It may not be changed concurrently with calls to Get.
	New func() any
}

Get: obtain an object from the pool
Put: return an object to the pool

Initialize Pool example

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

var createNum int32

func createBuffer() interface{} {
	atomic.AddInt32(&createNum, 1)
	buffer := make([]byte, 1024)
	return buffer
}

func main() {
	bufferPool := &sync.Pool{New: createBuffer}

	workerPool := 1024 * 1024
	var wg sync.WaitGroup
	wg.Add(workerPool)

	for i := 0; i < workerPool; i++ {
		go func() {
			defer wg.Done()
			buffer := bufferPool.Get()
			_ = buffer.([]byte)
			// buffer := createBuffer()
			// _ = buffer.([]byte)
			defer bufferPool.Put(buffer)
		}()
	}
	wg.Wait()
	fmt.Printf("%d buffer objects were created.\n", createNum)
	time.Sleep(3 * time.Second)
}

JWT (JSON Web Token)

jwt structure: JWT consists of three parts, which are separated by dots:

  • Header: Contains the type of token (i.e. JWT) and the signature algorithm used (e.g. HMAC SHA256 or RSA).
  • Payload: contains the claims, usually including user identity information, expiration time, and other data.
  • Signature: Used to verify that the token is complete and has not been tampered with. The signature is obtained by signing the Header and Payload using the signature algorithm specified in the Header.

jwt authentication and authorization process:

  1. Generate JWT:
    1. On the server side, when the user successfully logs in, the server will generate a JWT containing user information.
    2. The server uses the secret key to sign the Header and Payload to generate a Signature.
    3. The final generated JWT contains Header, Payload, and Signature, separated by dots.
  2. Transmit the JWT: the JWT is sent to the client, usually in the Authorization field of the HTTP header, as a query parameter in the URL, or stored in a browser cookie.
  3. Verify the JWT
    1. After the client receives the JWT, it can store it locally.
    2. Every time a client sends a request to the server, it can put a JWT in the request header so that the server can authenticate the user.
    3. The server uses the key to verify the signature of the JWT. If the signature is valid, the server will parse the Payload in the JWT and obtain the user information.
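The sign-and-verify flow above can be sketched with only the standard library. This is a minimal illustration of the HS256 scheme with a made-up key and claims, not a substitute for a maintained JWT library:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

var secret = []byte("my-secret-key") // example key, not for production

func b64(data []byte) string {
	return base64.RawURLEncoding.EncodeToString(data)
}

// sign builds header.payload.signature, as in HS256.
func sign(header, payload string) string {
	unsigned := b64([]byte(header)) + "." + b64([]byte(payload))
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(unsigned))
	return unsigned + "." + b64(mac.Sum(nil))
}

// verify recomputes the signature over header.payload and
// compares it with the transmitted one in constant time.
func verify(token string) bool {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return false
	}
	unsigned := parts[0] + "." + parts[1]
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(unsigned))
	expected := b64(mac.Sum(nil))
	return hmac.Equal([]byte(expected), []byte(parts[2]))
}

func main() {
	token := sign(`{"alg":"HS256","typ":"JWT"}`, `{"sub":"user1"}`)
	fmt.Println(verify(token))            // true
	fmt.Println(verify(token + "tamper")) // false: signature no longer matches
}
```

A real implementation would also check the exp claim and parse the header to confirm the algorithm.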

Determine whether a string is empty in golang

For the string type itself, str == "" and len(str) == 0 are equivalent: a Go string can never be nil, and both checks compile to the same comparison, so either is fine.
The advice matters for byte slices: a nil []byte cannot be compared to "", but len() of a nil slice is simply 0, so len(b) == 0 covers both the nil case and the empty case.
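A small sketch of the distinction; the string comparison compiles either way, and the nil caveat only bites for byte slices:

```go
package main

import "fmt"

func main() {
	var s string // zero value is "", never nil
	fmt.Println(s == "", len(s) == 0) // true true: equivalent for strings

	var b []byte // zero value is nil
	// b == "" would not compile; len handles nil slices gracefully.
	fmt.Println(b == nil, len(b) == 0) // true true
}
```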

The difference between Go's buffered and unbuffered channels

Channels in Go are either unbuffered or buffered; they differ in the following ways:

  1. Capacity:
    • Unbuffered channel: the capacity is 0, meaning the channel stores no data. Sends and receives are synchronous: the sender waits for the receiver and the receiver waits for the sender.
    • Buffered channel: the capacity is greater than 0, meaning the channel can store a certain amount of data. Sends and receives are asynchronous, blocking only when the channel is full or empty.
  2. Synchronization:
    • Unbuffered channel: sends and receives are synchronous; a send waits for a receive and a receive waits for a send.
    • Buffered channel: sends and receives are asynchronous; a send does not necessarily wait for a receive, and a receive does not necessarily wait for a send.
  3. Blocking:
    • Unbuffered channel: send and receive operations block until the other side is ready.
    • Buffered channel: send and receive operations block only when the channel is full or empty.
  4. Performance:
    • Unbuffered channel: suited to synchronization between goroutines, guaranteeing safe hand-off of data; typically used for synchronous communication between goroutines.
    • Buffered channel: suited to asynchronous communication between goroutines, allowing a degree of decoupling; typically used to control concurrency (such as a rate-limiting token bucket) or to process event streams.
  5. Usage scenarios:
    • Unbuffered channel: used to force synchronization and guarantee data hand-off between goroutines; often used to wait for a response or result from another goroutine.
    • Buffered channel: used to improve throughput, let goroutines run asynchronously, and reduce blocking; often used for data transfer and decoupling between goroutines.
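A minimal sketch contrasting the two kinds of channel:

```go
package main

import "fmt"

func main() {
	// Buffered channel: capacity 2, so two sends complete
	// without any receiver being ready.
	buf := make(chan int, 2)
	buf <- 1
	buf <- 2
	fmt.Println(<-buf, <-buf) // 1 2

	// Unbuffered channel: a send blocks until a receiver is ready,
	// so the hand-off itself is the synchronization point.
	unbuf := make(chan string)
	done := make(chan struct{})
	go func() {
		fmt.Println(<-unbuf) // prints "hand-off"
		close(done)
	}()
	unbuf <- "hand-off" // blocks until the goroutine receives
	<-done              // wait for the goroutine to finish
}
```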

Define a buffered channel with capacity = 0. What is the difference from an unbuffered channel? Can you write to it and read from it?

There is no difference.
ch := make(chan int, 0) creates an unbuffered channel, because the capacity is explicitly set to 0.

A channel of capacity 0 is an unbuffered channel; make(chan T) and make(chan T, 0) behave identically.

Such a channel is used to force synchronization and guarantee the safe hand-off of data; it is typically used to wait for a response or result from another goroutine. You can both send on it and receive from it, but each operation blocks until the other side is ready, so the transfer is immediate and never buffered.

Go’s garbage collection mechanism

GC trigger conditions

GC tuning

GMP dispatch and CSP model


Origin blog.csdn.net/trinityleo5/article/details/133948730