A comprehensive collection of Go language interview questions to help you prepare for an offer.




Go basics

What is the difference between new and make in the Go language?

new allocates zeroed storage for a value of the given type and returns a pointer to it. new is a built-in function with the following definition:

func new(Type) *Type

⚫ new allocates memory for the given type
⚫ new is passed a type, not a value
⚫ The return value is a pointer to the newly allocated, zero-valued memory

What is the role of make in the Go language?
The function of make is to initialize a slice, map, or channel and return the initialized value. make is also a built-in function, with the following definition:

func make(t Type, size ...IntegerType) Type

make(T, args) serves a different purpose from new(T): it is used only to create slices, maps, and channels, and it returns an initialized value of type T rather than a pointer.
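
A minimal sketch of the difference; the variable names are only for illustration:

package main

import "fmt"

func main() {
    // new allocates zeroed storage and returns a pointer.
    p := new(int) // *int pointing at 0
    *p = 42

    // make initializes a slice, map, or channel and returns the value itself.
    s := make([]int, 0, 4)    // empty slice with capacity 4
    m := make(map[string]int) // ready-to-use map
    m["a"] = 1

    fmt.Println(*p, s, m) // 42 [] map[a:1]
}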

Printf(), Sprintf(), and Fprintf() are all formatted output functions; what is the difference?

All three functions format output, but the output destination differs.
Printf writes to standard output, which is usually the screen and can also be redirected.
Sprintf() returns the formatted result as a string instead of printing it.
Fprintf() writes the formatted result to the given io.Writer, such as a file.
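
A small sketch of the three destinations, using os.Stderr in place of a file:

package main

import (
    "fmt"
    "os"
)

func main() {
    name := "gopher"

    fmt.Printf("hello %s\n", name) // writes to standard output

    s := fmt.Sprintf("hello %s", name) // returns the string "hello gopher"
    fmt.Println(s)

    fmt.Fprintf(os.Stderr, "hello %s\n", name) // writes to any io.Writer, here stderr
}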

What is the difference between arrays and slices in the Go language?

Array:

An array has a fixed length. The length is part of the array type, so [3]int and [4]int are two different types. The size must be specified (or inferred from the initializer), and it cannot be changed afterwards. Arrays are passed by value.

Slice:

A slice has a variable length. It is a lightweight data structure with three fields: a pointer to an underlying array, a length, and a capacity; the length does not need to be fixed in the type. A slice can be created from an array or with the built-in make(). When created with make([]T, n), len == cap unless a separate capacity is given; appending beyond the capacity triggers an expansion. Because the slice header contains a pointer to the underlying array, passing a slice behaves like passing by reference.

How are pass-by-value and pass-by-address (pass-by-reference) used in the Go language? What is the difference? Give an example.

1. Pass-by-value copies the value of the argument into the function. The two variables have different addresses, so modifying one does not affect the other.
2. Pass-by-address (pass-by-reference) passes a pointer to the variable into the function, so the function can modify the variable's contents through that pointer.
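
A minimal sketch of the two calling styles; the function names are made up for illustration:

package main

import "fmt"

// byValue receives a copy; changes are not visible to the caller.
func byValue(n int) { n = 100 }

// byPointer receives the address; changes are visible to the caller.
func byPointer(n *int) { *n = 100 }

func main() {
    x := 1
    byValue(x)
    fmt.Println(x) // 1

    byPointer(&x)
    fmt.Println(x) // 100
}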

What is the difference between passing arrays and passing slices in the Go language?

1. Arrays are passed by value.
2. Slices are passed by reference: the slice header is copied, but it still points to the same underlying array.
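
A small sketch of the observable difference; the function names are made up for illustration:

package main

import "fmt"

func modifyArray(a [3]int) { a[0] = 99 } // operates on a copy

func modifySlice(s []int) { s[0] = 99 } // shares the underlying array

func main() {
    arr := [3]int{1, 2, 3}
    modifyArray(arr)
    fmt.Println(arr) // [1 2 3] — unchanged

    sl := []int{1, 2, 3}
    modifySlice(sl)
    fmt.Println(sl) // [99 2 3] — changed
}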

How does the Go language implement slice expansion?

package main

import "fmt"

func main() {
    arr := make([]int, 0)
    for i := 0; i < 2000; i++ {
        fmt.Println("len:", len(arr), "cap:", cap(arr))
        arr = append(arr, i)
    }
}

The printed capacities are 0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 in turn, i.e. the capacity doubles each time. After reaching 1024 the sequence becomes 1024, 1280, 1696, 2304, i.e. each expansion grows the capacity by roughly a quarter. (The exact growth factors depend on the Go version.)

What is the execution order of defer? What are the functions and characteristics of defer?

The role of defer:

Adding the keyword defer before a call to an ordinary function or method is all the syntax defer requires. When a defer statement is executed, the deferred call is postponed: it does not run until the function containing the defer statement finishes, regardless of whether that function ends normally through a return or abnormally through a panic. A function may contain multiple defer statements; they are executed in the reverse order of their declaration (last in, first out).

Common defer scenarios:
⚫ defer is often used for paired operations such as open/close, connect/disconnect, and lock/unlock.
⚫ Through the defer mechanism, no matter how complex the function logic is, resources are guaranteed to be released on every execution path.
⚫ The defer that releases a resource should directly follow the statement that acquires the resource.
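
A minimal sketch of the last-in, first-out execution order:

package main

import "fmt"

func main() {
    defer fmt.Println("first deferred")
    defer fmt.Println("second deferred")
    defer fmt.Println("third deferred")
    fmt.Println("function body")
}

// Output:
// function body
// third deferred
// second deferred
// first deferred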

Go concurrent programming

Several states of Mutex

⚫ mutexLocked — indicates that the mutex is currently locked;
⚫ mutexWoken — indicates that a goroutine has been woken up in normal mode;
⚫ mutexStarving — indicates that the mutex has entered starvation mode;
⚫ waitersCount — the number of goroutines waiting on the mutex.

Mutex normal mode and starvation mode

Normal mode (unfair lock)

In normal mode, goroutines waiting for the lock queue up in FIFO (first in, first out) order. A woken goroutine does not own the lock directly; it has to compete for it with newly arriving goroutines. A newly arriving goroutine has an advantage because it is already running on a CPU, so the newly woken goroutine is very likely to lose the competition. In that case, the woken goroutine is put back at the front of the waiting queue.

Starvation mode (fair lock)

To solve the long-tail problem of goroutines stuck in the waiting queue, in starvation mode Unlock hands the lock directly to the goroutine at the head of the waiting queue. At the same time, newly arriving goroutines do not compete for the lock and do not spin; they go straight to the tail of the waiting queue. This solves the problem of older goroutines never being able to grab the lock.
Trigger condition for starvation mode: when a goroutine has waited for the lock for more than 1 millisecond, the Mutex switches to starvation mode; it switches back to normal mode when the woken waiter is the last one in the queue or has waited for less than 1 millisecond.

Summary:
Of the two modes, normal mode has the best performance: a running goroutine can acquire the lock several times in a row even while others are waiting. Starvation mode solves the fairness problem of lock acquisition at the cost of some performance; it is essentially a trade-off between performance and fairness.

RWMutex implementation

RWMutex tracks the number of readers in readerCount. When a write lock is requested, readerCount is decreased by a large constant (1 << 30) so that it becomes negative; this makes newly arriving readers wait until the writer releases the lock and notifies them. Likewise, a pending writer waits for all readers that arrived before it to release their read locks before it proceeds. When the write lock is released, 1 << 30 is added back to readerCount and the waiting readers are woken through rw.readerSem; the two sides thus constrain each other.

Notes on RWMutex

⚫ RWMutex is a single-writer, multiple-reader lock: it can hold many read locks at once, or one write lock.
⚫ While read locks are held, writers are blocked but readers are not; multiple goroutines can hold the read lock at the same time.
⚫ While the write lock is held, all other goroutines (readers and writers) are blocked; the lock is exclusive to that goroutine.
⚫ It is suitable for read-heavy, write-light scenarios.
⚫ The zero value of an RWMutex variable is an unlocked lock.
⚫ An RWMutex must not be copied after first use.
⚫ Unlocking a read lock or write lock that is not in the locked state causes a panic.
⚫ A write lock protects the shared resources of the critical section: if those resources are already locked (by a read lock or a write lock), the goroutine requesting the write lock blocks until they are unlocked.
⚫ The read lock of RWMutex should not be acquired recursively; doing so easily leads to deadlock.
⚫ The lock state of RWMutex is not tied to a particular goroutine: one goroutine may RLock (Lock) and another may RUnlock (Unlock).
⚫ After the write lock is released, all goroutines blocked trying to acquire the read lock are woken and can acquire it.
⚫ After a read lock is released, if no other read locks remain, the goroutine that has waited the longest for the write lock is woken.
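
A minimal usage sketch, assuming a simple counter type (the counter is made up for illustration):

package main

import (
    "fmt"
    "sync"
)

type counter struct {
    mu sync.RWMutex
    n  int
}

func (c *counter) Get() int {
    c.mu.RLock() // many readers may hold this at once
    defer c.mu.RUnlock()
    return c.n
}

func (c *counter) Inc() {
    c.mu.Lock() // exclusive: blocks both readers and writers
    defer c.mu.Unlock()
    c.n++
}

func main() {
    c := &counter{}
    c.Inc()
    fmt.Println(c.Get()) // 1
}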

WaitGroup usage

A WaitGroup object can wait for a group of goroutines to finish. The usage is:

1. The main goroutine calls wg.Add(delta int) to set the number of worker goroutines, and then starts the worker goroutines;
2. Each worker goroutine must call wg.Done() when its work is finished;
3. The main goroutine calls wg.Wait(), which blocks until all worker goroutines have finished, and then returns.
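
A minimal sketch of this usage pattern:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup

    for i := 0; i < 3; i++ {
        wg.Add(1) // one Add per worker
        go func(id int) {
            defer wg.Done() // signal completion
            fmt.Println("worker", id, "done")
        }(i)
    }

    wg.Wait() // blocks until the counter drops to zero
    fmt.Println("all workers finished")
}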

WaitGroup implementation principle

⚫ WaitGroup maintains two counters: a request counter v and a waiter counter w. Together they form a single 64-bit value, with the request counter in the high 32 bits and the waiter counter in the low 32 bits.
⚫ Add(delta) adds delta to the request counter v, and Done() decrements it by 1; Wait() increments the waiter counter and blocks on a semaphore. When v reaches 0, the goroutines blocked in Wait() are woken through the semaphore.

What is sync.Once?

⚫ Once is used to perform an action exactly once, typically in singleton initialization scenarios.
⚫ Once is often used to initialize singleton resources, to initialize shared resources that are accessed concurrently but only need to be set up once, or to initialize test resources a single time during testing.
⚫ sync.Once exposes only one method, Do. Do can be called many times, but the function f passed to it runs only on the first call; f takes no parameters and returns nothing.
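
A minimal sketch of lazy singleton initialization; the config type and address are made up for illustration:

package main

import (
    "fmt"
    "sync"
)

type config struct{ addr string }

var (
    once sync.Once
    cfg  *config
)

// getConfig initializes cfg on the first call; later calls reuse it.
func getConfig() *config {
    once.Do(func() {
        fmt.Println("initializing once")
        cfg = &config{addr: "localhost:8080"}
    })
    return cfg
}

func main() {
    fmt.Println(getConfig().addr)
    fmt.Println(getConfig().addr) // "initializing once" is not printed again
}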

Go Runtime

Goroutine definition

Golang supports coroutines at the language level, called goroutines. The system call operations provided by the Go standard library (including all synchronous I/O operations) yield the CPU to other goroutines. As a result, switching and managing goroutines does not depend on the operating system's threads and processes or on the number of CPU cores; it is handled entirely by Go's runtime through unified scheduling.

What does GMP mean?

G (Goroutine): what we call a coroutine, a user-level lightweight thread. The sched field in each Goroutine object stores its context information.
M (Machine): a wrapper around a kernel-level thread; Ms are the entities that actually do the work. The runtime creates Ms as needed, so their number is not fixed to the number of CPUs.
P (Processor): the scheduler between G and M, used to associate Gs with Ms. Its number can be set with GOMAXPROCS() and defaults to the number of CPU cores.

GMP dispatch process

⚫ Each P has a local run queue that stores the goroutines waiting to be executed (step 2). When the local queue of the P bound to an M is full, new goroutines are placed in the global queue (step 2-1).
⚫ Each P is bound to an M; the M is the entity that actually executes the goroutines of that P (step 3). The M takes a G from the local queue of its bound P and runs it.
⚫ When the local queue of the P bound to an M is empty, the M takes Gs from the global queue to run (step 3.1). When no runnable G can be obtained from the global queue either, the M steals Gs from the local queues of other Ps (step 3.2); this mechanism is called work stealing.
⚫ When a G blocks in a system call (syscall), its M blocks as well. The P then unbinds from that M (handoff) and looks for an idle M; if there is no idle M, a new M is created (step 5.1).
⚫ When a G blocks on a channel or on network I/O, the M does not block; it looks for other runnable Gs. When the blocked G becomes runnable again, it re-enters a P's queue and waits to be executed.

Three-color marking principle

Principle of the tri-color marking algorithm:
⚫ Initially, put all objects into the white set.
⚫ Traverse objects starting from the root objects; every white object reached is moved from the white set into the gray set.
⚫ Traverse the objects in the gray set: white objects referenced by a gray object are moved into the gray set, and the gray object that has just been traversed is moved into the black set.
⚫ Repeat the previous step until the gray set is empty.
⚫ The objects remaining in the white set are unreachable, i.e. garbage, and are reclaimed.

GC trigger timing

Active trigger: calling runtime.GC() explicitly.
Passive trigger: the system monitor forces a GC when none has occurred for the period controlled by the runtime.forcegcperiod variable, which defaults to 2 minutes.
Memory-growth trigger: the pacing algorithm, whose core idea is to control the proportion of heap growth. Go's GC is a proportional GC: the target heap size at the end of the next GC is proportional to the live heap size of the previous GC.
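
A small sketch of the active trigger, forcing a collection with runtime.GC() and observing the GC count:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    var m runtime.MemStats

    runtime.ReadMemStats(&m)
    fmt.Println("GC cycles so far:", m.NumGC)

    runtime.GC() // active trigger: force a garbage collection now

    runtime.ReadMemStats(&m)
    fmt.Println("GC cycles after runtime.GC():", m.NumGC)
}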

What is the GC process in GO language?

Using STW as the boundary, Go 1.14 divides GC into five phases:
⚫ GCMark, mark preparation (STW): prepares for concurrent marking and starts the write barrier.
⚫ GCMark, scan and mark (concurrent): runs concurrently with the mutator (assigner); the write barrier is on.
⚫ GCMarkTermination, mark termination (STW): ensures the marking work of the cycle is finished and stops the write barrier.
⚫ GCoff, memory sweeping (concurrent): returns memory that needs to be reclaimed to the heap; the write barrier is off.
⚫ GCoff, memory return (concurrent): returns excess memory to the operating system; the write barrier is off.

Frameworks

Gin

Please briefly introduce the Gin framework and its advantages.

The Gin framework is a lightweight web framework for the Go language; its advantages are that it is efficient, fast, and easy to use. Gin uses a middleware mechanism similar to Express.js and provides simple, easy-to-use features such as routing, error handling, and template rendering.

What HTTP request methods does Gin support?

The Gin framework supports the common HTTP request methods, including GET, POST, PUT, PATCH, DELETE, HEAD, and OPTIONS. Handlers work with the request through the gin.Context object, for example using c.Request.Method to get the HTTP method of the current request.

How to handle GET and POST request parameters in Gin?

In Gin, GET (query string) parameters are read with the c.Query() method, which returns a string; c.DefaultQuery() works the same way but lets you supply a default value when the parameter is missing. For POST requests, form parameters are read with c.PostForm() or c.DefaultPostForm(). Both handle form bodies whose Content-Type is application/x-www-form-urlencoded as well as multipart/form-data; c.DefaultPostForm() simply returns the supplied default value when the field is missing.
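
A minimal handler sketch under these assumptions; the routes and field names are made up for illustration:

package main

import "github.com/gin-gonic/gin"

func main() {
    r := gin.Default()

    r.GET("/search", func(c *gin.Context) {
        q := c.Query("q")                   // ?q=...
        page := c.DefaultQuery("page", "1") // falls back to "1"
        c.JSON(200, gin.H{"q": q, "page": page})
    })

    r.POST("/login", func(c *gin.Context) {
        user := c.PostForm("user")
        pass := c.DefaultPostForm("pass", "") // default when the field is missing
        c.JSON(200, gin.H{"user": user, "passLen": len(pass)})
    })

    r.Run(":8080")
}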

How to implement routing in Gin framework?

Gin decides which handler runs through its routing. router := gin.Default() creates an engine with the default middleware attached, and methods such as router.GET() and router.POST() register routes for the different request methods. Routes can contain parameters: for example, /:name matches a path segment of any value and stores that segment in the name parameter.
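
A small routing sketch; the /hello/:name route is made up for illustration:

package main

import "github.com/gin-gonic/gin"

func main() {
    router := gin.Default() // engine with Logger and Recovery middleware

    router.GET("/hello/:name", func(c *gin.Context) {
        name := c.Param("name") // the value matched by :name
        c.String(200, "hello %s", name)
    })

    router.Run(":8080")
}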

How to handle file upload in Gin?

In the Gin framework, the uploaded file is first obtained with the c.FormFile() method, which takes the name of the file field in the form and returns a *multipart.FileHeader (plus an error). The file is then saved with the c.SaveUploadedFile() method, which takes that *multipart.FileHeader and the destination path.
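
A minimal upload sketch; the route, field name, and destination path are made up for illustration:

package main

import "github.com/gin-gonic/gin"

func main() {
    r := gin.Default()

    r.POST("/upload", func(c *gin.Context) {
        file, err := c.FormFile("file") // "file" is the form field name
        if err != nil {
            c.String(400, "upload error: %v", err)
            return
        }
        // Save under the original filename; the destination is illustrative.
        if err := c.SaveUploadedFile(file, file.Filename); err != nil {
            c.String(500, "save error: %v", err)
            return
        }
        c.String(200, "uploaded %s", file.Filename)
    })

    r.Run(":8080")
}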

Microservices

What do you know about microservices?

Microservices, also known as the microservice architecture, is an architectural style that structures an application as a collection of small autonomous services modeled around a business domain.
In layman's terms, think of how bees build a honeycomb by aligning hexagonal wax cells. They start with small sections built from various materials and gradually assemble them into a large hive. The cells form patterns that create a strong structure holding a specific part of the honeycomb together.
Each cell is independent of the other cells yet also connected to them: damage to one cell does not damage the others, so the bees can rebuild it without affecting the hive as a whole.

Talk about the advantages of microservice architecture


What are the characteristics of microservices?

⚫ Decoupling — services within the system are largely separated, so the application as a whole can be easily built, changed, and scaled.
⚫ Componentization — microservices are treated as independent components that can easily be replaced and upgraded.
⚫ Business capabilities — each microservice is very simple and focused on a single capability.
⚫ Autonomy — developers and teams can work independently of each other, which increases speed.
⚫ Continuous delivery — frequent software releases are possible through automated building, testing, and approval.
⚫ Accountability — microservices do not treat the application as a project; teams treat the services as products they are responsible for.
⚫ Decentralized governance — the focus is on using the right tool for the job. There is no standardized technology model; developers are free to choose the most useful tools for their problems.
⚫ Agility — microservices support agile development; new features can be developed quickly and discarded again.

What are the best practices for designing microservices?


How does a microservices architecture work?

Microservices architecture has the following components:
⚫ Client — different users sending requests from different devices.
⚫ Identity Provider — verifies the identity of users or clients and issues security tokens.
⚫ API Gateway — handles client requests.
⚫ Static Content — holds all the content of the system.
⚫ Management — balances services across nodes and identifies failures.
⚫ Service Discovery — a guide for finding the communication paths between microservices.
⚫ Network — a distributed network of proxy servers and their data centers.
⚫ Remote Services — enable remote access to information residing on a network of IT equipment.


Container technology


Redis


MySQL


Linux


Cache


Network and operating systems


Message queue


Distributed



Origin blog.csdn.net/m0_73728511/article/details/132720666