8 hand-picked high-frequency Elasticsearch (ES) interview questions and answers you'll wish you had read sooner.

Stop rote-memorizing canned answers: answer interview questions with concrete scenarios!

Foreword

When answering interview questions, we can't just recite canned answers. We should tie them to application scenarios, ideally to projects we have actually worked on, when talking with the interviewer.

Although these scenario questions don't require us to hand-write code on the spot, we still need to be familiar with the solution ideas and the key methods.

This article not only gives common interview questions and answers, but also covers the application scenarios behind these knowledge points, outlines approaches to solving the problems, and provides key code built around those approaches. The code snippets can be copied and run locally, and all of them carry clear comments. You're welcome to try them out; don't just memorize canned answers.

1. Fuzzy search

How to perform Fuzzy Search in Elasticsearch?

Answer:

In Elasticsearch, you can use fuzzy search to find documents similar to a given term. Fuzzy search is an approximate matching method based on edit distance, so it can handle misspellings and near-identical words; for example, a query for "ipone" can still match "iphone", which is one edit away.

In a commercial project such as an e-commerce platform, fuzzy search can improve the product search feature. For example, when a user enters a keyword, fuzzy search can find products similar to that keyword and return more comprehensive results.

Code example:

Here is a simple code example demonstrating how to perform a fuzzy search in Elasticsearch:

package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/esapi"
)

func main() {
	// Create the Elasticsearch client
	cfg := elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
	}
	client, err := elasticsearch.NewClient(cfg)
	if err != nil {
		log.Fatalf("Error creating the client: %s", err)
	}

	// Build the fuzzy search request
	var (
		buf    bytes.Buffer
		res    *esapi.Response
		search = map[string]interface{}{
			"query": map[string]interface{}{
				"fuzzy": map[string]interface{}{
					"title": map[string]interface{}{
						"value":     "iphone",
						"fuzziness": "AUTO",
					},
				},
			},
		}
	)

	// Encode the search request as JSON
	err = json.NewEncoder(&buf).Encode(search)
	if err != nil {
		log.Fatalf("Error encoding the search query: %s", err)
	}

	// Send the fuzzy search request
	res, err = client.Search(
		client.Search.WithContext(context.Background()),
		client.Search.WithIndex("products"),
		client.Search.WithBody(&buf),
		client.Search.WithTrackTotalHits(true),
		client.Search.WithPretty(),
	)
	if err != nil {
		log.Fatalf("Error sending the search request: %s", err)
	}
	defer res.Body.Close()

	// Parse the search response
	var result map[string]interface{}
	if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
		log.Fatalf("Error parsing the search response: %s", err)
	}

	// Process the search results
	// ...

	fmt.Println(result)
}

Through the above code example, we can see how to use the Elasticsearch client to construct a fuzzy search request and process the returned search results.

This example shows how fuzzy search can be used in a commercial project to improve product search functionality and provide a more comprehensive search experience.

2. Inverted index

What is an inverted index? What does it do in Elasticsearch?

Answer:

An inverted index is a data structure used to speed up text search. It maps each word to the list of documents that contain it.

In a commercial project such as a news publishing platform, Elasticsearch's inverted index maps each keyword to the list of news articles containing it, enabling fast keyword search.

For example:

The following is sample Go code for a simple inverted index:

package main

import (
	"fmt"
	"strings"
)

// InvertedIndex maps each word to the list of document IDs that contain it.
type InvertedIndex map[string][]int

// BuildInvertedIndex builds an inverted index over the given documents.
func BuildInvertedIndex(docs []string) InvertedIndex {
	index := make(InvertedIndex)

	for docID, doc := range docs {
		words := strings.Fields(doc)
		for _, word := range words {
			word = strings.ToLower(word)
			ids := index[word]
			// Documents are processed in order, so checking the last
			// entry is enough to avoid duplicate IDs when a word
			// appears more than once in the same document.
			if len(ids) == 0 || ids[len(ids)-1] != docID {
				index[word] = append(ids, docID)
			}
		}
	}

	return index
}

func main() {
	docs := []string{
		"Hello world",
		"Hello Go",
		"Go programming language",
		"World of Go",
	}

	index := BuildInvertedIndex(docs)

	// Search example
	query := "Go"
	query = strings.ToLower(query)
	if postings, ok := index[query]; ok {
		fmt.Printf("Documents containing '%s':\n", query)
		for _, docID := range postings {
			fmt.Println(docs[docID])
		}
	} else {
		fmt.Printf("No documents containing '%s' found.\n", query)
	}
}

In the above code, we define an InvertedIndex type: a map from each word to the list of document IDs that contain it.

The BuildInvertedIndex function builds the index by looping over every document and adding each of its words to the map (skipping duplicate IDs when a word repeats within one document). We can then use the index to look up all documents that contain a specific word.

3. Aggregation operation

How to perform complex aggregation operations in Elasticsearch?

Answer:

In Elasticsearch, aggregation operations can be used to perform statistics and analysis on data.

For example, in a commercial project of a social media platform, the aggregation function of Elasticsearch can be used for user behavior analysis. Through the aggregation operation, you can calculate the user's activity, the number of likes and comments, and the topics that the user cares about. These statistical data can help the platform understand user behavior patterns, optimize recommendation algorithms and display personalized content.

Code example:

The following is sample Go code for a more complex aggregation, used for user behavior analysis in a social media platform project:

package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/esapi"
)

// UserStats holds the aggregated metrics for one user.
type UserStats struct {
	Username       string `json:"username"`
	TotalLikes     int    `json:"total_likes"`
	TotalComments  int    `json:"total_comments"`
	TotalFollowers int    `json:"total_followers"`
}

func main() {
	// Create the Elasticsearch client
	cfg := elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
	}
	client, err := elasticsearch.NewClient(cfg)
	if err != nil {
		log.Fatalf("Error creating the client: %s", err)
	}

	// Build the aggregation request: bucket by username, then sum
	// likes, comments, and followers within each bucket.
	var (
		buf    bytes.Buffer
		res    *esapi.Response
		search = map[string]interface{}{
			"size": 0,
			"aggs": map[string]interface{}{
				"user_stats": map[string]interface{}{
					"terms": map[string]interface{}{
						"field": "username.keyword",
						"size":  10,
					},
					"aggs": map[string]interface{}{
						"total_likes": map[string]interface{}{
							"sum": map[string]interface{}{"field": "likes"},
						},
						"total_comments": map[string]interface{}{
							"sum": map[string]interface{}{"field": "comments"},
						},
						"total_followers": map[string]interface{}{
							"sum": map[string]interface{}{"field": "followers"},
						},
					},
				},
			},
		}
	)

	// Encode the aggregation request as JSON
	if err := json.NewEncoder(&buf).Encode(search); err != nil {
		log.Fatalf("Error encoding the search query: %s", err)
	}

	// Send the aggregation request
	res, err = client.Search(
		client.Search.WithContext(context.Background()),
		client.Search.WithIndex("social_media"),
		client.Search.WithBody(&buf),
		client.Search.WithTrackTotalHits(true),
		client.Search.WithPretty(),
	)
	if err != nil {
		log.Fatalf("Error sending the search request: %s", err)
	}
	defer res.Body.Close()

	// Parse the aggregation response
	var result map[string]interface{}
	if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
		log.Fatalf("Error parsing the search response: %s", err)
	}

	// Extract the aggregation buckets
	aggregations := result["aggregations"].(map[string]interface{})
	userStatsBucket := aggregations["user_stats"].(map[string]interface{})["buckets"].([]interface{})

	userStats := make([]UserStats, len(userStatsBucket))
	for i, bucket := range userStatsBucket {
		b := bucket.(map[string]interface{})
		userStats[i] = UserStats{
			Username:       b["key"].(string),
			TotalLikes:     int(b["total_likes"].(map[string]interface{})["value"].(float64)),
			TotalComments:  int(b["total_comments"].(map[string]interface{})["value"].(float64)),
			TotalFollowers: int(b["total_followers"].(map[string]interface{})["value"].(float64)),
		}
	}

	// Print the per-user statistics
	for _, stats := range userStats {
		fmt.Printf("Username: %s\n", stats.Username)
		fmt.Printf("Total Likes: %d\n", stats.TotalLikes)
		fmt.Printf("Total Comments: %d\n", stats.TotalComments)
		fmt.Printf("Total Followers: %d\n", stats.TotalFollowers)
		fmt.Println("-----------------------")
	}
}

In the above code, we use an Elasticsearch aggregation to compute each user's total likes, comments, and followers. By building the aggregation request and parsing the returned buckets, we obtain per-user behavior statistics.

4. Data redundancy and high availability

How to deal with data redundancy and high availability in Elasticsearch?

Answer:

In commercial projects, such as online e-commerce platforms, Elasticsearch's data redundancy and high availability mechanisms can be used to ensure the safety and reliability of order data.

Redundant storage and high availability of data can be achieved by configuring an appropriate number of replicas. When the primary shard is unavailable, the replica can take over the service, ensuring continuous access and processing of order data.

Code example:

package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/esapi"
)

func main() {
	// Create the Elasticsearch client
	cfg := elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
	}
	client, err := elasticsearch.NewClient(cfg)
	if err != nil {
		log.Fatalf("Error creating the client: %s", err)
	}

	// Set the number of replicas for the index. Note that esapi request
	// bodies are io.Readers, so the settings are passed as a JSON string.
	req := esapi.IndicesPutSettingsRequest{
		Index: []string{"orders_index"},
		Body: strings.NewReader(`{
			"index": {
				"number_of_replicas": 2
			}
		}`),
	}

	// Send the request
	res, err := req.Do(context.Background(), client)
	if err != nil {
		log.Fatalf("Error setting the number of replicas: %s", err)
	}
	defer res.Body.Close()

	// Check the response status
	if res.IsError() {
		log.Fatalf("Error setting the number of replicas: %s", res.Status())
	}

	// Report success
	fmt.Println("Number of replicas set successfully for orders_index")
}

In the above code, we use Elasticsearch's Indices Put Settings API to set the number of replicas for the index. In the example, the orders index is named orders_index and the replica count is set to 2, so Elasticsearch keeps two replica copies of every primary shard, giving redundant storage and high availability.
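
To verify that the replicas were actually allocated, you can poll cluster health. Here is a minimal sketch under the same assumptions (a local cluster and the orders_index from above); note that with number_of_replicas set to 2, the index only turns green once the cluster has at least three data nodes:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/esapi"
)

func main() {
	client, err := elasticsearch.NewClient(elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
	})
	if err != nil {
		log.Fatalf("Error creating the client: %s", err)
	}

	// Wait until all primary and replica shards of orders_index are
	// allocated (status "green").
	req := esapi.ClusterHealthRequest{
		Index:         []string{"orders_index"},
		WaitForStatus: "green",
	}
	res, err := req.Do(context.Background(), client)
	if err != nil {
		log.Fatalf("Error checking cluster health: %s", err)
	}
	defer res.Body.Close()

	fmt.Println("Cluster health response:", res.Status())
}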

5. Performance optimization

How to optimize the performance of Elasticsearch?

Answer:

  • Hardware optimization: provision appropriate hardware resources, such as more memory and faster disk I/O, to raise the overall performance of Elasticsearch.
  • Shard and replica optimization: adjust the number and placement of shards and replicas according to data volume and query load, to balance data distribution and query pressure.
  • Index and mapping optimization: design sensible indexes and mappings, choosing appropriate field types, analyzers, and tokenizers to improve search and aggregation performance.
  • Query and filter optimization: use appropriate queries and filters, and avoid overusing full-text search and heavy aggregations, to improve query performance.
  • Cache and warm-up optimization: use a caching mechanism, such as Elasticsearch's request cache or an external cache, to store the results of frequent queries and avoid repeated computation (see the request-cache sketch after this list). A warm-up step can load hot data at startup and prepare results for popular queries in advance.
  • Index lifecycle management: periodically delete expired data and indexes based on how the data is used, to reduce storage and query load.
  • Monitoring and tuning: use Elasticsearch's monitoring tools and metrics to watch cluster health, node load, response times, and resource utilization.
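
As a concrete illustration of the caching point above, here is a minimal sketch that opts a single size-0 aggregation into Elasticsearch's shard request cache via the go-elasticsearch client (the index name my_index and the status.keyword field are assumptions):

package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/elastic/go-elasticsearch/v8"
)

func main() {
	client, err := elasticsearch.NewClient(elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
	})
	if err != nil {
		log.Fatalf("Error creating the client: %s", err)
	}

	// Run a size-0 aggregation with the shard request cache enabled, so
	// identical repeated requests can be answered from cache until the
	// index refreshes.
	res, err := client.Search(
		client.Search.WithContext(context.Background()),
		client.Search.WithIndex("my_index"),
		client.Search.WithBody(strings.NewReader(
			`{"size": 0, "aggs": {"by_status": {"terms": {"field": "status.keyword"}}}}`)),
		client.Search.WithRequestCache(true),
	)
	if err != nil {
		log.Fatalf("Error sending the cached search request: %s", err)
	}
	defer res.Body.Close()

	fmt.Println("Cached aggregation status:", res.Status())
}

By default the request cache only stores size-0 requests, which is why the sketch uses a pure aggregation.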

For example, here is how to reduce the refresh frequency of an index:

package main

import (
	"context"
	"fmt"
	"log"
	"strings"
	"time"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/esapi"
)

func main() {
	// Create the Elasticsearch client
	cfg := elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
	}
	client, err := elasticsearch.NewClient(cfg)
	if err != nil {
		log.Fatalf("Error creating the client: %s", err)
	}

	// Configure the index refresh interval
	req := esapi.IndicesPutSettingsRequest{
		Index: []string{"my_index"},
		Body: strings.NewReader(`{
			"index": {
				"refresh_interval": "30s"
			}
		}`),
	}

	// Send the request
	res, err := req.Do(context.Background(), client)
	if err != nil {
		log.Fatalf("Error setting the refresh interval: %s", err)
	}
	defer res.Body.Close()

	// Check the response status
	if res.IsError() {
		log.Fatalf("Error setting the refresh interval: %s", res.Status())
	}

	fmt.Println("Refresh interval set successfully for my_index")

	// Wait a moment so the index has a chance to refresh
	time.Sleep(5 * time.Second)

	// Build a search request
	reqSearch := esapi.SearchRequest{
		Index: []string{"my_index"},
		Body: strings.NewReader(`{
			"query": {
				"match": {
					"title": "example"
				}
			}
		}`),
	}

	// Send the search request
	resSearch, err := reqSearch.Do(context.Background(), client)
	if err != nil {
		log.Fatalf("Error sending the search request: %s", err)
	}
	defer resSearch.Body.Close()

	// Parse the search results
	// ...

	fmt.Println("Search request completed successfully")
}

In the above code, we use Elasticsearch's Indices Put Settings API to set the index's refresh interval. A longer refresh interval (for example, 30 seconds) reduces the frequency of refresh operations and improves indexing throughput, at the price of newly indexed documents becoming searchable slightly later. We then send a search request to confirm the index still answers queries.

6. Data Consistency

How to handle data consistency in Elasticsearch?

Answer:

In commercial projects such as online payment platforms, data consistency is critical. To handle consistency in Elasticsearch, the following approaches can be taken:

  • Coordinate transactions at the application level: Elasticsearch does not provide multi-document ACID transactions (only single-document writes are atomic). Operations that span multiple documents, such as updating an order status and a user's account balance together on a payment platform, must be coordinated by the application or by the primary data store.
  • Use optimistic concurrency control: in concurrent write scenarios, use optimistic concurrency to detect conflicting writes. For example, on a social media platform, when multiple users like the same article at the same time, optimistic concurrency control keeps the like count consistent.
  • Use version control: when updating documents, use Elasticsearch's built-in versioning (seq_no and primary_term) to detect concurrent write conflicts. For example, on a blogging platform, when multiple users edit the same article, versioning surfaces the conflict so it can be resolved instead of silently lost.
  • Use distributed locks: in a distributed environment, a distributed lock can serialize conflicting writes. For example, on an online reservation platform, when multiple users reserve the same resource at the same time, a lock keeps the reservations consistent (see the sketch after this list).
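
As an illustration of the distributed-lock point, Elasticsearch itself can act as a simple lock store: a write with op_type=create is atomic, so only one client can create a given lock document. The following is a minimal sketch under assumptions (the locks index, the booking-service owner value, and the room-42 resource ID are all illustrative); production systems more commonly use a dedicated coordination service, and releasing the lock means deleting the document:

package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/esapi"
)

// tryAcquireLock attempts to create a lock document. With op_type=create,
// Elasticsearch returns HTTP 409 if the document (and thus the lock)
// already exists.
func tryAcquireLock(client *elasticsearch.Client, resourceID string) (bool, error) {
	req := esapi.IndexRequest{
		Index:      "locks",
		DocumentID: resourceID,
		OpType:     "create",
		Body:       strings.NewReader(`{"owner": "booking-service"}`),
	}
	res, err := req.Do(context.Background(), client)
	if err != nil {
		return false, err
	}
	defer res.Body.Close()

	if res.StatusCode == 409 {
		return false, nil // another client holds the lock
	}
	if res.IsError() {
		return false, fmt.Errorf("unexpected response: %s", res.Status())
	}
	return true, nil
}

func main() {
	client, err := elasticsearch.NewClient(elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
	})
	if err != nil {
		log.Fatalf("Error creating the client: %s", err)
	}

	ok, err := tryAcquireLock(client, "room-42")
	if err != nil {
		log.Fatalf("Error acquiring the lock: %s", err)
	}
	fmt.Println("Lock acquired:", ok)
}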

For example:

Elasticsearch has no native transaction API, so the following Go example demonstrates the optimistic concurrency control approach from the list above: it reads a document's current seq_no and primary_term, then performs a conditional write that Elasticsearch rejects with a version conflict if another writer modified the document in between (the orders index and the order-1 document ID are illustrative):

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"strings"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/esapi"
)

func main() {
	// Create the Elasticsearch client
	client, err := elasticsearch.NewClient(elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
	})
	if err != nil {
		log.Fatalf("Error creating the client: %s", err)
	}

	// Step 1: read the document and record its current seq_no and
	// primary_term.
	getReq := esapi.GetRequest{Index: "orders", DocumentID: "order-1"}
	getRes, err := getReq.Do(context.Background(), client)
	if err != nil {
		log.Fatalf("Error getting the document: %s", err)
	}
	defer getRes.Body.Close()
	if getRes.IsError() {
		log.Fatalf("Error getting the document: %s", getRes.Status())
	}

	var doc struct {
		SeqNo       int `json:"_seq_no"`
		PrimaryTerm int `json:"_primary_term"`
	}
	if err := json.NewDecoder(getRes.Body).Decode(&doc); err != nil {
		log.Fatalf("Error parsing the get response: %s", err)
	}

	// Step 2: write the document back conditionally. If another writer
	// modified it after step 1, Elasticsearch rejects the write with
	// HTTP 409 instead of silently overwriting the newer version.
	idxReq := esapi.IndexRequest{
		Index:         "orders",
		DocumentID:    "order-1",
		Body:          strings.NewReader(`{"status": "paid"}`),
		IfSeqNo:       &doc.SeqNo,
		IfPrimaryTerm: &doc.PrimaryTerm,
	}
	idxRes, err := idxReq.Do(context.Background(), client)
	if err != nil {
		log.Fatalf("Error sending the update: %s", err)
	}
	defer idxRes.Body.Close()

	if idxRes.StatusCode == 409 {
		// Version conflict: reload the document and retry the update.
		fmt.Println("Write conflict detected, retry needed")
		return
	}
	if idxRes.IsError() {
		log.Fatalf("Error updating the document: %s", idxRes.Status())
	}

	fmt.Println("Document updated with optimistic concurrency control")
}

In the above code, we first read the document to capture its current seq_no and primary_term, and then issue a conditional write. If another client modified the document in between, Elasticsearch rejects the write with a version conflict (HTTP 409) and the application can reload the document and retry. This is Elasticsearch's built-in optimistic concurrency control; anything resembling a multi-document transaction has to be coordinated on top of it at the application level.

7. Data Security

How to protect data security in Elasticsearch?

Answer:

Protecting data security in Elasticsearch is one of the important tasks in commercial projects. Here are some ways to keep your data safe:

  • Access control: use Elasticsearch's security features, such as role-based access control (RBAC) and role mappings, to restrict access to sensitive data. For example, in a healthcare application, you can arrange that only authorized doctors may access patient medical record data.
  • Data encryption: use SSL/TLS to encrypt communication and protect data in transit. For example, in a financial application, SSL/TLS can encrypt user transaction data to protect user privacy.
  • Data backup and recovery: back up data regularly and store the backups safely. On an online storage platform, for instance, users' file data can be backed up on a schedule, with measures to ensure the integrity and reliability of the backups (see the snapshot sketch after this list).
  • Audit logging: record and monitor access to and operations on Elasticsearch so that potential security threats can be detected and handled in time. For example, on an enterprise collaboration platform, logins, file access, and edit operations can be logged for auditing and tracing.
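
For the backup point above, here is a minimal sketch that registers a filesystem snapshot repository and takes a snapshot (the repository name backup_repo, the path /mnt/es_backups, and the index name user_files_index are assumptions; the path must be whitelisted under path.repo in elasticsearch.yml beforehand):

package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/esapi"
)

func main() {
	client, err := elasticsearch.NewClient(elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
	})
	if err != nil {
		log.Fatalf("Error creating the client: %s", err)
	}

	// Register a filesystem snapshot repository.
	repoReq := esapi.SnapshotCreateRepositoryRequest{
		Repository: "backup_repo",
		Body: strings.NewReader(`{
			"type": "fs",
			"settings": {"location": "/mnt/es_backups"}
		}`),
	}
	repoRes, err := repoReq.Do(context.Background(), client)
	if err != nil {
		log.Fatalf("Error creating the repository: %s", err)
	}
	defer repoRes.Body.Close()
	if repoRes.IsError() {
		log.Fatalf("Error creating the repository: %s", repoRes.Status())
	}

	// Take a snapshot of the user file index and wait for it to finish.
	wait := true
	snapReq := esapi.SnapshotCreateRequest{
		Repository:        "backup_repo",
		Snapshot:          "daily-backup-1",
		Body:              strings.NewReader(`{"indices": "user_files_index"}`),
		WaitForCompletion: &wait,
	}
	snapRes, err := snapReq.Do(context.Background(), client)
	if err != nil {
		log.Fatalf("Error creating the snapshot: %s", err)
	}
	defer snapRes.Body.Close()
	if snapRes.IsError() {
		log.Fatalf("Error creating the snapshot: %s", snapRes.Status())
	}

	fmt.Println("Snapshot created successfully")
}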

For example:

The following code example shows how to use Go and the Elasticsearch API to configure role-based access control and index settings (transport encryption itself is enabled in the cluster configuration, not through an index API):

package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/esapi"
)

func main() {
	// Create the Elasticsearch client. For transport encryption, the
	// cluster itself must be configured with TLS (xpack.security.*
	// settings in elasticsearch.yml); the client then connects to an
	// https:// address, optionally with a CA certificate.
	cfg := elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
		Username:  "admin",
		Password:  "password",
	}
	client, err := elasticsearch.NewClient(cfg)
	if err != nil {
		log.Fatalf("Error creating the client: %s", err)
	}

	// Create a role mapping that grants doctor_role to doctor_user.
	reqACL := esapi.SecurityPutRoleMappingRequest{
		Name: "doctor_role_mapping",
		Body: strings.NewReader(`{
			"roles": ["doctor_role"],
			"enabled": true,
			"rules": {"field": {"username": "doctor_user"}}
		}`),
	}

	// Send the role mapping request
	resACL, err := reqACL.Do(context.Background(), client)
	if err != nil {
		log.Fatalf("Error setting the role mapping: %s", err)
	}
	defer resACL.Body.Close()

	// Check the response status
	if resACL.IsError() {
		log.Fatalf("Error setting the role mapping: %s", resACL.Status())
	}

	fmt.Println("Role mapping set successfully")

	// Adjust dynamic settings of the patient data index. Static settings
	// such as number_of_shards or codec can only be set at index creation
	// (or on a closed index), so only dynamic settings are updated here.
	reqSettings := esapi.IndicesPutSettingsRequest{
		Index: []string{"patient_data_index"},
		Body: strings.NewReader(`{
			"index": {
				"number_of_replicas": 1,
				"refresh_interval": "1s"
			}
		}`),
	}

	// Send the settings request
	resSettings, err := reqSettings.Do(context.Background(), client)
	if err != nil {
		log.Fatalf("Error updating the index settings: %s", err)
	}
	defer resSettings.Body.Close()

	// Check the response status
	if resSettings.IsError() {
		log.Fatalf("Error updating the index settings: %s", resSettings.Status())
	}

	fmt.Println("Index settings updated successfully")
}

In the above code, we use Elasticsearch's Security API to create a role mapping named doctor_role_mapping, which grants the doctor_role role to the doctor_user user, and then update the dynamic settings of the patient_data_index index. Note that SSL/TLS encryption is configured on the cluster itself (in elasticsearch.yml), not via an index-level API; once the cluster requires it, the client simply connects over an https:// address.

8. Data synchronization and replication

How to handle data synchronization and replication in Elasticsearch?

Answer:

In commercial projects, such as a multi-regional e-commerce platform, data synchronization and replication are crucial. To handle data synchronization and replication in Elasticsearch, the following approaches can be taken:

  • Use Elasticsearch's replica mechanism: by configuring an appropriate number of replicas, data is copied to different nodes, providing redundant storage and high availability. When a primary shard becomes unavailable, a replica can take over, ensuring continuous access to and processing of the data.
  • Use Elasticsearch's cross-cluster replication (CCR) feature: by setting up cross-cluster replication, data can be replicated to a different cluster, enabling synchronization and replication across regions (note that CCR is a licensed X-Pack feature). For example, a multi-region e-commerce platform can replicate data to clusters in different geographic locations, so the data is backed up in every region. This improves availability and disaster recovery, and keeps access fast for users in each region.

Code example:

The following is a simple code sample showing how to use Elasticsearch's cross-cluster replication (CCR) feature; the remote (leader) cluster must already be registered on the follower cluster, as shown in the registration sketch after this example:

package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/esapi"
)

func main() {
	// Create a client for the target (follower) cluster. The source
	// (leader) cluster must already be registered as a remote cluster
	// in the target cluster's settings.
	targetCfg := elasticsearch.Config{
		Addresses: []string{"http://target-cluster:9200"},
	}
	targetClient, err := elasticsearch.NewClient(targetCfg)
	if err != nil {
		log.Fatalf("Error creating the target client: %s", err)
	}

	// Body of the follow request: which remote cluster to follow and
	// which index on it is the leader.
	reqBody := `{
		"remote_cluster": "source-cluster",
		"leader_index": "index1"
	}`

	// Create a follower index that replicates the leader index.
	req := esapi.CCRFollowRequest{
		Index: "replica-index1",
		Body:  strings.NewReader(reqBody),
	}
	res, err := req.Do(context.Background(), targetClient)
	if err != nil {
		log.Fatalf("Error sending the follow index request: %s", err)
	}
	defer res.Body.Close()

	// Check the response status
	if res.IsError() {
		log.Fatalf("Follow index request failed: %s", res.Status())
	}

	fmt.Println("Follow index request successful")
}

Through the above code examples, we can see how to use Elasticsearch's cross-cluster replication function to achieve data synchronization and replication. In commercial projects, this method can be used in multi-regional e-commerce platforms to ensure that data is backed up on nodes in different regions, improving data availability and disaster recovery capabilities.

Summary

I believe that after reading these interview questions, you have a better understanding of what I said at the beginning:

When answering interview questions, we can't just recite canned answers. We should tie them to application scenarios, ideally to projects we have actually worked on, when talking with the interviewer.

Although these scenario questions don't require us to hand-write code on the spot, we still need to be familiar with the solution ideas and the key methods.

This article not only gives common interview questions and answers, but also covers the application scenarios behind these knowledge points, outlines approaches to solving the problems, and provides key code built around those approaches. The code snippets can be copied and run locally, and all of them carry clear comments. You're welcome to try them out; don't just memorize canned answers.

Finally, compiling this was not easy, and original writing is even harder. Your likes, comments, and shares are the greatest support for me!

Search for 王中阳Go anywhere on the web to find more interview question material.


Source: blog.csdn.net/w425772719/article/details/131393717