Spark Core: common big data interview questions

1. Spark deployment mode

1. Local mode

  • Spark does not have to run in a Hadoop cluster; it can run locally with multiple threads, which is mainly convenient for debugging. In local mode the Spark application runs directly on the local machine in a multi-threaded way, and there are three variants (see the sketch after this list):
  • 1) local: start only one executor
  • 2) local[k]: start k executors
  • 3) local[*]: start as many executors as there are CPU cores
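A minimal sketch of the three local master settings (the app name and the summation job are illustrative):

import org.apache.spark.sql.SparkSession

object LocalModeDemo {
  def main(args: Array[String]): Unit = {
    // Pick one of the three variants:
    //   "local"    -> one worker thread
    //   "local[4]" -> 4 worker threads
    //   "local[*]" -> one thread per CPU core
    val spark = SparkSession.builder()
      .appName("local-mode-demo")
      .master("local[*]")
      .getOrCreate()

    println(spark.sparkContext.parallelize(1 to 100).sum()) // 5050.0
    spark.stop()
  }
}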

2. Standalone mode

  • Distributed deployment on a cluster with complete built-in services; resource management and task monitoring are handled by Spark itself. This mode is also the foundation of the other modes

3. Spark on YARN mode

  • Distributed deployment on a cluster where resource management and task monitoring are handed over to YARN; the Spark client connects to YARN directly, without building a separate Spark cluster. There are two modes, yarn-client and yarn-cluster, and the main difference is where the Driver program runs:
  • 1) cluster mode is suitable for production; the driver runs on a node inside the cluster and has fault tolerance
  • 2) client mode is suitable for debugging; the driver runs on the client

2. Driver function

  • When a Spark job runs, it includes a Driver process, which is the main process of the job. It contains the main function and an instance of SparkContext, the entry point of the program.
  • Function: responsible for applying for resources from the cluster and registering information with the Master; responsible for job scheduling and job parsing, generating stages and dispatching tasks to Executors. It contains the DAGScheduler and the TaskScheduler

3. Hadoop and Spark are both parallel computing frameworks. What are the similarities and differences between them?

  • Both use the MapReduce model for parallel computation. A unit of work in Hadoop is called a job, and a job is divided into map tasks and reduce tasks. Each task runs in its own process, and when the task ends, its process ends with it
  • A task submitted by a Spark user is called an application. An application corresponds to one SparkContext and contains multiple jobs: every time an action is triggered, a job is generated (see the sketch after this list). These jobs can execute in parallel or serially. Each job has multiple stages, which the DAGScheduler carves out of the job based on the shuffle dependencies between RDDs. Each stage contains multiple tasks, which form a TaskSet that the TaskScheduler distributes to the executors for execution. The lifecycle of an executor is the same as the application's: it stays alive even when no job is running, so tasks can start quickly and compute on in-memory data. Spark's iterative computations run in memory, the API provides a large number of RDD operations such as join and groupBy, and fault tolerance is achieved through the DAG graph
  • A Hadoop job only has map and reduce operations, so its expressiveness is limited; during the MR process HDFS is repeatedly read and written, causing a large amount of IO, and the relationships between multiple jobs must be managed by the developer
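A small sketch of the point above (all names illustrative): one application holds one SparkContext, and every action triggers a separate job:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("jobs-demo").master("local[*]").getOrCreate()
val sc = spark.sparkContext

val rdd = sc.parallelize(1 to 1000).map(_ * 2) // transformation: no job yet
val total = rdd.sum()   // action 1 -> job 1
val count = rdd.count() // action 2 -> job 2
println(s"total=$total, count=$count")
spark.stop()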

4. RDD

  • RDD (Resilient Distributed Dataset) is the most basic data abstraction in Spark. It represents an immutable, partitionable collection of elements that can be computed in parallel. An RDD has five features (illustrated in the sketch after this list):
  • 1) A list of partitions: the data in an RDD is stored across a list of partitions
  • 2) A function for computing each split: the compute function is applied to each partition
  • 3) A list of dependencies on other RDDs: an RDD may depend on several other RDDs. This is very important: the fault-tolerance mechanism of RDDs is based on this feature
  • 4) Optionally, a Partitioner for key-value RDDs (e.g. to say that the RDD is hash-partitioned): only key-value RDDs have this feature; it determines where the data comes from and where the processed data goes
  • 5) Optionally, a list of preferred locations to compute each split on (e.g. block locations for an HDFS file): data locality, i.e. the optimal location of the data
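The five features can be inspected directly on an RDD; a hedged sketch (the sample data is illustrative):

import org.apache.spark.HashPartitioner
import org.apache.spark.sql.SparkSession

val sc = SparkSession.builder().appName("rdd-features").master("local[*]").getOrCreate().sparkContext
val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)), numSlices = 4)

println(pairs.partitions.length)                       // 1) the list of partitions
val doubled = pairs.mapValues(_ * 2)                   // 2) a function computed per partition
println(doubled.dependencies)                          // 3) dependencies on parent RDDs
val hashed = pairs.partitionBy(new HashPartitioner(4))
println(hashed.partitioner)                            // 4) optional Partitioner for k/v RDDs
println(pairs.preferredLocations(pairs.partitions(0))) // 5) preferred locations (empty for a local collection)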

5. Briefly describe the concepts of wide and narrow dependencies. What are the dependencies of groupByKey, reduceByKey, map, filter, and union?

1. Narrow dependency

  • Each partition of the parent RDD is used by at most one partition of a child RDD: either one parent RDD partition corresponds to one child RDD partition, or the partitions of two (co-partitioned) parent RDDs correspond to one child RDD partition. map, filter, and union belong to the first category; a join on co-partitioned input belongs to the second

2. Wide dependency

  • A partition of the child RDD depends on all partitions of the parent RDD; this is caused by a shuffle operation

Dependency types of common operators

  • Transformations on an RDD such as map, filter, and union are generally narrow dependencies
  • Wide dependencies generally come from operations such as groupByKey and reduceByKey, which repartition (shuffle) the data inside the RDD's partitions
  • A join can be either a wide or a narrow dependency: if the RDDs being joined are co-partitioned (share the same partitioner), it is a narrow dependency; otherwise it is a wide dependency (see the sketch below)
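This can be verified on the dependency objects themselves; a small sketch (the data is illustrative):

import org.apache.spark.sql.SparkSession

val sc = SparkSession.builder().appName("deps-demo").master("local[*]").getOrCreate().sparkContext
val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

val mapped  = pairs.mapValues(_ + 1)   // narrow: OneToOneDependency
val reduced = pairs.reduceByKey(_ + _) // wide: ShuffleDependency

println(mapped.dependencies)   // List(org.apache.spark.OneToOneDependency@...)
println(reduced.dependencies)  // List(org.apache.spark.ShuffleDependency@...)
println(reduced.toDebugString) // the lineage shows a new stage at the shuffle boundary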

6. How does Spark prevent memory overflow?

1. Memory overflow on the driver side

  • You can increase the driver memory parameter: spark.driver.memory (default 1g)
  • This parameter sets the driver's memory. In a Spark program, the SparkContext and the DAGScheduler run on the driver side, and the stage splitting of RDDs is also performed on the driver. If the user's program has too many steps and is split into too many stages, this information consumes the driver's memory, and the driver memory needs to be increased

2. The map process produces a large number of objects and causes memory overflow

  • This kind of overflow is caused by a single map producing a large number of objects, for example: rdd.map(x => for (i <- 1 to 10000) yield i.toString). Here every element of the RDD produces 10,000 objects, which can easily cause memory overflow. For this kind of problem, without increasing the memory, you can reduce the size of each task so that even when a task generates many objects they still fit into the executor's memory. Concretely, call repartition before the map that generates the objects, splitting the data into smaller partitions, e.g.: rdd.repartition(10000).map(x => for (i <- 1 to 10000) yield i.toString)
  • Note that rdd.coalesce cannot be used here: without a shuffle, coalesce can only decrease the number of partitions, not increase it (see the sketch below)
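A hedged sketch of the workaround (the partition counts are illustrative):

import org.apache.spark.sql.SparkSession

val sc = SparkSession.builder().appName("oom-demo").master("local[*]").getOrCreate().sparkContext
val rdd = sc.parallelize(1 to 1000000)

// Repartition into many small partitions *before* the map that inflates each
// record, so each task materializes fewer generated objects at a time
val inflated = rdd
  .repartition(10000)
  .map(x => for (i <- 1 to 10000) yield i.toString)

// rdd.coalesce(10000) would not help: without shuffle, coalesce can only
// decrease the partition count, never increase it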

3. Data skew causes memory overflow

  • Besides memory overflow, data skew can also cause performance problems. The solution is similar to the above: call repartition to repartition the data

4. Memory overflow after shuffle

  • A memory overflow in shuffle is essentially caused by a single shuffle output being too large. In Spark, operations such as join and reduceByKey involve a shuffle, and a shuffle requires a partitioner. Most shuffle operations in Spark use the HashPartitioner by default, and its default partition count is the maximum number of partitions among the parent RDDs. That count is controlled by spark.default.parallelism (use spark.sql.shuffle.partitions in Spark SQL), but it is only effective for the HashPartitioner. If you use another Partitioner, or one you implemented yourself, this parameter cannot control the shuffle concurrency; for an overflow caused by such a partitioner you need to increase the number of partitions in the partitioner code itself (see the sketch below)
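A sketch of the two ways to raise shuffle parallelism (the values are illustrative):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("shuffle-parallelism")
  .master("local[*]")
  .config("spark.default.parallelism", "200")    // default partition count for RDD shuffles (HashPartitioner)
  .config("spark.sql.shuffle.partitions", "200") // the Spark SQL equivalent
  .getOrCreate()
val sc = spark.sparkContext

val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
// The partition count can also be passed per operation:
val reduced = pairs.reduceByKey(_ + _, numPartitions = 400)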

5. Uneven resource allocation in standalone mode causes memory overflow

  • In standalone mode, if --total-executor-cores and --executor-memory are configured but --executor-cores is not, each executor may end up with the same amount of memory but a different number of cores. An executor with more cores can run more tasks at the same time in the same memory, which easily leads to memory overflow. The fix is to also configure --executor-cores or the spark.executor.cores parameter, so that executor resources are distributed evenly

6. Use rdd.persist(StorageLevel.MEMORY_AND_DISK_SER) instead of rdd.cache()

  • rdd.cache() and rdd.persist(StorageLevel.MEMORY_ONLY) are equivalent: data cached with rdd.cache() is dropped when memory is insufficient and recomputed when used again, whereas rdd.persist(StorageLevel.MEMORY_AND_DISK_SER) spills to disk when memory is insufficient, avoiding recomputation at the cost of some IO time (see the sketch below)
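A minimal sketch of the difference (the data is illustrative):

import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

val sc = SparkSession.builder().appName("persist-demo").master("local[*]").getOrCreate().sparkContext
val rdd = sc.parallelize(1 to 1000000).map(_ * 2)

rdd.cache()                                   // equivalent to persist(StorageLevel.MEMORY_ONLY)
rdd.unpersist()                               // drop the cached copy
rdd.persist(StorageLevel.MEMORY_AND_DISK_SER) // spill serialized blocks to disk instead of dropping them
println(rdd.count())                          // the first action materializes and persists the RDD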

7. The difference and division of job, stage, and task

  • job: a parallel computation composed of multiple tasks; a job is generated whenever an action is executed on an RDD
  • stage: each job is split into smaller groups of tasks called stages; stages depend on each other and are executed in order
  • task: a unit of work sent to an executor, the execution unit of a stage. Generally an RDD has as many tasks as it has partitions, because each task processes the data of exactly one partition

8. Spark job submission parameters

  • executor-cores ------ the number of cores used by each executor; default 1, officially recommended 2-5
  • num-executors ------ the number of executors to start; default 2
  • executor-memory ------ executor memory size; default 1G
  • driver-cores ------ the number of cores used by the driver; default 1
  • driver-memory ------ driver memory size; default 512M
# The following is an example of submitting a job:
spark-submit \
--master local[5] \
--driver-cores 2 \
--driver-memory 8g \
--executor-cores 4 \
--num-executors 10 \
--executor-memory 8g \
--name "spark job name" \
--class PackageName.ClassName \
XXXX.jar \
InputPath \
OutputPath

9. The difference between reduceByKey and groupByKey

  • reduceByKey merges the multiple values belonging to each key; crucially, it can perform the merge locally (map-side) first, and the merge function can be customized
  • groupByKey also operates on the multiple values of each key, but it only gathers them into a sequence; it cannot take a custom function, which has to be applied afterwards via map(func)
  • On large data sets reduceByKey performs better than groupByKey, because reduceByKey merges data before the shuffle, so less data is transferred (see the sketch below)
  • combineByKey is a lower-level operator that reduceByKey calls internally
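A word-count sketch of the difference (the data is illustrative):

import org.apache.spark.sql.SparkSession

val sc = SparkSession.builder().appName("agg-demo").master("local[*]").getOrCreate().sparkContext
val words = sc.parallelize(Seq("a", "b", "a", "c", "b", "a")).map((_, 1))

// reduceByKey combines values map-side before the shuffle, so less data is transferred
val counts1 = words.reduceByKey(_ + _)

// groupByKey ships every (key, value) pair across the network; the merge happens afterwards via map
val counts2 = words.groupByKey().map { case (k, vs) => (k, vs.sum) }

counts1.collect().foreach(println) // (a,3), (b,2), (c,1)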

10. The difference between foreach and map

  • What the two methods have in common: both traverse a collection and apply the given function to each element
  • The differences between the two:
  • 1) foreach has no return value (strictly speaking, it returns Unit); map returns a new collection. foreach traverses a collection, while map maps one collection onto another
  • 2) foreach executes eagerly as soon as it is called, while the logic in map is lazy and only runs when an action triggers it
  • 3) map is a transformation operator; foreach is an action operator (see the sketch below)
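A minimal sketch:

import org.apache.spark.sql.SparkSession

val sc = SparkSession.builder().appName("map-vs-foreach").master("local[*]").getOrCreate().sparkContext
val rdd = sc.parallelize(1 to 5)

val squared = rdd.map(x => x * x) // transformation: returns a new RDD, nothing runs yet
squared.foreach(println)          // action: returns Unit and triggers the job
                                  // (println runs on the executors, not the driver)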

11. The difference between map and mapPartitions

What they have in common: map and mapPartitions are both transformation operators
Differences:

  • 1. Essence
  • 1) map operates on each element of the RDD
  • 2) mapPartitions operates on the iterator of each partition of the RDD
  • 2. When the amount of data in each partition is not large
  • 1) map performs poorly: if a partition holds 10,000 records, the function must be invoked 10,000 times for that partition
  • 2) mapPartitions performs well: a task invokes the function only once, receiving all the partition's data at once
  • 3. When the amount of data in each partition is extremely large, e.g. a partition with 1 million records
  • 1) map can still execute normally
  • 2) mapPartitions receives an entire partition at once, so memory may run out, causing OOM (memory overflow) (see the sketch below)
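A sketch contrasting the two (the data and partition count are illustrative):

import org.apache.spark.sql.SparkSession

val sc = SparkSession.builder().appName("mapPartitions-demo").master("local[*]").getOrCreate().sparkContext
val rdd = sc.parallelize(1 to 10000, numSlices = 4)

val viaMap = rdd.map(_ * 2) // the function runs once per element: 10000 times

// the function runs once per partition and receives an iterator: 4 times in total
val viaPartitions = rdd.mapPartitions { iter =>
  // hypothetical expensive setup would go here, once per partition
  iter.map(_ * 2)
}
println(viaPartitions.count())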

12. The difference between foreach and foreachPartition

What they have in common: foreach and foreachPartition are both action operators
Differences:

  • 1) foreach processes one record of the RDD at a time
  • 2) foreachPartition processes the iterator of one partition of the RDD at a time (see the sketch below)
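A sketch (the per-partition connection is a hypothetical placeholder):

import org.apache.spark.sql.SparkSession

val sc = SparkSession.builder().appName("foreachPartition-demo").master("local[*]").getOrCreate().sparkContext
val rdd = sc.parallelize(1 to 100, numSlices = 4)

rdd.foreach(x => println(x)) // called once per record

// called once per partition: the usual place to open one database
// connection per partition instead of one per record
rdd.foreachPartition { iter =>
  // val conn = openConnection() // hypothetical per-partition setup
  iter.foreach(x => println(x))
  // conn.close()
}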

13. Is sortByKey a global sort?

sortByKey is a global sort

  • 1) Before sorting, sortByKey uses a partitioner to divide the data by key range
  • 2) This makes every key in partition p1 smaller than those in p2, every key in p2 smaller than those in p3, and so on (p1-pn are the partition identifiers)
  • 3) Each partition is then sorted internally, so the data ends up globally sorted (see the sketch below)
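A minimal sketch (the data is illustrative):

import org.apache.spark.sql.SparkSession

val sc = SparkSession.builder().appName("sort-demo").master("local[*]").getOrCreate().sparkContext
val pairs = sc.parallelize(Seq((3, "c"), (1, "a"), (5, "e"), (2, "b"), (4, "d")), numSlices = 3)

// range-partitions by key first, then sorts within each partition -> global order
val sorted = pairs.sortByKey()
println(sorted.collect().mkString(", ")) // (1,a), (2,b), (3,c), (4,d), (5,e)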

14. The difference between coalesce and repartition

  • We often assume that coalesce, which avoids a shuffle, is more efficient than repartition, which produces one, but in practice this must be analyzed case by case: coalesce is not necessarily more efficient and has some big pitfalls, so use it with care
  • Both coalesce and repartition repartition an RDD; repartition is simply the coalesce interface with shuffle = true

Detailed example

  • Suppose the source RDD has N partitions and needs to be repartitioned into M partitions
  • If N < M: generally the N partitions hold unevenly distributed data. Use the HashPartitioner to repartition the data into M partitions; shuffle must be set to true (repartition can do this, coalesce cannot)
  • If N > M and they are of a similar order (say N is 1000 and M is 100): several of the N partitions can be merged into one new partition, ending up with M partitions, so shuffle can be set to false (coalesce can do this). If M > N, coalesce is ineffective and no shuffle takes place; there is only a narrow dependency between the parent and child RDD, and the number of partitions cannot increase. In short, when shuffle is false, passing a value greater than the current number of partitions leaves the RDD's partition count unchanged: you cannot increase the number of partitions without a shuffle
  • If N > M and they differ greatly: it depends on the relationship between the number of executors and the number of target partitions. If the number of executors <= the number of target partitions, coalesce is more efficient; otherwise coalesce leaves (number of executors - number of target partitions) executors running empty, reducing efficiency. If M is 1, you can set shuffle to true so that the operations before the coalesce still run with good parallelism (see the sketch below)
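A sketch of the three cases (the partition counts are illustrative):

import org.apache.spark.sql.SparkSession

val sc = SparkSession.builder().appName("coalesce-demo").master("local[*]").getOrCreate().sparkContext
val rdd = sc.parallelize(1 to 1000, numSlices = 1000) // N = 1000

val merged     = rdd.coalesce(100)     // N > M: merge without shuffle (narrow dependency)
val grown      = rdd.coalesce(2000)    // shuffle = false: the partition count cannot grow
val reshuffled = rdd.repartition(2000) // same as coalesce(2000, shuffle = true)

println(merged.partitions.length)     // 100
println(grown.partitions.length)      // still 1000
println(reshuffled.partitions.length) // 2000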

15. Spark lineage: the dependencies between RDDs

  • Lineage is the approach Spark takes to data fault tolerance (node failure / data loss) in a distributed computing environment. To guarantee the robustness of the data in an RDD, each RDD records, through its so-called lineage, how it evolved from other RDDs. Compared with the fine-grained memory-update backups or LOG mechanisms of other systems, an RDD's lineage records coarse-grained transformation operations. When part of an RDD's partition data is lost, the lineage provides enough information to recompute and restore the lost partitions. This coarse-grained data model limits the applications Spark suits, but compared with fine-grained data models it also brings performance improvements
  • In lineage, RDD dependencies are divided into two types, narrow and wide, which determine the efficiency of fault recovery
  • **Narrow dependency:** each partition of the parent RDD corresponds to at most one partition of a child RDD; that is, one parent partition maps to one child partition, or several parent partitions map to one child partition. One parent partition can never map to multiple child partitions
  • **Wide dependency:** a partition of the child RDD depends on several or all partitions of the parent RDD; that is, one parent partition corresponds to multiple child partitions
  • For a wide dependency, the input and output of the computation sit on different nodes. When the input nodes are intact and the output node goes down, recomputation is effective; otherwise it is not, because the recomputation must trace back through the ancestors to see what can be retried (this is the meaning of lineage). The recomputation cost of a narrow dependency is much lower than that of a wide dependency
  • RDD computation uses checkpointing for fault tolerance, in two flavors: checkpointing the data, or logging the updates. The user can choose which one; the default is logging the updates, which records the lineage of every RDD, i.e. all the transformations that produced it, so that lost partition data can be recomputed

16. Persistence of spark RDD

1. cache() and persist()

  • When an RDD is persisted, each node keeps the partitions of the RDD it computed in memory, and later uses of the RDD read those cached partitions directly. In scenarios where one RDD is operated on repeatedly, the RDD only has to be computed once and is reused afterwards, instead of being recomputed again and again
  • Used cleverly, RDD persistence can in some scenarios improve the performance of a Spark application tenfold. For iterative algorithms and fast interactive applications, RDD persistence is essential
  • To persist an RDD, just call its cache() or persist() method. The first time the RDD is computed, it is stored directly on each node, and Spark's persistence mechanism is automatically fault-tolerant: if any partition of a persisted RDD is lost, Spark recomputes it from its source RDD with the original transformations
  • The difference between cache() and persist(): cache is a shorthand for persist; under the hood cache() calls the parameterless version of persist, i.e. persist(MEMORY_ONLY), persisting the data in memory. To remove the cache from memory, call the unpersist method

2. checkpoint

Scenario:

  • When the business scenario is very complex, the lineage of an RDD becomes very long. Once the data of a late RDD in the lineage is lost, Spark recomputes it from the lineage dependencies, which takes too long. Spark provides the checkpoint operator to handle such scenarios

Usage:

  • Set a checkpoint for the current RDD. This function saves the RDD as binary files in the checkpoint directory, which is set with SparkContext.setCheckpointDir(). During checkpointing, all information about the RDD's parent dependencies is removed. The checkpoint operation does not execute immediately; an action must run to trigger it

Advantages of checkPoint:

  • Persisted on HDFS, whose default 3-replica backup makes the persisted data safer
  • Cuts off the RDD's lineage: when the business scenario is complex and the lineage is very long, losing a late RDD's data means a long recomputation; after checkpointing, the RDD depends on the CheckpointRDD instead, which avoids the long lineage recomputation
  • It is recommended to cache the RDD before checkpointing, so that the checkpoint writes the in-memory result directly instead of launching another job to recompute it (see the sketch below)
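A sketch of the recommended cache-then-checkpoint pattern (the checkpoint directory is a hypothetical path; in production it would be on HDFS):

import org.apache.spark.sql.SparkSession

val sc = SparkSession.builder().appName("checkpoint-demo").master("local[*]").getOrCreate().sparkContext
sc.setCheckpointDir("/tmp/spark-checkpoints") // hypothetical path; an HDFS path in production

val rdd = sc.parallelize(1 to 1000).map(_ * 2)
rdd.cache()      // cache first so the checkpoint job reuses the in-memory result
rdd.checkpoint() // lazy: nothing is written yet

rdd.count()                // the action triggers the job and the checkpoint write
println(rdd.toDebugString) // the lineage is now truncated at the checkpoint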

CheckPoint principle:

  • 1) When the final RDD executes an action and the job runs, Spark scans the lineage backwards from the final RDD
  • 2) It marks the RDDs on which the checkpoint operator was called
  • 3) Spark then automatically starts a new job to recompute the marked RDDs, writes the results to HDFS, and cuts off those RDDs' lineage dependencies

17. Spark submission process

1. Submit tasks in standalone-client mode

  • 1) Submit the task in client mode and start the driver process on the client
  • 2) The driver will apply to the master for resources to start the application
  • 3) The resource application is successful, and the driver sends the task to the worker for execution
  • 4) The worker returns the task execution result to the driver

2. Submit tasks in standalone-cluster mode

  • 1) After submitting the app in standalone-cluster mode, it will request the master to start the driver
  • 2) After the master accepts the request, it randomly starts the driver process on a node in the cluster
  • 3) Apply for resources for the current application after the driver starts
  • 4) The driver sends the task to the worker node for execution
  • 5) The worker returns the execution status and execution results to the driver

3. Submit tasks in yarn-client mode

  • 1) The client submits an application and starts a driver process on the client
  • 2) After the application starts, it sends a request to the RS (ResourceManager) to start the AM (ApplicationMaster)
  • 3) RS receives the request and picks a random NM (NodeManager) to start AM; the NM is equivalent to the worker node in standalone mode
  • 4) After AM starts, it requests a batch of container resources from RS in order to start the executors
  • 5) RS finds a batch of NMs and returns them to AM, which starts the Executors on them

4. Submit tasks in yarn-cluster mode

  • 1) The client submits the application program, sends a request to RS, and requests to start AM
  • 2) After receiving the request, RS randomly starts AM on a NM (equivalent to the driver end)
  • 3) AM starts, AM sends a request to RS, requesting a batch of containers to start executor
  • 4) RS returns a batch of NM nodes to AM
  • 5) AM connects to NM and sends a request to NM to start executor
  • 6) The executor is registered to the driver of the node where the AM is located. The driver sends the task to the executor

18. Optimization of Spark joins

  • Spark is a distributed computing framework, and what most affects its execution efficiency is frequent network transfer. In general, when there is no data skew, to improve the execution efficiency of a Spark job you should minimize the job's shuffles (reducing the number of stages) or reduce the impact of the shuffles:
  • 1) Minimize the amount of data in the RDDs participating in the join
  • 2) Try to avoid duplicate keys in the RDDs participating in the join
  • 3) Try to avoid or reduce the shuffle
  • 4) If conditions permit, use a map join to complete the join

19. Spark shuffle implementations

There are three shuffle implementations: HashShuffle, SortShuffle (the default), and TungstenShuffle.
They are selected in a Spark program through the spark.shuffle.manager setting:

import org.apache.spark.sql.SparkSession

// spark.shuffle.manager can be set to hash, sort, or tungsten-sort
val session: SparkSession = SparkSession.builder()
  .appName("xxx").master("local[*]")
  .config("spark.shuffle.manager", "hash")
  .getOrCreate()

HashShuffleManager features:

  • 1) The data is not sorted and the speed is faster
  • 2) Write directly to the buffer, and write to a file after the buffer is full
  • 3) Each task of this ShuffleMapStage will generate the same number of files as the parallelism of the next ShuffleMapStage
  • 4) A massive number of file handles and temporary cache objects occupy memory, making memory overflow likely

Features of SortShuffleManager:

  • 1) Sort the data
  • 2) Before writing to the buffer: for aggregating operators such as reduceByKey, the data is first written into a Map in-memory structure, while for operators such as join it is first written into an Array in-memory structure. Before each record is written, a check is made whether a threshold has been reached, and only then is the data written out to the buffer
  • 3) Tasks that reuse the same core write to the same file, and an index file is generated that records the start offset and end offset for each task of the next ShuffleMapStage

20. The role of broadcast variables

  • With broadcast variables, only one copy of the variable resides in each executor's memory, instead of a large variable being shipped once per task. This saves a lot of network transfer, which helps performance greatly, and an efficient broadcast algorithm (BitTorrent-style) is used to reduce the transfer cost
  • There are many scenarios for broadcast variables. A common Spark optimization is small-table broadcasting, using a map join instead of a reduce join: we broadcast the small data set to every node, saving a particularly expensive shuffle operation
  • For example, if there is a small table on the driver and tasks on other nodes need to look it up, the driver can first copy the table to those nodes so each task can do the lookup locally (see the sketch below)
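A map-join sketch using a broadcast variable (the lookup table and events are illustrative):

import org.apache.spark.sql.SparkSession

val sc = SparkSession.builder().appName("broadcast-join").master("local[*]").getOrCreate().sparkContext

// the small lookup table is broadcast once to every executor
val cityNames = sc.broadcast(Map(1 -> "Beijing", 2 -> "Shanghai"))
val events = sc.parallelize(Seq((1, "click"), (2, "view"), (1, "view")))

// map-side join: each task looks the key up locally, no shuffle needed
val joined = events.map { case (cityId, action) =>
  (cityNames.value.getOrElse(cityId, "unknown"), action)
}
joined.collect().foreach(println)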

21. Data skew solutions

  • Data skew generally occurs because the data for some key is too large, which makes task execution too slow or overflows memory (OOM). It generally happens during a shuffle; for example, reduceByKey, countByKey, and groupByKey are prone to data skew
  • To track down data skew, first look at the logs, since the log reports which line the error occurred on, and then check the places where a shuffle happens: these are the spots most prone to data skew

Option 1: Aggregate the source data

  • Our data generally comes from Hive tables, so aggregate the data when the Hive table is generated: group by key and store all the values of a key in another format, for example spliced into one string. Then the groupByKey and reduceByKey operations can be omitted; without those operations there is no shuffle, and without a shuffle there is no data skew. Even if the values cannot be perfectly spliced, partially splicing them reduces the amount of data per key and improves performance

Option 2: Filter out the keys that cause the skew

  • If the business allows it, or it is acceptable after discussion, we can simply filter out the heavy keys, which easily solves the problem

Option 3: Increase the reduce-side parallelism of the shuffle

  • Share the data pressure by increasing the number of reduce-side tasks, so each task handles less data and performance improves accordingly. This approach is best when it resolves the skew outright; but if the job used to OOM, and after increasing the reduce-side task count it merely manages to run with a very long execution time, then give up on this option

Option 4: Use double aggregation

  • Used for groupByKey and reduceByKey (it also suits join, but that is usually not done this way). In the first round the keys are broken up: the original key is turned into several different keys by adding a random prefix, which effectively splits the records of one key into several groups. Local aggregation is performed on those groups, then the prefix is stripped from each key, and a global aggregation is performed. Aggregating twice avoids the data skew problem (see the sketch below)
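A hedged sketch of the two rounds (the keys, salt range, and data are illustrative):

import scala.util.Random
import org.apache.spark.sql.SparkSession

val sc = SparkSession.builder().appName("salting-demo").master("local[*]").getOrCreate().sparkContext
val skewed = sc.parallelize(Seq.fill(100000)(("hot_key", 1)) ++ Seq(("cold_key", 1)))

// round 1: prefix each key with a random salt so the hot key spreads over 10 groups
val salted  = skewed.map { case (k, v) => (s"${Random.nextInt(10)}_$k", v) }
val partial = salted.reduceByKey(_ + _) // local (salted) aggregation

// round 2: strip the salt and aggregate globally -- now at most 10 records per key
val result = partial
  .map { case (saltedKey, v) => (saltedKey.split("_", 2)(1), v) }
  .reduceByKey(_ + _)

result.collect().foreach(println) // (hot_key,100000), (cold_key,1)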

Option 5: Convert a reduce join into a map join

  • If two RDDs are joined and one table is small, the small table can be broadcast, so there is a copy in each node's BlockManager; then no shuffle occurs at all, and it is certain that no data skew can occur. If a join exhibits data skew, consider this method first. But if both tables are large, this option does not apply. It sacrifices a little memory in exchange for a performance improvement

Option 6: Sample, split out the skewed keys, and aggregate separately

  • That is, sample out the list of skewed keys, pull them into a separate RDD, scatter them, and join that RDD separately

Option 7: Join using random numbers and expansion

  • That is, expand one RDD with flatMap, attach random numbers, and then join. This does not fundamentally solve the data skew, but it effectively alleviates it and improves performance

22. Spark communication mechanism

Spark message communication has three main parts: the overall framework, start-up message communication, and runtime message communication

1. Overview

  • Spark's remote procedure call (RPC) layer (in old versions) was implemented with the Akka library. Akka is written in Scala and based on the Actor concurrency model, and it is highly reliable, high-performance, and scalable

2. Specific communication process

  • 1) First the Master process starts, then all the Worker processes
  • 2) After a Worker starts, it connects to the Master in its preStart method and sends it a registration message, with the worker's information encapsulated in a case class
  • 3) After receiving a Worker's registration message, the Master stores it in a collection and replies to the Worker that registration succeeded
  • 4) The Worker then sends heartbeat packets to the Master periodically in order to receive new computing tasks
  • 5) The Master periodically cleans up Workers that have timed out

3. Communication framework

  • Spark 2.2 uses Netty as the communication framework between master and workers; before Spark 2.0 the Akka framework was used
  • 1) Spark start-up message communication:
  • The worker sends a registration message to the master, and the master replies with registration success or failure; on success, the worker sends heartbeats to the master periodically
  • 2) Message communication while Spark is running:
  • The application's SparkContext sends a registration message to the master, and the master allocates Executors to the application. After an executor starts, it sends a registration-success message to the SparkContext. Then an action on one of the SparkContext's RDDs triggers, forming a DAG; the DAGScheduler divides it into stages and converts them into TaskSets, and the TaskScheduler sends execution messages to the Executors. After an Executor receives them, it starts running the tasks; finally the Driver processes the results and reclaims the resources
