【iOS】Summary of multi-threading & locking issues

Preface

Summary of iOS Locks and Multithreading

1. Your understanding of multithreading

Multithreading is the ability to execute multiple threads (subtasks) simultaneously to improve program performance and responsiveness. It allows multiple tasks to be processed concurrently in one program.

  • Concurrency: multiple threads make progress within the same period of time; the computer achieves this by rapidly switching the CPU between threads.

Advantages

  1. Greatly improves the running speed of the program.
  2. Long-running tasks can be moved off the current thread and handled later, improving the user experience.

Disadvantages

  1. If there are a large number of threads, performance will be affected because the operating system needs to switch between them.
  2. More threads require more memory space.

2. The difference between atomic and nonatomic and their functions

  1. atomic (atomic access): the system-generated setter and getter methods are locked to make property access thread-safe (only those accessor methods are locked). Because access is locked, another thread touching the property must wait for the current operation on it to complete.
  • set and get operations on the same property are serialized.
  • It is slower, because each accessor must run to completion as a whole.
  • This safety costs system resources for locking.
  • Using atomic does not guarantee absolute thread safety, because it only locks the system-generated setter and getter methods. To truly guarantee thread safety, a higher-level mechanism such as NSLock, a spin lock, or @synchronized is needed.
  2. nonatomic (non-atomic access): no lock is taken, so access is fast, but multiple threads accessing the same property concurrently may cause a crash.
  • Not the default (atomic is the default for properties).
  • Faster; but if two threads access the same property concurrently, it may crash.
  • Not thread-safe.
  3. The main difference between atomic and nonatomic is in the getter/setter methods the system generates:
  • For atomic properties, the generated getter/setter perform locking around the access.
  • For nonatomic properties, the generated getter/setter do not lock.

⚠️: For an atomic property, the system-generated getter/setter guarantee the integrity of each get/set operation against interference from other threads. For example, if thread A is halfway through the getter when thread B calls the setter, thread A's getter still returns an intact value.

3. The three queue types of GCD

  1. Main queue (the serial queue bound to the main thread): tasks submitted to the main queue execute on the main thread. Obtained with:
dispatch_get_main_queue() // returns the main queue
  2. Global queue (a concurrent queue shared by the whole process), with four priority levels: high, default, low, and background. Obtained with:
dispatch_get_global_queue() // the first argument selects the priority
  3. Custom queue, which can be serial or concurrent:
dispatch_queue_create() // pass DISPATCH_QUEUE_SERIAL or DISPATCH_QUEUE_CONCURRENT

4. GCD deadlock problem

Four necessary conditions for deadlock

  • Mutual exclusion: a resource can be held by only one thread at a time.
  • Hold and wait: a thread holds one or more resources while waiting for resources held by others.
  • No preemption: a resource already held by a thread cannot be forcibly taken away from it.
  • Circular wait: there is a chain of threads in which each one holds at least one resource needed by the next.

Concept: a deadlock typically occurs when two threads A and B get stuck waiting on each other: A waits for B and B waits for A, so neither can ever proceed.

  1. Synchronously dispatching a task to the main queue while already running on the main thread causes a deadlock:
NSLog(@"1"); // Task 1
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"2"); // Task 2
});
NSLog(@"3"); // Task 3

Analysis:

  • dispatch_sync submits the block synchronously: the caller blocks until the block finishes;
  • dispatch_get_main_queue returns the serial queue that runs on the main thread;
  • Task 2 is the synchronously submitted block;
  • Task 3 can only run after Task 2 has finished.

Why does it cause deadlock?

  1. Task 1 runs with no problem. Then the program hits the synchronous dispatch, so it must wait for Task 2 to finish before moving on to Task 3. But this is the main queue, a special serial queue: a newly submitted task is appended to the back and tasks are executed FIFO. So Task 2 is added to the end of the queue, behind the currently running work that still contains Task 3.

Task 3 cannot run until Task 2 completes, yet Task 2 is queued behind the work containing Task 3, which means Task 2 cannot run until Task 3's turn has passed. They wait on each other forever. This is a deadlock.


  2. Sync dispatch nested inside async dispatch on the same serial queue
// sync + async nested on the same serial queue produces a deadlock
- (void)sync_async {
    dispatch_queue_t queue = dispatch_queue_create("com.demo.serialQueue", DISPATCH_QUEUE_SERIAL);
    NSLog(@"1"); // Task 1
    dispatch_async(queue, ^{
        NSLog(@"2"); // Task 2
        dispatch_sync(queue, ^{
            NSLog(@"3"); // Task 3
        });
        NSLog(@"4"); // Task 4
    });
    NSLog(@"5"); // Task 5
}

Analysis: first, dispatch_queue_create creates a custom serial queue (DISPATCH_QUEUE_SERIAL).

  1. Task 1 runs.
  2. The async dispatch adds the block [Task 2, sync dispatch, Task 4] to the serial queue. Because the dispatch is asynchronous, Task 5 on the main thread does not wait for that block to finish;
  3. Since Task 5 does not wait, the output order of 2 and 5 is indeterminate;
  4. After Task 2 runs, the sync dispatch is reached, and Task 3 is added to the same serial queue;
  5. The block containing Task 4 was queued before Task 3, so on this serial queue Task 3 must wait for that block to finish. But the sync dispatch blocks that very block: Task 4 cannot run until Task 3 finishes. Each waits for the other forever, producing a deadlock.
  3. Main thread blocked by an infinite loop
- (void)async_loop {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        NSLog(@"1"); // Task 1
        dispatch_sync(dispatch_get_main_queue(), ^{
            NSLog(@"2"); // Task 2
        });
        NSLog(@"3"); // Task 3
    });
    NSLog(@"4"); // Task 4
    while (1) {
    }
    NSLog(@"5"); // Task 5

    // prints 4 1 / 1 4, order indeterminate
}

Print result: 4 and 1 (or 1 and 4), in indeterminate order.

Analysis:

  • First, the tasks placed on the main queue are: [async dispatch, Task 4, infinite loop, Task 5].
  • The tasks the async dispatch places on the global queue are: [Task 1, sync dispatch, Task 3].
  • The first item is an asynchronous dispatch, so Task 4 does not wait for it; the order of Task 1 and Task 4 is indeterminate.
  • After Task 4 completes, the program enters the infinite loop and the main queue is blocked. The work on the global queue is unaffected and continues to the sync dispatch after Task 1.
  • The sync dispatch submits Task 2 to the main queue, and Task 3 must wait for Task 2 to finish. But the main thread is stuck in the infinite loop, so Task 2 never runs, hence neither does Task 3, and Task 5 after the loop never runs either.

In the end, we can only get the results of 1 and 4 in an uncertain order.

5. Differences and connections among the multithreading technologies


GCD and NSOperation

  • GCD is more efficient: it executes tasks expressed as blocks, is a lightweight API, and is more convenient to write.
  • GCD queues are strictly FIFO, while NSOperationQueue can adjust execution order by setting a maximum concurrency count, priorities, and dependencies.
  • NSOperation can set dependencies across queues; with GCD, execution order can only be controlled with mechanisms such as barriers.
  • NSOperation is more object-oriented, supports KVO, and can be extended through subclassing.
  • So when we need to manage ordering and dependencies between asynchronous operations, such as concurrent downloads, use NSOperation.

The difference between GCD and NSThread

  • NSThread specifies the code to run via a @selector, which scatters the code, and relies on NSObject category methods for inter-thread communication. To start threads you must create thread objects yourself; [NSThread currentThread] is commonly used to inspect the current thread.
  • NSThread is an object that controls a single thread's execution. It is less abstract than NSOperation: it gives you a thread directly and lets you control it, but synchronization between NSThreads must be handled yourself, for example with NSCondition.
  • GCD specifies the code to run via blocks, so the code stays together, making it simpler to read and maintain, and there is no need to manage thread creation/destruction/reuse: the thread lifecycle is not the programmer's concern.

6. Processes and threads?

Reference: Concepts, differences between processes and threads, and communication between processes and threads
1. Basic concepts:

  • A process is the encapsulation of a running program and the basic unit of resource scheduling and allocation in the system; processes realize concurrency at the operating-system level.
  • A thread is a subtask of a process and the basic unit of CPU scheduling and dispatch; threads realize concurrency within a process and ensure responsiveness of program execution. A thread is the smallest execution and scheduling unit the operating system recognizes. Each thread has its own execution context, as if it had a virtual processor of its own, and each thread completes a different task, but all threads of a process share the same address space (the same dynamic memory, mapped files, object code, etc.) and kernel resources such as the open-file table.

2. Difference:

  • A thread can only belong to one process, and a process can have multiple threads , but there must be at least one thread. Threads depend on processes for their existence.
  • Process is the smallest unit of resource allocation, and thread is the smallest unit of CPU scheduling.
  • A process has an independent memory space during execution; the threads of a process share that process's memory. (Resources are allocated to the process, and all threads of the same process share them: they share the code segment (code and constants), data segment (global and static variables), and heap, but each thread has its own stack, which holds its local and temporary variables.)
  • Processes do not affect one another; but if one thread crashes, the whole process crashes with it.
  • Communication between threads is more convenient: threads in the same process share data such as global and static variables.

3. Communication methods:

Inter-process communication

  1. The main inter-process mechanisms are pipes, system IPC (message queues, semaphores, signals, shared memory, etc.), and sockets.
  2. The semaphore differs from the other IPC structures introduced here: it is a counter used to control access to a shared resource by multiple processes. Semaphores implement mutual exclusion and synchronization between processes; they do not themselves carry communication data.
Inter-thread communication

  1. Critical section: serialize multi-threaded access to a shared resource or section of code; fast, and well suited to controlling data access;
  2. Mutex (e.g. @synchronized, NSLock): only the thread that owns the mutex object has permission to access the shared resource. Because there is only one mutex object, the resource cannot be accessed by multiple threads at once.
  3. Semaphore: designed to control a resource with a limited number of slots; it allows multiple threads to access the same resource concurrently, but caps the maximum number that may do so at one time.
  4. Event (wait/notify): keeps threads synchronized through notification operations, and can also conveniently implement priority-ordered wakeups.

7. How to ensure thread safety in iOS

Reference: What technologies can ensure thread safety in iOS?
Question: A resource may be shared by multiple threads, that is, multiple threads may access the same resource, such as multiple threads accessing the same object, the same variable, and the same file. When multiple threads access the same resource, it is easy to cause data confusion and data security issues. At this point, we need to use thread locks to solve it.

Thread data safety methods:

  1. atomic: use atomic properties, whose system-generated setter and getter are locked (other access to the underlying storage is not).
  2. Use GCD to implement atomic operations: add a synchronization queue to the setter method and getter method of a field;
// _synQueue is a private serial queue created elsewhere;
// count is the backing instance variable
- (void)setCount:(NSInteger)newcount
{
    dispatch_sync(_synQueue, ^{
        count = newcount;
    });
}
- (NSInteger)count
{
    __block NSInteger localCount;
    dispatch_sync(_synQueue, ^{
        localCount = count;
    });
    return localCount;
}
  • A mutex effectively prevents the data-safety problems caused by multiple threads contending for a resource, but acquiring it consumes CPU resources.
  3. Mutex lock: use a mutex to ensure that only one thread accesses the shared resource at a time; for example, @synchronized creates a mutex:
@synchronized (self) {
    // code that accesses the shared resource
}

  4. Spin lock: a spin lock is a busy-waiting lock that keeps retrying until it acquires the lock. In Objective-C, the old spin lock OSSpinLock is deprecated; Apple recommends os_unfair_lock instead, which, despite replacing OSSpinLock, does not busy-wait.
  5. Semaphore: a counter used to control how many threads may access a resource at the same time.
dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
// code that accesses the shared resource
dispatch_semaphore_signal(semaphore);

  6. Serial queue: a serial queue guarantees that tasks execute one after another, preventing multiple threads from accessing the shared resource at the same time. Serial queues are created with GCD (Grand Central Dispatch).
dispatch_queue_t serialQueue = dispatch_queue_create("com.example.serialQueue", DISPATCH_QUEUE_SERIAL);
dispatch_async(serialQueue, ^{
    // code that accesses the shared resource
});


Origin blog.csdn.net/weixin_61639290/article/details/132011165