【iOS】In-depth GCD learning

For a brief introduction to GCD and queues, please see: [iOS] GCD Learning

This article mainly introduces the methods in GCD.

Barrier method: dispatch_barrier_async

Sometimes we need to run two groups of operations asynchronously, where the second group should start only after the first group has completely finished; each group may contain one or more tasks. This is what the barrier method dispatch_barrier_async is for. The barrier block submitted with dispatch_barrier_async waits until every task previously added to the concurrent queue has finished executing, then runs by itself; only after the barrier block completes are the tasks appended after it executed.


Example usage of the barrier method:

- (void)barrier {
    dispatch_queue_t queue = dispatch_queue_create("net.testQueue", DISPATCH_QUEUE_CONCURRENT);

    dispatch_async(queue, ^{
        // Append task 1
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"1---%@", [NSThread currentThread]);     // Print the current thread
    });
    dispatch_async(queue, ^{
        // Append task 2
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"2---%@", [NSThread currentThread]);     // Print the current thread
    });

    dispatch_barrier_async(queue, ^{
        // Append the barrier task
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"barrier---%@", [NSThread currentThread]); // Print the current thread
    });

    dispatch_async(queue, ^{
        // Append task 3
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"3---%@", [NSThread currentThread]);     // Print the current thread
    });
    dispatch_async(queue, ^{
        // Append task 4
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"4---%@", [NSThread currentThread]);     // Print the current thread
    });
}

result:

The tasks ahead of the barrier execute first, then the barrier task executes, and finally the tasks behind the barrier execute.
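A very common practical application of this behavior (not shown in the example above, but a standard use of dispatch_barrier_async) is a multi-read, single-write accessor built on a custom concurrent queue: reads run concurrently, while each write runs as a barrier so it never overlaps a read or another write. A minimal sketch, with the class name, property, and queue label made up for illustration:

@interface UserCache : NSObject
- (NSDictionary *)settings;
- (void)setSettings:(NSDictionary *)settings;
@end

@implementation UserCache {
    dispatch_queue_t _isolationQueue;   // Custom concurrent queue that protects _settings
    NSDictionary *_settings;
}

- (instancetype)init {
    if (self = [super init]) {
        _isolationQueue = dispatch_queue_create("net.testIsolationQueue", DISPATCH_QUEUE_CONCURRENT);
    }
    return self;
}

- (NSDictionary *)settings {
    __block NSDictionary *result;
    // Reads are submitted normally, so several reads may run at the same time
    dispatch_sync(_isolationQueue, ^{
        result = self->_settings;
    });
    return result;
}

- (void)setSettings:(NSDictionary *)settings {
    // The barrier write waits for in-flight reads, runs alone, then lets later reads continue
    dispatch_barrier_async(_isolationQueue, ^{
        self->_settings = [settings copy];
    });
}

@end

Note that the barrier has this effect only on a custom concurrent queue; on a serial queue or a global concurrent queue, dispatch_barrier_async behaves like an ordinary dispatch_async.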

Synchronous and asynchronous barriers in GCD

In the case we just looked at, an asynchronous barrier on a single queue, the barrier only acted within that one queue.

So how does the barrier intercept tasks that live in different queues?

Since the barrier method is important, let's experiment with the various combinations:

Asynchronous barrier + single serial queue:

(Since asynchronous execution on a serial queue already runs tasks one after another, in the order they were added, on the single newly created thread, adding a barrier here is effectively meaningless.)

- (void)asyncBarrierAndOneSerial {
    dispatch_queue_t queue = dispatch_queue_create("net.testQueue", DISPATCH_QUEUE_SERIAL);

    dispatch_async(queue, ^{
        // Append task 1
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"1---%@", [NSThread currentThread]);     // Print the current thread
    });
    dispatch_async(queue, ^{
        // Append task 2
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"2---%@", [NSThread currentThread]);     // Print the current thread
    });

    dispatch_barrier_async(queue, ^{
        // Append the barrier task
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"barrier---%@", [NSThread currentThread]); // Print the current thread
    });

    dispatch_async(queue, ^{
        // Append task 3
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"3---%@", [NSThread currentThread]);     // Print the current thread
    });
    dispatch_async(queue, ^{
        // Append task 4
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"4---%@", [NSThread currentThread]);     // Print the current thread
    });
}

result:

Asynchronous barrier + single concurrent queue:

(This case was covered above.)

Synchronous barrier + single serial queue:

- (void)syncBarrierAndOneSerial {
    dispatch_queue_t queue = dispatch_queue_create("net.testQueue", DISPATCH_QUEUE_SERIAL);

    dispatch_async(queue, ^{
        // Append task 1
        [NSThread sleepForTimeInterval:2];            // Simulate a time-consuming operation
        NSLog(@"1--%@", [NSThread currentThread]);    // Print the current thread
    });

    dispatch_async(queue, ^{
        // Append task 2
        [NSThread sleepForTimeInterval:2];            // Simulate a time-consuming operation
        NSLog(@"2--%@", [NSThread currentThread]);    // Print the current thread
    });

    dispatch_barrier_sync(queue, ^{
        // Append the barrier task
        [NSThread sleepForTimeInterval:2];            // Simulate a time-consuming operation
        NSLog(@"barrier--%@", [NSThread currentThread]);   // Print the current thread
    });

    dispatch_async(queue, ^{
        // Append task 3
        [NSThread sleepForTimeInterval:2];            // Simulate a time-consuming operation
        NSLog(@"3--%@", [NSThread currentThread]);    // Print the current thread
    });

    dispatch_async(queue, ^{
        // Append task 4
        [NSThread sleepForTimeInterval:2];            // Simulate a time-consuming operation
        NSLog(@"4--%@", [NSThread currentThread]);    // Print the current thread
    });
}

result:

We can see that on a serial queue, whether the barrier is submitted synchronously or asynchronously, all tasks are simply queued up and executed one at a time in the order they were added.

Synchronous barrier + single concurrent queue:

- (void)syncBarrierAndOneConcurrent {
    dispatch_queue_t queue = dispatch_queue_create("net.testQueue", DISPATCH_QUEUE_CONCURRENT);

    dispatch_async(queue, ^{
        // Append task 1
        [NSThread sleepForTimeInterval:2];             // Simulate a time-consuming operation
        NSLog(@"1--%@", [NSThread currentThread]);     // Print the current thread
    });
    dispatch_async(queue, ^{
        // Append task 2
        [NSThread sleepForTimeInterval:2];             // Simulate a time-consuming operation
        NSLog(@"2--%@", [NSThread currentThread]);     // Print the current thread
    });

    dispatch_barrier_sync(queue, ^{
        // Append the barrier task
        [NSThread sleepForTimeInterval:2];             // Simulate a time-consuming operation
        NSLog(@"barrier--%@", [NSThread currentThread]);   // Print the current thread
    });

    dispatch_async(queue, ^{
        // Append task 3
        [NSThread sleepForTimeInterval:2];             // Simulate a time-consuming operation
        NSLog(@"3--%@", [NSThread currentThread]);     // Print the current thread
    });
    dispatch_async(queue, ^{
        // Append task 4
        [NSThread sleepForTimeInterval:2];             // Simulate a time-consuming operation
        NSLog(@"4--%@", [NSThread currentThread]);     // Print the current thread
    });
}

operation result:


The actual result: the task group ahead of the barrier (task 1 and task 2) prints roughly at the same time, about two seconds after the program starts. The barrier block then runs by itself during the next two seconds. In the final two seconds the task group behind the barrier (task 3 and task 4) prints, again roughly together. Because the tasks before and after the barrier all execute asynchronously on the concurrent queue, the order in which the members of each group finish is not deterministic.

Asynchronous barrier + multiple serial queues:

- (void)asyncBarrierAndSerials {
    dispatch_queue_t queue1 = dispatch_queue_create("net.testQueue1", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t queue2 = dispatch_queue_create("net.testQueue2", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t queue3 = dispatch_queue_create("net.testQueue3", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t queue4 = dispatch_queue_create("net.testQueue4", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t queue5 = dispatch_queue_create("net.testQueue5", DISPATCH_QUEUE_SERIAL);

    dispatch_async(queue1, ^{
        // Append task 1
        [NSThread sleepForTimeInterval:2];           // Simulate a time-consuming operation
        NSLog(@"1---%@", [NSThread currentThread]);  // Print the current thread
    });

    dispatch_async(queue2, ^{
        // Append task 2
        [NSThread sleepForTimeInterval:2];           // Simulate a time-consuming operation
        NSLog(@"2---%@", [NSThread currentThread]);  // Print the current thread
    });

    dispatch_barrier_async(queue3, ^{
        // Append the barrier task
        [NSThread sleepForTimeInterval:2];           // Simulate a time-consuming operation
        NSLog(@"barrier---%@", [NSThread currentThread]);  // Print the current thread
    });

    dispatch_async(queue4, ^{
        // Append task 4
        [NSThread sleepForTimeInterval:2];           // Simulate a time-consuming operation
        NSLog(@"4---%@", [NSThread currentThread]);  // Print the current thread
    });

    dispatch_async(queue5, ^{
        // Append task 5
        [NSThread sleepForTimeInterval:2];           // Simulate a time-consuming operation
        NSLog(@"5---%@", [NSThread currentThread]);  // Print the current thread
    });
}

result:

With an asynchronous barrier and multiple serial queues, all tasks start at almost the same time and the five of them finish in a completely random order. The barrier loses its meaning here: it only acts on the queue it was submitted to (queue3), while the other tasks live in different queues.

Asynchronous barrier + multiple concurrent queues:

Since the finish order was already completely random with multiple serial queues, it is easy to see that with multiple concurrent queues the result is completely random as well.

- (void)asyncBarrierAndConcurrents {
    dispatch_queue_t queueFirst = dispatch_queue_create("net.testQueueFirst", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t queueSecond = dispatch_queue_create("net.testQueueSecond", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t queueThird = dispatch_queue_create("net.testQueueThird", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t queueFourth = dispatch_queue_create("net.testQueueFourth", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t queueFifth = dispatch_queue_create("net.testQueueFifth", DISPATCH_QUEUE_CONCURRENT);

    dispatch_async(queueFirst, ^{
        // Append task 1
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"1---%@", [NSThread currentThread]);     // Print the current thread
    });
    dispatch_async(queueSecond, ^{
        // Append task 2
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"2---%@", [NSThread currentThread]);     // Print the current thread
    });

    dispatch_barrier_async(queueThird, ^{
        // Append the barrier task
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"barrier---%@", [NSThread currentThread]); // Print the current thread
    });

    dispatch_async(queueFourth, ^{
        // Append task 3
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"3---%@", [NSThread currentThread]);     // Print the current thread
    });
    dispatch_async(queueFifth, ^{
        // Append task 4
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"4---%@", [NSThread currentThread]);     // Print the current thread
    });
}

result:

Synchronous barrier + multiple serial queues:

- (void)syncBarrierAndSerials {
    dispatch_queue_t queueFirst = dispatch_queue_create("net.testQueueFirst", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t queueSecond = dispatch_queue_create("net.testQueueSecond", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t queueThird = dispatch_queue_create("net.testQueueThird", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t queueFourth = dispatch_queue_create("net.testQueueFourth", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t queueFifth = dispatch_queue_create("net.testQueueFifth", DISPATCH_QUEUE_SERIAL);

    dispatch_async(queueFirst, ^{
        // Append task 1
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"1---%@", [NSThread currentThread]);     // Print the current thread
    });
    dispatch_async(queueSecond, ^{
        // Append task 2
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"2---%@", [NSThread currentThread]);     // Print the current thread
    });

    dispatch_barrier_sync(queueThird, ^{
        // Append the barrier task
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"barrier---%@", [NSThread currentThread]); // Print the current thread
    });

    dispatch_async(queueFourth, ^{
        // Append task 3
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"3---%@", [NSThread currentThread]);     // Print the current thread
    });
    dispatch_async(queueFifth, ^{
        // Append task 4
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"4---%@", [NSThread currentThread]);     // Print the current thread
    });
}

result:

In this case the barrier, task 1, and task 2 start executing almost simultaneously and their output appears first (with the barrier printing first in this run). However, because the synchronous barrier blocks the main thread, tasks 3 and 4 behind it are not even submitted until the barrier block has finished, so they can only start afterwards.

Synchronous barrier + multiple concurrent queues:

- (void)syncBarrierAndConcurrents {
    dispatch_queue_t queueFirst = dispatch_queue_create("net.testQueueFirst", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t queueSecond = dispatch_queue_create("net.testQueueSecond", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t queueThird = dispatch_queue_create("net.testQueueThird", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t queueFourth = dispatch_queue_create("net.testQueueFourth", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t queueFifth = dispatch_queue_create("net.testQueueFifth", DISPATCH_QUEUE_CONCURRENT);

    dispatch_async(queueFirst, ^{
        // Append task 1
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"1---%@", [NSThread currentThread]);     // Print the current thread
    });
    dispatch_async(queueSecond, ^{
        // Append task 2
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"2---%@", [NSThread currentThread]);     // Print the current thread
    });

    dispatch_barrier_sync(queueThird, ^{
        // Append the barrier task
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"barrier---%@", [NSThread currentThread]); // Print the current thread
    });

    dispatch_async(queueFourth, ^{
        // Append task 3
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"3---%@", [NSThread currentThread]);     // Print the current thread
    });
    dispatch_async(queueFifth, ^{
        // Append task 4
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"4---%@", [NSThread currentThread]);     // Print the current thread
    });
}

result:

In the actual run, task 1, task 2, and the barrier all start executing at the same time, and the order in which the three of them finish is not deterministic. However, because the synchronous barrier blocks the main thread, tasks 3 and 4 can only start after the barrier block has completed.

Delayed execution method: dispatch_after

We often need to execute a task after a specified delay (for example, 3 seconds). GCD's dispatch_after does exactly this.

Note that dispatch_after does not start processing the task after the specified time; rather, it appends the task to the specified queue (the main queue in the example below) after the specified time. Strictly speaking, the timing is not perfectly accurate, but when you only need to roughly delay a task, dispatch_after works very well.

- (void)after {
    NSLog(@"currentThread---%@", [NSThread currentThread]);   // Print the current thread
    NSLog(@"asyncMain---begin");

    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        // NSEC_PER_SEC is a macro for the number of nanoseconds in one second.
        // After 2.0 seconds, this block is appended to the main queue and then executed.
        NSLog(@"after---%@", [NSThread currentThread]);     // Print the current thread
        NSLog(@"asyncMain---willEnd");
    });
}

result:

What actually happens: asyncMain---begin is printed first; about two seconds later, after---<_NSMainThread: 0x60000110c900>{number = 1, name = main} and asyncMain---willEnd are printed in order.

GCD one-time code (executed only once): dispatch_once

We use GCD's dispatch_once when creating a singleton, or for any piece of code that should run only once during the entire lifetime of the program. dispatch_once guarantees that the block is executed exactly once while the program runs, and it remains thread-safe even in a multithreaded environment.

/**
 * One-time code (executed only once): dispatch_once
 */
- (void)once {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        // Code that executes only once (thread-safe by default)
    });
}
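A typical application of dispatch_once mentioned above is creating a singleton. A minimal sketch, assuming a made-up class name TicketManager:

+ (instancetype)sharedManager {
    static TicketManager *sharedInstance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        // Runs exactly once, even if several threads call +sharedManager at the same time
        sharedInstance = [[TicketManager alloc] init];
    });
    return sharedInstance;
}

Every later call simply returns the same instance without entering the block again.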

GCD fast iteration method: dispatch_apply

Normally we would traverse with a for loop, but GCD provides a fast-iteration method: dispatch_apply. dispatch_apply appends the specified block to the specified queue the specified number of times and waits until all of the iterations have finished.

If dispatch_apply is used on a serial queue, the iterations run synchronously one after another, just like a for loop, which does not show the point of fast iteration.

We can instead use a concurrent queue for asynchronous execution. For example, when traversing the 6 numbers 0~5, a for loop takes out one element at a time and processes them one by one, while dispatch_apply can iterate over several numbers simultaneously on multiple threads (see the short serial-queue sketch further below).

One more point: whether it runs on a serial queue or a concurrent queue, dispatch_apply waits for all iterations to finish before returning. In that sense it behaves like a synchronous operation, similar to dispatch_group_wait on a dispatch group.

- (void)apply {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    NSLog(@"apply---begin");
    dispatch_apply(6, queue, ^(size_t iteration) {
        NSLog(@"%zd---%@", iteration, [NSThread currentThread]);
    });
    NSLog(@"apply---end");
}

The running results are as follows:

Because the iterations execute asynchronously on a concurrent queue, each one takes a variable amount of time and the order in which they finish varies as well. But apply---end is always printed last, because dispatch_apply waits for all iterations to complete.
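For comparison, here is a minimal sketch of the serial-queue case described above, where dispatch_apply degenerates into an ordinary in-order loop (the queue label is made up):

- (void)applySerial {
    // On a serial queue the iterations run one at a time, in index order, like a for loop
    dispatch_queue_t queue = dispatch_queue_create("net.testSerialQueue", DISPATCH_QUEUE_SERIAL);

    NSLog(@"applySerial---begin");
    dispatch_apply(6, queue, ^(size_t iteration) {
        NSLog(@"%zd---%@", iteration, [NSThread currentThread]);
    });
    NSLog(@"applySerial---end");   // Still printed last: dispatch_apply waits for every iteration
}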

GCD queue group: dispatch_group

Sometimes we have this need: asynchronously execute two time-consuming tasks, and only after both have finished, return to the main thread to run another task. This is where GCD's dispatch group comes in.

With a dispatch group, you can call dispatch_group_async to submit a task to a queue and associate it with the group, or achieve the same thing with the dispatch_group_enter / dispatch_group_leave pair around an ordinary dispatch_async. You can then use dispatch_group_notify to return to a specified queue and run a task once all of the group's work has finished, or use dispatch_group_wait to stay on the current thread and continue once the group is done (this blocks the current thread).

dispatch_group_notify

dispatch_group_notify monitors the completion status of the tasks in the group. When all of them have finished, it appends a task to the specified queue and executes it:

- (void)group {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    dispatch_group_async(group, queue, ^{
        NSLog(@"blk0");
    });

    dispatch_group_async(group, queue, ^{
        NSLog(@"blk1");
    });

    dispatch_group_async(group, queue, ^{
        NSLog(@"blk2");
    });

    // dispatch_group_notify waits until all of the work in the group has finished,
    // then appends the block (third argument) to the queue passed as the second argument.
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"done");
    });
}

operation result:

Since the tasks added to the group run concurrently on multiple threads, their completion times are uncertain and the print order is random (in theory; in practice the execution order may still be influenced by the submission order, especially when several tasks are submitted to the same queue).

dispatch_group_wait

In addition, we can use dispatch_group_wait(group, DISPATCH_TIME_FOREVER);. Its second parameter is of type dispatch_time_t, so we can customize how long to wait for the group to finish.

dispatch_group_wait pauses (blocks) the current thread and waits for the tasks in the specified group to finish before continuing.

If we do not add a dispatch_group_wait, then, since the work in the group is itself asynchronous, other work will run before the group has finished. For example:

- (void)groupWait {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    dispatch_group_async(group, queue, ^{
        NSLog(@"blk0");
    });

    dispatch_group_async(group, queue, ^{
        NSLog(@"blk1");
    });

    dispatch_group_async(group, queue, ^{
        NSLog(@"blk2");
    });

    NSLog(@"YES!!");
//    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
}

result:


You can see that YES!! is printed before the work in the group has finished.

Now compare with the version that waits:

- (void)groupWait {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    dispatch_group_async(group, queue, ^{
        NSLog(@"blk0");
    });

    dispatch_group_async(group, queue, ^{
        NSLog(@"blk1");
    });

    dispatch_group_async(group, queue, ^{
        NSLog(@"blk2");
    });

    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    NSLog(@"YES!!");
}

result:

You can see that YES!! is printed only after all of the work in the group has finished.

From the output of the dispatch_group_wait examples: the statements after dispatch_group_wait run only once all tasks in the group have completed. Keep in mind, however, that dispatch_group_wait blocks the current thread!
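As mentioned earlier, the second parameter does not have to be DISPATCH_TIME_FOREVER. Continuing from the example above, a minimal sketch of waiting with a custom timeout and checking the return value (the one-second timeout is an arbitrary choice for illustration):

// Wait at most 1 second for the group to finish.
// dispatch_group_wait returns 0 if the group finished in time, and a non-zero value on timeout.
dispatch_time_t timeout = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.0 * NSEC_PER_SEC));
if (dispatch_group_wait(group, timeout) == 0) {
    NSLog(@"group finished within the timeout");
} else {
    NSLog(@"timed out, some tasks in the group are still running");
}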

dispatch_group_enter and dispatch_group_leave

dispatch_group_enter marks that a task has joined the group; each call is equivalent to incrementing the group's count of unfinished tasks by 1.
dispatch_group_leave marks that a task has left the group; each call is equivalent to decrementing the count of unfinished tasks by 1.
Only when the count of unfinished tasks reaches 0 will a blocked dispatch_group_wait return and the task appended by dispatch_group_notify execute, so dispatch_group_enter and dispatch_group_leave must wrap the work added to the group.

- (void)groupWithEnterAndLeave {
    // First create a dispatch group
    dispatch_group_t group = dispatch_group_create();

    // Task 1
    dispatch_group_enter(group);
    void (^blockFirst)(int) = ^(int a){
        NSLog(@"Task %d finished!", a);
        dispatch_group_leave(group);
    };
    blockFirst(1);

    // Task 2
    dispatch_group_enter(group);
    void (^blockSecond)(int) = ^(int a){
        NSLog(@"Task %d finished!", a);
        dispatch_group_leave(group);
    };
    blockSecond(2);

    // All finished
    dispatch_group_notify(group, dispatch_get_main_queue(), ^(){
        NSLog(@"All tasks finished");
    });
}

result:

We can see that the "all finished" block runs only after task 1 and task 2 have both completed.
From this example: the task appended by dispatch_group_notify executes only after every dispatch_group_enter has been matched by a dispatch_group_leave. The dispatch_group_enter / dispatch_group_leave combination here is effectively equivalent to dispatch_group_async.

Note that dispatch_group_enter and dispatch_group_leave must appear in pairs.

If dispatch_group_leave is called more times than dispatch_group_enter, the program will crash.
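The example above invokes its blocks synchronously; in real code, dispatch_group_enter / dispatch_group_leave is mostly used to wrap work whose completion is reported through an asynchronous callback, where dispatch_group_async cannot be applied directly. A minimal sketch under that assumption (requestDataWithCompletion: is a made-up asynchronous method standing in for something like a network request):

dispatch_group_t group = dispatch_group_create();

dispatch_group_enter(group);                 // unfinished-task count + 1
[self requestDataWithCompletion:^(NSData *data) {
    // ... handle the first response ...
    dispatch_group_leave(group);             // unfinished-task count - 1
}];

dispatch_group_enter(group);
[self requestDataWithCompletion:^(NSData *data) {
    // ... handle the second response ...
    dispatch_group_leave(group);
}];

dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"Both asynchronous requests finished");
});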

GCD semaphore: dispatch_semaphore

The semaphore in GCD, Dispatch Semaphore, is a signal that holds a count. It works like the barrier gate at a highway toll booth: when cars may pass, the gate is raised; when they may not, the gate is lowered. In Dispatch Semaphore, the count plays this role: if waiting would take the count below 0, the caller has to wait (it cannot pass); if the count is greater than 0, the caller decrements it by 1 and passes without waiting. Dispatch Semaphore provides three functions:

  • dispatch_semaphore_create: creates a semaphore and initializes its count.
  • dispatch_semaphore_signal: sends a signal, increasing the count by 1.
  • dispatch_semaphore_wait: decreases the count by 1; if the resulting count is less than 0, it keeps waiting (blocking the thread), otherwise execution continues normally.

Note: before using a semaphore, be clear about which thread needs to wait (be blocked) and which thread should continue executing, and only then apply the semaphore.

In actual development, Dispatch Semaphore is mainly used for:

  • thread synchronization: converting an asynchronously executed task into a synchronously executed one;
  • thread safety: locking a shared resource.

Dispatch Semaphore for thread synchronization

During development we sometimes need to execute a time-consuming task asynchronously and then use its result for further work. In other words, we need to convert the asynchronous task into a synchronous one.

Next we use a Dispatch Semaphore to synchronize threads and convert an asynchronous task into a synchronous one:

- (void)semaphoreSync {
    NSLog(@"currentThread---%@", [NSThread currentThread]);      // Print the current thread
    NSLog(@"semaphore---begin");

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);

    __block int number = 0;
    dispatch_async(queue, ^{
        // Append task 1
        [NSThread sleepForTimeInterval:2];              // Simulate a time-consuming operation
        NSLog(@"1---%@", [NSThread currentThread]);     // Print the current thread

        number = 100;
        dispatch_semaphore_signal(semaphore);
    });

    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    NSLog(@"semaphore---end,number = %d", number);
}

result:

You can see that semaphore---end is printed only after number = 100; has executed, and the printed value of number is 100. The overall sequence is as follows:

When the semaphore is created, its count is 0.
Task 1 is appended to the queue asynchronously (no waiting), and then dispatch_semaphore_wait runs, decrementing the semaphore to -1. The current thread enters the waiting state (nothing after the wait executes; only the appended task 1 runs), and it will not resume until a dispatch_semaphore_signal call brings the count back to >= 0.
Asynchronous task 1 then executes. When it finishes, dispatch_semaphore_signal increases the count by 1, back to 0, so the blocked thread (the main thread) resumes.
Finally semaphore---end, number = 100 is printed.
This achieves thread synchronization: the asynchronous task has been turned into a synchronous one.

Dispatch Semaphore for thread safety and thread synchronization (locking)

Thread safety: if the process running your code has multiple threads executing at the same time, those threads may run the same code simultaneously. If every run still produces the same result as a single-threaded run, and all other variables hold the values you expect, the code is thread-safe.

If every thread only reads a global or static variable and never writes it, that variable is generally thread-safe. If multiple threads write to it at the same time, you generally need thread synchronization, otherwise thread safety may be compromised.

Thread synchronization: thread A and thread B cooperate. When thread A reaches a point where it depends on some result from thread B, it stops and signals thread B to run; thread B does its work and hands the result back; thread A then continues.

A simple analogy: two people having a conversation. They cannot speak at the same time, or neither will be heard clearly (an operation conflict). One person finishes speaking (one thread finishes its operation), then the other person speaks (the other thread starts its operation).

Next we simulate selling train tickets to demonstrate thread safety and solve the thread-synchronization problem (example borrowed from the referenced blog).

Scenario: there are 50 train tickets in total and two ticket windows selling them, one in Beijing and one in Shanghai. Both windows sell tickets at the same time until they are sold out.

Not thread-safe (without a semaphore)

Let's first look at code that does not consider thread safety:

@interface ViewController ()

@property (nonatomic, assign) NSInteger ticketSurplusCount;

@end



/**
 * Not thread-safe: no semaphore is used
 * Initialize the ticket count and the two ticket windows (not thread-safe), then start selling
 */
- (void)initTicketStatusNotSafe {
    NSLog(@"currentThread---%@", [NSThread currentThread]);  // Print the current thread
    NSLog(@"semaphore---begin");

    self.ticketSurplusCount = 50;

    // queue1 represents the Beijing ticket window
    dispatch_queue_t queue1 = dispatch_queue_create("net.bujige.testQueue1", DISPATCH_QUEUE_SERIAL);
    // queue2 represents the Shanghai ticket window
    dispatch_queue_t queue2 = dispatch_queue_create("net.bujige.testQueue2", DISPATCH_QUEUE_SERIAL);

    __weak typeof(self) weakSelf = self;
    dispatch_async(queue1, ^{
        [weakSelf saleTicketNotSafe];
    });

    dispatch_async(queue2, ^{
        [weakSelf saleTicketNotSafe];
    });
}

/**
 * Sell train tickets (not thread-safe)
 */
- (void)saleTicketNotSafe {
    while (1) {
        if (self.ticketSurplusCount > 0) {
            // Tickets remain: keep selling
            self.ticketSurplusCount--;
            NSLog(@"%@", [NSString stringWithFormat:@"Remaining tickets: %ld  Window: %@", (long)self.ticketSurplusCount, [NSThread currentThread]]);
            [NSThread sleepForTimeInterval:0.2];
        } else {
            // Sold out: close the ticket window
            NSLog(@"All train tickets have been sold");
            break;
        }
    }
}

result:

You can see that without considering thread safety (no semaphore), the remaining-ticket counts come out garbled, and the same ticket may even be sold twice. That obviously does not meet the requirement, so we need to handle thread safety.

Thread-safe (using a semaphore as a lock)

The thread-safe version:

@interface ViewController ()

@property (nonatomic, assign) NSInteger ticketSurplusCount;

@end


// Create a global semaphore
dispatch_semaphore_t semaphoreLock;

/**
 * Thread-safe: use a semaphore as a lock
 * Initialize the ticket count and the two ticket windows (thread-safe), then start selling
 */
- (void)initTicketStatusSafe {
    NSLog(@"currentThread---%@", [NSThread currentThread]);  // Print the current thread
    NSLog(@"semaphore---begin");

    semaphoreLock = dispatch_semaphore_create(1);

    self.ticketSurplusCount = 50;

    // queue1 represents the Beijing ticket window
    dispatch_queue_t queue1 = dispatch_queue_create("net.bujige.testQueue1", DISPATCH_QUEUE_SERIAL);
    // queue2 represents the Shanghai ticket window
    dispatch_queue_t queue2 = dispatch_queue_create("net.bujige.testQueue2", DISPATCH_QUEUE_SERIAL);

    __weak typeof(self) weakSelf = self;
    dispatch_async(queue1, ^{
        [weakSelf saleTicketSafe];
    });

    dispatch_async(queue2, ^{
        [weakSelf saleTicketSafe];
    });
}

/**
 * Sell train tickets (thread-safe)
 */
- (void)saleTicketSafe {
    while (1) {
        // Equivalent to taking the lock
        dispatch_semaphore_wait(semaphoreLock, DISPATCH_TIME_FOREVER);

        if (self.ticketSurplusCount > 0) {
            // Tickets remain: keep selling
            self.ticketSurplusCount--;
            NSLog(@"%@", [NSString stringWithFormat:@"Remaining tickets: %ld  Window: %@", (long)self.ticketSurplusCount, [NSThread currentThread]]);
            [NSThread sleepForTimeInterval:0.2];
        } else {
            // Sold out: close the ticket window
            NSLog(@"All train tickets have been sold");

            // Equivalent to releasing the lock
            dispatch_semaphore_signal(semaphoreLock);
            break;
        }

        // Equivalent to releasing the lock
        dispatch_semaphore_signal(semaphoreLock);
    }
}

result:

Idea: here we rely on the dispatch_semaphore mechanism. Each sale runs asynchronously, but if a second sale starts while the first has not finished, its dispatch_semaphore_wait drops the count below 0 and that thread has to wait for the dispatch_semaphore_signal that follows the first sale. The signal raises the count again, the waiting thread resumes and handles the second sale normally, and so on. Every sale is protected in this way, which keeps the whole selling process correct.

As you can see, after taking thread safety into account and using the dispatch_semaphore lock, the ticket counts are correct and nothing is garbled, and the thread-synchronization problem between the two threads is solved as well.


Origin: blog.csdn.net/m0_63852285/article/details/132029952