(Repost) Introduction to GCD (1): Basic Concepts and Dispatch Queues

 Reprinted from: http://www.dreamingwish.com/article/grand-central-dispatch-basic-1.html

 

What is GCD?

Grand Central Dispatch, or GCD, is a set of low-level APIs that provide a new approach to concurrent programming. In basic functionality it resembles NSOperationQueue: both let a program split its work into individual tasks and submit them to a queue for concurrent or serial execution. GCD is lower-level and more efficient than NSOperationQueue, and it is not part of the Cocoa frameworks.

In addition to executing code in parallel, GCD provides a tightly integrated event-handling system. Handlers can be set up to respond to file descriptors, Mach ports (Mach ports are used for interprocess communication on OS X), processes, timers, signals, and user-generated events, and these handlers execute concurrently through GCD.

GCD's API is largely block-based. GCD can also be used without blocks, for example via the traditional C mechanism of passing a function pointer plus a context pointer, but in practice GCD is far easier to use, and shows its full power, when used with blocks.

You can get GCD documentation by typing "man dispatch" on a Mac.

Why use GCD?

GCD offers many advantages over traditional multithreaded programming:

  1. Ease of use:  GCD is easier to use than raw threads. Because GCD is based on work units rather than on threads of execution, it can handle chores for you such as waiting for tasks to finish, monitoring file descriptors, executing code periodically, and suspending work. The block-based API also makes it extremely easy to pass context between different code scopes.
  2. Efficiency:  GCD is implemented so lightly and elegantly that in many places it is more practical and faster than creating dedicated, resource-consuming threads. This ties into ease of use: part of what makes GCD easy to use is that you can just use it, without worrying much about the overhead.
  3. Performance:  GCD automatically grows and shrinks its pool of threads according to system load, which reduces context switching and improves computational efficiency.

Dispatch Objects

Although GCD is pure C, it is organized in an object-oriented style. GCD objects are called dispatch objects. Like Cocoa objects, dispatch objects are reference counted: use the dispatch_retain and dispatch_release functions to manipulate a dispatch object's reference count for memory management. Note, however, that unlike Cocoa objects, dispatch objects do not participate in garbage collection, so even with GC enabled you must manage the memory of GCD objects manually.

Dispatch queues and dispatch sources (described later) can be suspended and resumed, can have an associated arbitrary context pointer, and can have an associated finalizer function. See "man dispatch_object" for more information on these features.
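As a quick sketch of those per-object features (myContextPointer and myFinalizerFunction are hypothetical names used only for illustration):

```objc
// A dispatch object (here, a queue) starts with a retain count of 1.
dispatch_queue_t queue = dispatch_queue_create("com.dreamingwish.example", NULL);

// Suspend: blocks already running finish, but no queued block starts.
// Every dispatch_suspend must be balanced by a dispatch_resume.
dispatch_suspend(queue);
dispatch_resume(queue);

// Attach an arbitrary context pointer, plus a finalizer function that
// is called when the object is about to be destroyed.
dispatch_set_context(queue, myContextPointer);
dispatch_set_finalizer_f(queue, myFinalizerFunction);

dispatch_release(queue);  // balance the create; finalizer runs at count 0
```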

Dispatch Queues

The basic unit of GCD is the dispatch queue. A dispatch queue is an object that accepts tasks and executes them in first-in, first-out order. Dispatch queues can be concurrent or serial: a concurrent queue runs its tasks concurrently as system load permits, much like NSOperationQueue, while a serial queue executes only one task at a time.

There are three queue types in GCD:

  1. The main queue:  serves the same role as the main thread. In fact, tasks submitted to the main queue execute on the main thread. The main queue is obtained by calling dispatch_get_main_queue(). Because it is tied to the main thread, the main queue is a serial queue.
  2. Global queues:  global queues are concurrent queues shared by the entire process. A process has three of them: a high-, a default-, and a low-priority queue. You access them by calling dispatch_get_global_queue with the desired priority.
  3. User queues:  (GCD itself doesn't use this term, but since there is no standard name for this kind of queue, we'll call them user queues) these are queues created with the dispatch_queue_create function. They are serial, and because of that they can be used to build a synchronization mechanism, somewhat like a mutex in traditional threading.
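For reference, obtaining each of the three queue types looks like this (a sketch; the user-queue label is just an example):

```objc
// 1. The main queue -- serial, tied to the main thread.
dispatch_queue_t mainQueue = dispatch_get_main_queue();

// 2. A global queue -- concurrent, shared process-wide.
//    The second argument is reserved; pass 0.
dispatch_queue_t globalQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// 3. A user queue -- serial, created (and therefore owned) by you.
dispatch_queue_t userQueue =
    dispatch_queue_create("com.dreamingwish.subsystem.task", NULL);
```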

Create queue

To use a user queue, we first have to create one by calling dispatch_queue_create. The function's first parameter is a label, which exists purely for debugging: Apple recommends using a reverse-DNS name such as "com.dreamingwish.subsystem.task". These names show up in crash logs and can be queried from the debugger, which can be useful. The second parameter was not supported at the time of writing; just pass NULL.
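A minimal sketch: since user queues are dispatch objects with manually managed reference counts, every create should eventually be balanced by a release.

```objc
// Create a serial user queue; the reverse-DNS label is for debugging only.
dispatch_queue_t queue =
    dispatch_queue_create("com.dreamingwish.subsystem.task", NULL);

// ... submit work to the queue here ...

// Balance the create when the queue is no longer needed.
dispatch_release(queue);
```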

Submit Job

Submitting a job to a queue is simple: call the dispatch_async function, passing in a queue and a block. The queue executes the block's code when it is the block's turn to execute. The following example is one that executes a huge task in the background:

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self goDoSomethingLongAndInvolved];
        NSLog(@"Done doing something long and involved");
    });

The dispatch_async function returns immediately, and the block executes asynchronously in the background.

Of course, in a real program, simply logging a message when the task completes isn't very useful. In a typical Cocoa application you will most likely want to update the interface when the task finishes, which means executing code on the main thread. You can do this simply with nested dispatches: run the background task in the outer block, and dispatch the UI work to the main queue in the inner block:

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self goDoSomethingLongAndInvolved];
        dispatch_async(dispatch_get_main_queue(), ^{
            [textField setStringValue:@"Done doing something long and involved"];
        });
    });

There is also a dispatch_sync function, which does the same thing as dispatch_async except that it waits for the code in the block to finish before returning. Combined with the __block type modifier, it can be used to fetch a value out of the executed block. For example, code running in the background might need a value from a UI control. With dispatch_sync this is simple:

    __block NSString *stringValue;
    dispatch_sync(dispatch_get_main_queue(), ^{
        // __block variables aren't automatically retained,
        // so we'd better make sure we have a reference we can keep
        stringValue = [[textField stringValue] copy];
    });
    [stringValue autorelease];
    // use stringValue in the background now

We can also do this in a better, more "async" style. Instead of blocking the background thread while it fetches the value from the interface layer, end the background block, fetch the value on the main thread, and then submit the post-processing back to the background queue:

    dispatch_queue_t bgQueue = myQueue;
    dispatch_async(dispatch_get_main_queue(), ^{
        NSString *stringValue = [[[textField stringValue] copy] autorelease];
        dispatch_async(bgQueue, ^{
            // use stringValue in the background now
        });
    });

Depending on your needs, myQueue can be a user queue or a global queue.

 

No More Locks

User queues can be used in place of locks to implement synchronization. In traditional multithreaded programming, you might have an object used by multiple threads, and a lock to protect it:

    NSLock *lock;

The access code would look like this:

    - (id)something
    {
        id localSomething;
        [lock lock];
        localSomething = [[something retain] autorelease];
        [lock unlock];
        return localSomething;
    }

    - (void)setSomething:(id)newSomething
    {
        [lock lock];
        if (newSomething != something)
        {
            [something release];
            something = [newSomething retain];
            [self updateSomethingCaches];
        }
        [lock unlock];
    }

With GCD, you can use a queue instead:

    dispatch_queue_t queue;

To be used for synchronization, the queue must be a user queue (as of OS X v10.7 and iOS 4.3, it must also be created as DISPATCH_QUEUE_SERIAL), not a global queue, so initialize one with dispatch_queue_create. You can then wrap the code that accesses the shared data in dispatch_sync or dispatch_async:

    - (id)something
    {
        __block id localSomething;
        dispatch_sync(queue, ^{
            localSomething = [something retain];
        });
        return [localSomething autorelease];
    }

    - (void)setSomething:(id)newSomething
    {
        dispatch_async(queue, ^{
            if (newSomething != something)
            {
                [something release];
                something = [newSomething retain];
                [self updateSomethingCaches];
            }
        });
    }

It's worth noting that dispatch queues are very lightweight, so you can create them liberally for special purposes, just as you used to create locks.

Now you might be asking, "That's great, but so what? I just changed the code around and it does the same thing."

In fact, the GCD approach has several benefits:

  1. Parallelism:  notice how, in the second version of the code, -setSomething: uses dispatch_async. Calling -setSomething: returns immediately, and all that work then happens in the background. This is great if updateSomethingCaches is an expensive task and the caller is about to start some processor-intensive work of its own.
  2. Safety:  with GCD, we can't accidentally write code with unbalanced locks. In ordinary lock-based code, it's all too easy to return before unlocking. With GCD, the queue just keeps running, and you are guaranteed to hand back control.
  3. Control:  with GCD we can suspend and resume dispatch queues, which lock-based approaches cannot do. We can also target a user queue at another dispatch queue, making the user queue inherit that queue's properties. This way a queue's priority can be adjusted by targeting it at a different global queue, and if necessary the queue can even be made to execute its code on the main thread.
  4. Integration:  GCD's event system integrates with dispatch queues. Any events or timers an object needs can be targeted at that object's queue, so that their handlers automatically run on that queue and are therefore automatically synchronized with the object.
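A sketch of that control point (the queue label is made up for illustration): dispatch_suspend and dispatch_resume pause and restart a queue, and dispatch_set_target_queue re-targets it at another queue:

```objc
dispatch_queue_t queue = dispatch_queue_create("com.dreamingwish.worker", NULL);

// Pause the queue: blocks already running finish, but queued blocks wait.
dispatch_suspend(queue);
// ... later, balance the suspend ...
dispatch_resume(queue);

// Raise the queue's effective priority by targeting it
// at the high-priority global queue.
dispatch_set_target_queue(queue,
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0));
```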

Summary

You now know the basic concepts of GCD: how to create dispatch queues, how to submit jobs to them, and how to use queues for thread synchronization. Next I'll show you how to use GCD to write parallel code that takes full advantage of multi-core systems. I'll also discuss deeper parts of GCD, including the event system and queue targeting.
