humancode.us

GCD Target Queues

August 14, 2014


This is the fourth post in a series about Grand Central Dispatch.

Come with me on a little detour, so we can take a look at a neat feature in GCD: target queues.

We begin our trip down this scenic byway by learning about a set of queues with very special properties: the global concurrent queues.

Global concurrent queues

GCD provides four global concurrent queues that are always available to your program. These queues are special: they are created automatically by the library, they can never be suspended, and they treat barrier blocks like regular blocks. Because these queues are concurrent, enqueued blocks can run in parallel with one another.

Each of the four global concurrent queues has a different priority:

  • DISPATCH_QUEUE_PRIORITY_HIGH
  • DISPATCH_QUEUE_PRIORITY_DEFAULT
  • DISPATCH_QUEUE_PRIORITY_LOW
  • DISPATCH_QUEUE_PRIORITY_BACKGROUND

Blocks enqueued on a higher-priority queue are scheduled for execution ahead of blocks enqueued on a lower-priority queue.

These global concurrent queues play the role of thread priorities in GCD. Just as with threads, it’s possible to consume all CPU resources executing blocks on a high-priority queue, starving a lower-priority queue and preventing its enqueued blocks from executing at all.

You can get a reference to a global concurrent queue this way:

dispatch_queue_t defaultPriorityGlobalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

Target queues

So how do you start using these global concurrent queues? Surprise: you’re already using them! Any queue you create must have a target queue. By default, this is set to the DISPATCH_QUEUE_PRIORITY_DEFAULT global concurrent queue.

What does it mean for a queue to have a target queue? It’s a little surprising, actually: each time an enqueued block becomes ready to execute, the queue will re-enqueue that block on the target queue for actual execution.

But wait a minute—haven’t we always assumed that blocks executed on the queue they’re on? Has everything been a total lie?

Not really. Since all new queues target the default-priority global concurrent queue by default, any block that becomes ready to execute on one of your queues executes almost immediately. Unless you change your queue’s target queue, blocks effectively “run in your queue”.

Your queue inherits the priority of its target queue, so setting your queue’s target queue to one of the higher- or lower-priority global concurrent queues effectively changes your queue’s priority.
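For instance, here’s a minimal sketch (the queue label is made up) that bumps a custom serial queue up to high priority by retargeting it:

```objective-c
#import <Foundation/Foundation.h>
#import <stdio.h>

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        // A serial queue; by default it targets the default-priority global queue.
        dispatch_queue_t workQueue =
            dispatch_queue_create("com.example.work", DISPATCH_QUEUE_SERIAL);

        // Retarget it at the high-priority global queue: workQueue's blocks
        // now execute at high priority.
        dispatch_set_target_queue(workQueue,
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0));

        // Run a block synchronously so we can observe the effect before exiting.
        dispatch_sync(workQueue, ^{
            printf("running at high priority\n");
        });
    }
    return 0;
}
```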

Only the global concurrent queues and the main queue get to execute blocks. All other queues must (eventually) target one of these special queues.

Party time with target queues

Let’s play with an example.

Generations ago, many of our (great-)grandparents’ telephones were connected to party lines. This is an arrangement in which all the phones in a community were wired to a single loop, and anyone picking up the phone could hear what everyone else on the line was saying.

Let’s say we have two small groups of people, living in two houses, connected by a party line: house1Folks and house2Folks. The folks in House 1 love to call the folks in House 2! Problem is, no one checks whether anyone else is using the phone before placing a call. Take a look:

// Party line!

#import <Foundation/Foundation.h>

void makeCall(dispatch_queue_t queue, NSString *caller, NSArray *callees) {
    // Randomly call someone (arc4random_uniform avoids modulo bias)
    NSUInteger targetIndex = arc4random_uniform((uint32_t)callees.count);
    NSString *callee = callees[targetIndex];

    NSLog(@"%@ is calling %@...", caller, callee);
    sleep(1);
    NSLog(@"...%@ is done calling %@.", caller, callee);

    // Wait a random interval (up to one second) and call again
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, arc4random_uniform(1000) * NSEC_PER_MSEC), queue, ^{
        makeCall(queue, caller, callees);
    });
}

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        NSArray *house1Folks = @[@"Joe", @"Jack", @"Jill"];
        NSArray *house2Folks = @[@"Irma", @"Irene", @"Ian"];

        dispatch_queue_t house1Queue = dispatch_queue_create("house 1", DISPATCH_QUEUE_CONCURRENT);

        for (NSString *caller in house1Folks) {
            dispatch_async(house1Queue, ^{
                makeCall(house1Queue, caller, house2Folks);
            });
        }
    }
    dispatch_main();
    return 0;
}

Run the program and see what happens:

Jack is calling Ian...
...Jack is done calling Ian.
Jill is calling Ian...
Joe is calling Ian...
...Jill is done calling Ian.
...Joe is done calling Ian.
Jack is calling Irene...
...Jack is done calling Irene.
Jill is calling Irma...
Joe is calling Ian...

How rude! Calls are being made right on top of each other, without waiting for the last conversation to end. Let’s see if we can sort this out. Create a serial queue and set it as house1Queue’s target queue.

// ...

int main(int argc, const char * argv[]) {
    @autoreleasepool {

        NSArray *house1Folks = @[@"Joe", @"Jack", @"Jill"];
        NSArray *house2Folks = @[@"Irma", @"Irene", @"Ian"];

        dispatch_queue_t house1Queue = dispatch_queue_create("house 1", DISPATCH_QUEUE_CONCURRENT);

        // Set the target queue
        dispatch_queue_t partyLine = dispatch_queue_create("party line", DISPATCH_QUEUE_SERIAL);
        dispatch_set_target_queue(house1Queue, partyLine);

        for (NSString *caller in house1Folks) {
            dispatch_async(house1Queue, ^{
                makeCall(house1Queue, caller, house2Folks);
            });
        }
    }
    dispatch_main();
    return 0;
}

Here’s the result:

Joe is calling Ian...
...Joe is done calling Ian.
Jack is calling Irma...
...Jack is done calling Irma.
Jill is calling Irma...
...Jill is done calling Irma.
Joe is calling Irma...
...Joe is done calling Irma.
Jack is calling Irene...
...Jack is done calling Irene.

Much better!

It may not be immediately apparent, but a concurrent queue actually dequeues enqueued blocks in FIFO (first-in-first-out) order: the first block enqueued is the first to start executing. Unlike a serial queue, though, a concurrent queue doesn’t wait for one block to finish before starting the next, so the next enqueued block is started concurrently, then the next, and so on.

But we learned that a queue doesn’t actually run its own blocks, but rather re-enqueues blocks that are ready for execution on its target queue. When you set up a concurrent queue to target a serial queue, it will enqueue all its blocks—in FIFO order—onto the serial queue for execution. Since a serial queue will not execute a block until the previous block has finished running, the blocks that originally went on the concurrent queue are forced to run in a serial fashion. In essence, a serial target queue can serialize a concurrent queue.

house1Queue targets the queue partyLine, and partyLine targets the default-priority global concurrent queue by default, so blocks on house1Queue are re-enqueued on partyLine, and again on the global concurrent queue, where they finally execute.

It’s possible to create a loop out of target queues, where following a sequence of target queues will bring you full circle to the original queue. What happens if you do this is undefined, so don’t do it.

Multiple queues targeting a single queue

More than one queue can target the same queue. The folks in House 2 want in on the action too, so let’s create a queue for them, and set the partyLine queue as its target.

// Party line!

#import <Foundation/Foundation.h>

void makeCall(dispatch_queue_t queue, NSString *caller, NSArray *callees) {
    // Randomly call someone (arc4random_uniform avoids modulo bias)
    NSUInteger targetIndex = arc4random_uniform((uint32_t)callees.count);
    NSString *callee = callees[targetIndex];

    NSLog(@"%@ is calling %@...", caller, callee);
    sleep(1);
    NSLog(@"...%@ is done calling %@.", caller, callee);

    // Wait a random interval (up to one second) and call again
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, arc4random_uniform(1000) * NSEC_PER_MSEC), queue, ^{
        makeCall(queue, caller, callees);
    });
}

int main(int argc, const char * argv[]) {
    @autoreleasepool {

        NSArray *house1Folks = @[@"Joe", @"Jack", @"Jill"];
        NSArray *house2Folks = @[@"Irma", @"Irene", @"Ian"];

        dispatch_queue_t house1Queue = dispatch_queue_create("house 1", DISPATCH_QUEUE_CONCURRENT);
        dispatch_queue_t house2Queue = dispatch_queue_create("house 2", DISPATCH_QUEUE_CONCURRENT);

        // Set the target queue for BOTH house queues
        dispatch_queue_t partyLine = dispatch_queue_create("party line", DISPATCH_QUEUE_SERIAL);
        dispatch_set_target_queue(house1Queue, partyLine);
        dispatch_set_target_queue(house2Queue, partyLine);

        for (NSString *caller in house1Folks) {
            dispatch_async(house1Queue, ^{
                makeCall(house1Queue, caller, house2Folks);
            });
        }
        for (NSString *caller in house2Folks) {
            dispatch_async(house2Queue, ^{
                makeCall(house2Queue, caller, house1Folks);
            });
        }
    }
    dispatch_main();
    return 0;
}

Run the program. What do you observe?

Because both concurrent queues target a single serial queue, blocks from both queues must execute one after another. The single target queue serializes blocks from both concurrent queues.

Remove the target queue assignment for one or both of the concurrent queues and see what happens. Does what you see match your expectations?

Target queues in the real world

Target queues can enable some elegant design patterns. In the examples above, we took one or more concurrent queues and serialized their behavior. Designating a serial queue as a target queue expresses the idea that you can only do one thing at a time, no matter how many separate threads of execution compete for the opportunity. That “one thing” can be a database request, access to a physical disk drive, or operation on some hardware resource.

Setting up a concurrent queue to target a serial queue can cause deadlocks if there are blocks that must run concurrently for the program to proceed. Use this pattern with caution.

Serial target queues are also important when you want to coordinate asynchronous events coming from a diverse set of sources, such as timers, network events, and file system activity. They are especially useful when you have to coordinate events coming from several objects from disparate frameworks, or when you can’t change a class’s source code. I’ll talk about timers and other event sources in a future post.

As my colleague Mike E. points out: setting up a concurrent queue to target a serial queue has no real-world application. I tend to agree: I’m hard-pressed to find an example where it’s not preferable to dispatch_async to a serial queue rather than setting a serial queue as the target of a concurrent queue.

Concurrent target queues give you a different kind of magical power: you can allow blocks to execute pretty much the way they always have, until you enqueue a barrier block. When you do, you stop all enqueued blocks from running until all currently-running blocks are done and your barrier block has finished executing. It’s like hitting a master Pause button on several streams of operations so you can do some work before resuming execution.
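A minimal sketch of that Pause button (the queue label and messages are made up): blocks run concurrently until a barrier block lets the queue drain, runs alone, and then lets later blocks proceed.

```objective-c
#import <Foundation/Foundation.h>
#import <stdio.h>

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        dispatch_queue_t queue =
            dispatch_queue_create("com.example.barrier", DISPATCH_QUEUE_CONCURRENT);

        // These blocks may run concurrently with one another.
        for (int i = 0; i < 3; i++) {
            dispatch_async(queue, ^{
                printf("reader\n");
            });
        }

        // The barrier waits for every block above to finish, runs alone,
        // and only then allows blocks enqueued after it to start.
        dispatch_barrier_async(queue, ^{
            printf("barrier: queue paused, doing exclusive work\n");
        });

        dispatch_async(queue, ^{
            printf("resumed\n");
        });

        // Drain the queue before exiting (a barrier used synchronously).
        dispatch_barrier_sync(queue, ^{ });
    }
    return 0;
}
```

Note that barrier blocks only act as barriers on your own concurrent queues; as mentioned earlier, the global concurrent queues treat them like regular blocks.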

Back on the trail

Here ends our little detour on target queues. I know this can be a lot to take in if you’re just starting out with GCD. In fact, you can go a long way remaining blissfully ignorant of target queues. But one day you’ll stumble upon a problem that you can elegantly solve with the deployment of a strategic target queue, and our little walk off the beaten path will have been worth it.

I hope it’s been an enjoyable stroll. I’ll see you next time, when I’ll talk about designing classes that work well with GCD.