humancode.us

Using GCD Queues For Synchronization

August 2, 2014


This is the second post in a series about Grand Central Dispatch.

In my previous post, we learned that race conditions are a constant problem for asynchronous programs. In short, these are situations where multiple threads operate on the same data simultaneously, producing unpredictable results.

A classic way of solving this problem is with a mutual exclusion (mutex) object. Here’s an example using the POSIX threads API:

#import <Foundation/Foundation.h>
#import <pthread/pthread.h>

// Each player gets 100 gems to start
int playerAGems = 100;
int playerBGems = 100;

// Data structure to hold information about the mutual exclusion object
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *thread1(void *arg) {
    // Move 20 gems from player A to player B
    pthread_mutex_lock(&mutex); // Wait until we gain access to the mutex
    playerAGems -= 20;
    playerBGems += 20;
    NSLog(@"Player A now has %d gems, and player B has %d gems.", playerAGems, playerBGems);
    pthread_mutex_unlock(&mutex); // Unlock the mutex
    return NULL;
}

void *thread2(void *arg) {
    // Move 50 gems from player B to player A
    pthread_mutex_lock(&mutex);
    playerAGems += 50;
    playerBGems -= 50;
    NSLog(@"Player A now has %d gems, and player B has %d gems.", playerAGems, playerBGems);
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main() {
    pthread_t t1; // Data structure to hold information about thread 1
    pthread_t t2; // Data structure to hold information about thread 2
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);

    dispatch_main();
}

Build and run this program several times (you have to interrupt the program to stop it, since dispatch_main() never returns). Do the results match your expectations?

Comment out the pthread_mutex_lock() and pthread_mutex_unlock() statements and run the program again several times. What’s different?

In this example, thread1 and thread2 are running at the same time. Let’s say thread1 reaches the pthread_mutex_lock() statement first. The first thread “obtains a lock” on the mutual exclusion object, and continues running.

If thread2 reaches its pthread_mutex_lock() statement while thread1 still owns the lock, thread2 will stop running (i.e. “block”, or “suspend”) and wait until the mutex is unlocked.

This behavior allows thread1 to modify both playerAGems and playerBGems atomically, that is, without anyone else mucking around in between. When thread1 is done, it calls pthread_mutex_unlock(), which allows thread2 to acquire the lock and continue running.

The handoff is also safe: when one thread unlocks the mutex, exactly one waiting thread is granted the lock. There is never a moment when two threads both believe they own the mutex, so the shared data stays protected throughout.

Queues, not mutexes

Let’s rewrite this example using GCD. Here we go:

#import <Foundation/Foundation.h>
#import <pthread/pthread.h>

// Each player gets 100 gems to start
int playerAGems = 100;
int playerBGems = 100;

dispatch_queue_t queue;

void *thread1(void *arg) {
    // Move 20 gems from player A to player B
    dispatch_sync(queue, ^{
        playerAGems -= 20;
        playerBGems += 20;
        NSLog(@"Player A now has %d gems, and player B has %d gems.", playerAGems, playerBGems);
    });
    return NULL;
}

void *thread2(void *arg) {
    // Move 50 gems from player B to player A
    dispatch_sync(queue, ^{
        playerAGems += 50;
        playerBGems -= 50;
        NSLog(@"Player A now has %d gems, and player B has %d gems.", playerAGems, playerBGems);
    });
    return NULL;
}

int main() {
    queue = dispatch_queue_create("Synchronization queue", DISPATCH_QUEUE_SERIAL);
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);

    dispatch_main();
}

In this example, we continue using pthread_create() to create new threads. In a later installment, I’ll show how you can use the global concurrent queues to get the same result.

Notice the similarities and differences? Instead of using a mutex, we use a serial queue, but the logic remains very similar. Recall that a serial queue is a data structure that lets you enqueue blocks that will be run one after another.

Instead of calling pthread_mutex_lock(), we call dispatch_sync(). It tells GCD to enqueue our block on the queue, then blocks our thread until that block has finished executing.

Let’s say thread1 reaches its dispatch_sync() statement first. Its block (taking 20 gems away from player A, and giving 20 gems to player B) is enqueued on our serial queue, and thread1 is suspended until the block has finished executing. Because the queue is empty at this time, the block we just enqueued begins executing immediately.

The question “what thread is that enqueued block running on?” is generally meaningless, because GCD will figure out what thread to commandeer to run your enqueued blocks.

Let’s say thread2 reaches its dispatch_sync() statement while the previous block is executing. Its block (giving 50 gems to player A, and taking 50 gems from player B) is enqueued on the serial queue, “behind” the block that’s currently executing, where it sits and waits its turn. Because thread2 is waiting for its block to finish running, it too is suspended.

The running block eventually finishes, and gets popped off the head of the queue, causing thread1 to resume execution. Since the block from thread2 is now at the head of the queue, it executes. When it completes, thread2 will resume execution.

Congratulations! You’ve learned how to replace mutexes with serial queues. In the next installment, we’ll learn about concurrent queues, a powerful ally.