Writing Thread-Safe Classes with GCD

September 13, 2014

This is the fifth post in a series about Grand Central Dispatch.

So far, we’ve learned that in a multi-threaded program, access to shared data structures must be protected by some kind of synchronization mechanism. We used GCD’s serial queues to gate access to our data structures.

We also discussed using concurrent queues to implement a Readers-Writer lock. To keep this post simple, we’ll stick to using serial queues.

One serious issue remains: whoever uses your data structures has to remember to call dispatch_sync(), or race conditions creep back in. This is obviously a problem, especially when your code is used by someone unfamiliar with your arrangement (for instance, when your code is part of a framework).

Wouldn’t it be great if we could encapsulate the synchronization behavior into our data structure, so that its users don’t have to worry about asynchronous behavior?

Encapsulation, of course, is what classes are really good for. Instead of requiring synchronization code in our user’s codebase, we should make thread-safe classes.
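As a minimal sketch of the idea (the class and queue names here are hypothetical, not taken from the post), a counter class can own a private serial queue and funnel every access to its state through dispatch_sync(), so callers never see the synchronization at all:

```objc
#import <Foundation/Foundation.h>

// Hypothetical thread-safe counter: all state access goes through
// a private serial queue, so callers never touch a lock themselves.
@interface ThreadSafeCounter : NSObject
- (void)increment;
- (NSInteger)value;
@end

@implementation ThreadSafeCounter {
    dispatch_queue_t _queue; // serial queue guarding _count
    NSInteger _count;
}

- (instancetype)init {
    if ((self = [super init])) {
        _queue = dispatch_queue_create("com.example.counter", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)increment {
    dispatch_sync(_queue, ^{ self->_count += 1; });
}

- (NSInteger)value {
    __block NSInteger result;
    dispatch_sync(_queue, ^{ result = self->_count; });
    return result;
}
@end
```

Callers just send increment and value from any thread; the serial queue is an implementation detail.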

Read more…

RIAA Music Revenue Share Over Time

August 24, 2014

I updated my old chart to include 2012-2013 data:

RIAA revenue share over time 1980-2013

GCD Target Queues

August 14, 2014

This is the fourth post in a series about Grand Central Dispatch.

Come with me on a little detour, so we can take a look at a neat feature in GCD: target queues.

We begin our trip down this scenic byway by learning about a set of queues with very special properties: the global concurrent queues.

Global concurrent queues

GCD provides four global concurrent queues that are always available to your program. These queues are special: they are created automatically by the library, can never be suspended, and treat barrier blocks like regular blocks. Because these queues are concurrent, blocks enqueued on them can run in parallel.

Each of the four global concurrent queues has a different priority:

- DISPATCH_QUEUE_PRIORITY_HIGH
- DISPATCH_QUEUE_PRIORITY_DEFAULT
- DISPATCH_QUEUE_PRIORITY_LOW
- DISPATCH_QUEUE_PRIORITY_BACKGROUND
Blocks enqueued on a higher-priority queue will preempt blocks enqueued on a lower-priority queue.

These global concurrent queues play the role of thread priorities in GCD. Just like with threads, it’s possible for blocks on a high-priority queue to consume all CPU resources, “starving” a lower-priority queue and preventing its enqueued blocks from running at all.

You can get a reference to a global concurrent queue this way:

dispatch_queue_t defaultPriorityGlobalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
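A target queue lets you point one queue at another, so that blocks on the first queue ultimately execute in the context of its target. As a sketch of the mechanism (the queue names are made up for illustration), you can give a serial queue of your own the priority of a global queue by retargeting it:

```objc
#import <Foundation/Foundation.h>

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        // A serial queue we create ourselves...
        dispatch_queue_t serialQueue = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

        // ...retargeted at the high-priority global queue, so its blocks
        // run at high priority while still executing one at a time.
        dispatch_queue_t highPriorityQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
        dispatch_set_target_queue(serialQueue, highPriorityQueue);

        dispatch_sync(serialQueue, ^{
            NSLog(@"Running at high priority");
        });
    }
    return 0;
}
```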

Read more…

GCD Concurrent Queues

August 6, 2014

This is the third post in a series about Grand Central Dispatch.

If serial queues are a better replacement for mutexes (and more), then concurrent queues are a better replacement for threads (and more).

A concurrent queue allows you to enqueue blocks that will start running without waiting for the previous blocks to finish.

Run the following program several times:

#import <Foundation/Foundation.h>
#import <unistd.h>

void print(int number) {
    for (int count = 0; count < 10; ++count) {
        NSLog(@"%d", number);
    }
}

int main(int argc, const char * argv[]) {
    dispatch_queue_t queue = dispatch_queue_create("My concurrent queue", DISPATCH_QUEUE_CONCURRENT);

    @autoreleasepool {
        for (int index = 0; index < 5; ++index) {
            dispatch_async(queue, ^{
                print(index);
            });
        }
    }

    // Give the asynchronously enqueued blocks a moment to finish
    // before main() exits and tears the process down.
    sleep(1);
    return 0;
}

dispatch_async() tells GCD to enqueue a block, but not to wait until the block is done before moving on. This allows us to quickly “toss” five blocks onto the concurrent queue we just created.

When the first block is enqueued, the queue is empty, so it begins running the same way it would if the queue were serial. However, when the second block is enqueued, it too will run even though the first block hasn’t finished running. The same goes for the third block, the fourth, and the fifth—they all run at the same time.
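If you do need to know when all of those simultaneously running blocks have finished, GCD’s dispatch groups can track them for you. A minimal sketch (this goes beyond the excerpt above, but uses only standard GCD calls):

```objc
#import <Foundation/Foundation.h>

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        dispatch_queue_t queue = dispatch_queue_create("My concurrent queue", DISPATCH_QUEUE_CONCURRENT);
        dispatch_group_t group = dispatch_group_create();

        for (int index = 0; index < 5; ++index) {
            // Like dispatch_async(), but the group keeps count of
            // blocks that have been enqueued and not yet finished.
            dispatch_group_async(group, queue, ^{
                NSLog(@"Block %d done", index);
            });
        }

        // Blocks until every block in the group has completed.
        dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
        NSLog(@"All blocks finished");
    }
    return 0;
}
```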

Read more…

Using GCD Queues For Synchronization

August 2, 2014

This is the second post in a series about Grand Central Dispatch.

In my previous post, we learned that race conditions are a constant problem for multi-threaded programs. In short, these are situations where data is operated on by multiple threads at the same time, sometimes with unpredictable results.

A classic way of solving this issue is by using a mutual exclusion (mutex) object. Here’s an example using the Posix threads API:

#import <Foundation/Foundation.h>
#import <pthread/pthread.h>

// Each player gets 100 gems to start
int playerAGems = 100;
int playerBGems = 100;

// Data structure to hold information about the mutual exclusion object
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *thread1(void *arg) {
    // Move 20 gems from player A to player B
    pthread_mutex_lock(&mutex); // Wait until we gain access to the mutex
    playerAGems -= 20;
    playerBGems += 20;
    NSLog(@"Player A now has %d gems, and player B has %d gems.", playerAGems, playerBGems);
    pthread_mutex_unlock(&mutex); // Unlock the mutex
    return NULL;
}

void *thread2(void *arg) {
    // Move 50 gems from player B to player A
    pthread_mutex_lock(&mutex); // Both threads must take the same mutex
    playerAGems += 50;
    playerBGems -= 50;
    NSLog(@"Player A now has %d gems, and player B has %d gems.", playerAGems, playerBGems);
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main() {
    pthread_t t1; // Data structure to hold information about thread 1
    pthread_t t2; // Data structure to hold information about thread 2
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL); // Wait for both threads to finish
    pthread_join(t2, NULL);
    return 0;
}

Read more…
