humancode.us

RIAA Music Revenue Share Over Time

August 24, 2014

I updated my old chart to include 2012-2013 data:

RIAA revenue share over time 1980-2013

GCD Target Queues

August 14, 2014

GCD Logo

This is the fourth post in a series about Grand Central Dispatch.

Come with me on a little detour, so we can take a look at a neat feature in GCD: target queues.

We begin our trip down this scenic byway by learning about a set of queues with very special properties: the global concurrent queues.

Global concurrent queues

GCD provides four global concurrent queues that are always available to your program. These queues are special: they are created automatically by the library, can never be suspended, and treat barrier blocks like regular blocks. Because these queues are concurrent, enqueued blocks can run in parallel, limited only by the number of threads the system is willing to provide.

Each of the four global concurrent queues has a different priority:

  • DISPATCH_QUEUE_PRIORITY_HIGH
  • DISPATCH_QUEUE_PRIORITY_DEFAULT
  • DISPATCH_QUEUE_PRIORITY_LOW
  • DISPATCH_QUEUE_PRIORITY_BACKGROUND

Blocks enqueued on a higher-priority queue take precedence over blocks enqueued on a lower-priority queue.

These global concurrent queues play the role of thread priorities in GCD. Just as with threads, it’s possible to consume all CPU resources executing blocks on a high-priority queue, “starving” a lower-priority queue and preventing its enqueued blocks from executing at all.

You can get a reference to a global concurrent queue this way:

dispatch_queue_t defaultPriorityGlobalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

Read more…

GCD Concurrent Queues

August 6, 2014


This is the third post in a series about Grand Central Dispatch.

If serial queues are a better replacement for mutexes (and more), then concurrent queues are a better replacement for threads (and more).

A concurrent queue allows you to enqueue blocks that will start running without waiting for the previous blocks to finish.

Run the following program several times:

#import <Foundation/Foundation.h>

// Print the given number ten times
void print(int number) {
    for (int count = 0; count < 10; ++count) {
        NSLog(@"%d", number);
    }
}

int main(int argc, const char * argv[]) {
    // Create a queue whose blocks are allowed to run concurrently
    dispatch_queue_t queue = dispatch_queue_create("My concurrent queue", DISPATCH_QUEUE_CONCURRENT);

    @autoreleasepool {
        for (int index = 0; index < 5; ++index) {
            // Enqueue the block and return immediately, without waiting for it
            dispatch_async(queue, ^{
                print(index);
            });
        }
    }
    dispatch_main(); // Park the main thread so the enqueued blocks can run
    return 0;
}

dispatch_async() tells GCD to enqueue a block, but not to wait until the block is done before moving on. This allows us to quickly “toss” five blocks onto the concurrent queue we just created.

When the first block is enqueued, the queue is empty, so it begins running just as it would on a serial queue. However, when the second block is enqueued, it too starts running even though the first block hasn’t finished. The same goes for the third block, the fourth, and the fifth: they can all run at the same time.

Read more…

Using GCD Queues For Synchronization

August 2, 2014


This is the second post in a series about Grand Central Dispatch.

In my previous post, we learned that race conditions are a constant problem for asynchronous programs. In short, these are situations where multiple threads operate on the same data simultaneously, producing unpredictable results.

A classic way of solving this issue is by using a mutual exclusion (mutex) object. Here’s an example using the POSIX threads API:

#import <Foundation/Foundation.h>
#import <pthread/pthread.h>

// Each player gets 100 gems to start
int playerAGems = 100;
int playerBGems = 100;

// Data structure to hold information about the mutual exclusion object
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *thread1(void *arg) {
    // Move 20 gems from player A to player B
    pthread_mutex_lock(&mutex); // Wait until we gain access to the mutex
    playerAGems -= 20;
    playerBGems += 20;
    NSLog(@"Player A now has %d gems, and player B has %d gems.", playerAGems, playerBGems);
    pthread_mutex_unlock(&mutex); // Unlock the mutex
    return NULL;
}

void *thread2(void *arg) {
    // Move 50 gems from player B to player A
    pthread_mutex_lock(&mutex);
    playerAGems += 50;
    playerBGems -= 50;
    NSLog(@"Player A now has %d gems, and player B has %d gems.", playerAGems, playerBGems);
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main() {
    pthread_mutex_init(&mutex, NULL); // Initialize the mutex
    pthread_t t1; // Data structure to hold information about thread 1
    pthread_t t2; // Data structure to hold information about thread 2
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);

    dispatch_main();
}

Read more…

Why GCD?

July 31, 2014


This is the first post in a series about Grand Central Dispatch.

In short: better parallelism.

In the last decade, CPU speed improvements have hit a wall. To continue to get more performance out of the same clock speed, CPU manufacturers have been including more and more cores on their dies (my MacBook Pro has 8 cores and my Mac Pro at work has 24).

CPU speed over the years


The problem is, programs can’t take advantage of these extra cores unless they know how to farm out their work effectively.

Parallelism deals with the problem of doling out multiple jobs to be done at the same time. This is a very hard problem in general, but GCD makes it a little easier to manage.

You may not have a need to write a massively parallel program, but you can still use GCD to make a more responsive program by not having your UI or service wait for jobs to complete. Typically, you want to toss jobs to other cores while your main program continues serving requests asynchronously. GCD makes this, and other similar tasks, easier.

Read more…
