Mutexes in Cilk

In the previous example, we saw how to manage simple counter state using atomic operations. For more complex state, we can use a mutex to safely access shared data from multiple parallel tasks.
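For reference, the atomic approach looked roughly like this (a minimal sketch with illustrative names, not the previous example verbatim):

#include <atomic>

// One shared counter, updated with a single lock-free
// read-modify-write; no mutex is needed for a lone integer.
std::atomic<int> ops(0);

void work(int n) {
    for (int i = 0; i < n; i++) {
        ops.fetch_add(1);
    }
}

An atomic works well for one integer, but it cannot protect a compound structure like the map of counters used below, where each update touches multiple words of memory.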

#include <cilk/cilk.h>
#include <cilk/cilk_api.h>
#include <iostream>
#include <map>
#include <mutex>
#include <string>

// Container holds a map of counters; since we want to
// update it concurrently from multiple threads, we
// add a mutex to synchronize access.
// Note that std::mutex is neither copyable nor movable, so
// a Container holding one cannot be copied; if this struct
// is passed around, do so by reference or pointer.
struct Container {
    std::mutex mu;
    std::map<std::string, int> counters;

    // Lock the mutex before accessing counters; the RAII
    // lock_guard releases it automatically when the function
    // returns, even if an exception is thrown.
    void inc(const std::string& name) {
        std::lock_guard<std::mutex> lock(mu);
        counters[name]++;
    }
};

int main() {
    // A default-constructed std::mutex is unlocked and ready
    // to use, so no explicit initialization is required here.
    Container c;
    c.counters["a"] = 0;
    c.counters["b"] = 0;

    // This lambda increments a named counter
    // in a loop.
    auto doIncrement = [&](const std::string& name, int n) {
        for (int i = 0; i < n; i++) {
            c.inc(name);
        }
    };

    // Run several tasks concurrently; note that they all
    // access the same Container, and two of them increment
    // the same counter. (An equivalent cilk_for version is
    // sketched after this listing.)
    cilk_spawn doIncrement("a", 10000);
    cilk_spawn doIncrement("a", 10000);
    doIncrement("b", 10000);

    cilk_sync;

    // Print the final counter values
    for (const auto& pair : c.counters) {
        std::cout << pair.first << ": " << pair.second << std::endl;
    }

    return 0;
}
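As an aside, the same fan-out could be written with cilk_for, which ends in an implicit sync. This is only a sketch, assuming the same c and doIncrement as above; the names array is introduced here purely for illustration:

// Two iterations increment "a", one increments "b"; the
// iterations may run in parallel, and cilk_for syncs at the end.
const char* names[] = {"a", "a", "b"};
cilk_for (int t = 0; t < 3; t++) {
    doIncrement(names[t], 10000);
}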

Running the program shows that the counters were updated as expected.

$ g++ -fcilkplus mutexes.cpp -o mutexes -lcilkrts
$ ./mutexes
a: 20000
b: 10000
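By default the Cilk Plus runtime uses one worker thread per core. The CILK_NWORKERS environment variable overrides this (for example, CILK_NWORKERS=4 ./mutexes), and cilk_api.h, already included above, lets the program query the count. A line we could add to main for confirmation (a hypothetical addition, not part of the original program):

// Report how many workers the runtime is using.
std::cout << "workers: " << __cilkrts_get_nworkers() << std::endl;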

In this Cilk version, we’ve made several adaptations:

  1. We use cilk_spawn to create parallel tasks instead of goroutines.
  2. We use cilk_sync to wait for all spawned tasks to complete, similar to WaitGroup in Go.
  3. We use C++ std::mutex and std::lock_guard for synchronization (see the sketch after this list).
  4. We use a C++ lambda that captures the Container by reference for doIncrement, instead of a Go closure.
  5. We use std::map instead of Go’s built-in map type.
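To make point 3 concrete: std::lock_guard simply calls mu.lock() in its constructor and mu.unlock() in its destructor. A hand-written equivalent of inc would look like the sketch below, which is more fragile because the unlock is skipped if the body throws:

void inc(const std::string& name) {
    mu.lock();
    counters[name]++;  // if this throws, unlock() never runs
    mu.unlock();
}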

The overall structure and logic remain the same, demonstrating how to safely access shared state across multiple threads using a mutex.
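Note that the mutex serializes every increment, which can become a bottleneck. For pure accumulation patterns, Cilk Plus also offers reducer hyperobjects, which give each worker a private view of the value and merge the views as tasks join. A minimal sketch for a single counter, shown only as a point of comparison with the mutex approach:

#include <cilk/cilk.h>
#include <cilk/reducer_opadd.h>

// Each worker updates its own view; no lock is taken.
cilk::reducer_opadd<int> total(0);

void work(int n) {
    for (int i = 0; i < n; i++) {
        total += 1;
    }
}

// After spawned tasks have synced, total.get_value() yields
// the combined result.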

Next, we’ll look at implementing this same state management task using only threads and message passing.