Mutexes in Ruby

In the previous example we saw how to manage simple counter state using atomic operations. For more complex state we can use a mutex to safely access data across multiple threads.

# Mutex is part of Ruby's core library, so no require is needed.

# Container holds a hash of counters; since we want to
# update it concurrently from multiple threads, we
# add a Mutex to synchronize access.
class Container
  def initialize
    @mu = Mutex.new
    @counters = { "a" => 0, "b" => 0 }
  end

  # synchronize locks the mutex before running the block
  # and releases it when the block exits, even if an
  # exception is raised.
  def inc(name)
    @mu.synchronize do
      @counters[name] += 1
    end
  end

  # Expose the counters hash for reading; we only read it
  # after all of the writer threads have finished.
  attr_reader :counters
end

# Note that the Mutex is initialized in the constructor,
# so no additional initialization is required here.
c = Container.new

# This lambda increments a named counter
# in a loop.
do_increment = ->(name, n) do
  n.times { c.inc(name) }
end

# Run several threads concurrently; note
# that they all access the same Container,
# and two of them access the same counter.
threads = []
threads << Thread.new { do_increment.call("a", 10000) }
threads << Thread.new { do_increment.call("a", 10000) }
threads << Thread.new { do_increment.call("b", 10000) }

# Wait for the threads to finish
threads.each(&:join)

puts c.counters

Running the program shows that the counters updated as expected.

$ ruby mutexes.rb
{"a"=>20000, "b"=>10000}

Next we’ll look at implementing this same state management task using only threads and queues.

This Ruby code demonstrates the use of a mutex to safely manage shared state across multiple threads. Here are the key points:

  1. We use Ruby's built-in Mutex class (Thread::Mutex) to create a mutex.
  2. The Container class encapsulates the shared state (a hash of counters) together with the mutex that guards it.
  3. The inc method wraps each update in @mu.synchronize, so only one thread can modify the counters at a time.
  4. We create multiple threads that concurrently increment the counters.
  5. We use Thread.new to create the threads and Thread#join (via threads.each(&:join)) to wait for them to finish.

This approach makes the counter increments thread-safe, preventing the race conditions that unsynchronized access to shared state can cause.
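To see what the mutex buys us, here is an illustrative sketch (not part of the example above) of the same counting done with no synchronization at all. Whether the lost updates actually show up depends on the Ruby implementation and on thread scheduling, but nothing guarantees that the += read-modify-write completes without interruption, so the final counts may fall short of 20000 and 10000.

# Unsynchronized counters: each += is a read, an add, and
# a write, and a thread switch between those steps can make
# two threads overwrite each other's updates.
counters = { "a" => 0, "b" => 0 }

threads = []
threads << Thread.new { 10000.times { counters["a"] += 1 } }
threads << Thread.new { 10000.times { counters["a"] += 1 } }
threads << Thread.new { 10000.times { counters["b"] += 1 } }
threads.each(&:join)

puts counters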