Rate Limiting in Chapel

Rate limiting is an important mechanism for controlling resource utilization and maintaining quality of service. Chapel supports rate limiting elegantly with tasks, channels (from the Channel package module), and timers from the Time module.
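At its simplest, rate limiting just means pausing between units of work. The warm-up below is a minimal, standalone sketch (not part of the example that follows) that serves one request every 200 milliseconds using nothing beyond sleep from the standard Time module:

use Time;

// Warm-up: at most one request every 200 milliseconds,
// enforced simply by sleeping out the interval each time.
for req in 1..5 {
    sleep(0.2);
    writeln("request ", req);
}

The full example below builds the same pacing out of channels, tasks, and a stopwatch so the limiter can be buffered and made bursty.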

// The channel type comes from the 'Channel' package module.
use Time, Channel;

proc main() throws {
    // First we'll look at basic rate limiting. Suppose
    // we want to limit our handling of incoming requests.
    // We'll serve these requests off a channel of the
    // same name.
    var requests = new channel(int, 5);
    for i in 1..5 do
        requests.send(i);
    requests.close();

    // This stopwatch is the regulator in our rate
    // limiting scheme: it tracks how long it has been
    // since we last served a request.
    var limiter = new stopwatch();
    limiter.start();

    // By waiting until at least 200 milliseconds have
    // passed since the previous request before serving
    // the next one, we limit ourselves to 1 request
    // every 200 milliseconds.
    var req: int;
    while requests.recv(req) {
        limiter.stop();
        if limiter.elapsed() < 0.2 then
            sleep(0.2 - limiter.elapsed());
        writeln("request ", req, " ", dateTime.now());
        limiter.clear();
        limiter.start();
    }

    // We may want to allow short bursts of requests in
    // our rate limiting scheme while preserving the
    // overall rate limit. We can accomplish this by
    // buffering our limiter channel. This burstyLimiter
    // channel will allow bursts of up to 3 events.
    var burstyLimiter = new channel(real, 3);

    // Fill up the channel to represent allowed bursting.
    // Each token carries the time (in seconds since the
    // epoch) at which it was issued.
    for 1..3 do
        burstyLimiter.send(timeSinceEpoch().totalSeconds());

    // Every 200 milliseconds a background task adds a new
    // value to burstyLimiter. Chapel waits for all tasks to
    // finish before the program exits, so instead of looping
    // forever the task sends just enough values for the
    // remaining requests.
    begin {
        for 1..2 {
            sleep(0.2);
            burstyLimiter.send(timeSinceEpoch().totalSeconds());
        }
    }

    // Now simulate 5 more incoming requests. The first
    // 3 of these will benefit from the burst capability
    // of burstyLimiter.
    var burstyRequests = new channel(int, 5);
    for i in 1..5 do
        burstyRequests.send(i);
    burstyRequests.close();

    var token: real;
    while burstyRequests.recv(req) {
        // Take a token from the limiter before serving.
        burstyLimiter.recv(token);
        writeln("request ", req, " ", dateTime.now());
    }
}

Running our program we see the first batch of requests handled once every ~200 milliseconds as desired.

$ chpl rate-limiting.chpl -o rate-limiting
$ ./rate-limiting
request 1 2023-05-25T10:30:00.000000
request 2 2023-05-25T10:30:00.200000
request 3 2023-05-25T10:30:00.400000
request 4 2023-05-25T10:30:00.600000
request 5 2023-05-25T10:30:00.800000

For the second batch of requests we serve the first 3 immediately because of the burstable rate limiting, then serve the remaining 2 with ~200ms delays each.

request 1 2023-05-25T10:30:01.000000
request 2 2023-05-25T10:30:01.000000
request 3 2023-05-25T10:30:01.000000
request 4 2023-05-25T10:30:01.200000
request 5 2023-05-25T10:30:01.400000

In this Chapel version, the Time module provides the stopwatch, sleep, and dateTime pieces of the timing logic, while the channel type from the Channel package module handles communication between tasks. The begin statement creates the background task that feeds the bursty limiter. The overall structure and logic of the classic Go rate-limiting example are preserved, adapted to Chapel's syntax and idioms.
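
If you prefer not to depend on the Channel package module, the same token-bucket idea can be sketched with core language features alone: an atomic counter for the burst allowance, a begin task that refills it on a fixed cadence, and compareAndSwap so a token is never spent twice. This is a minimal sketch rather than a drop-in replacement, and the names numRequests, burst, and interval are illustrative:

use Time;

config const numRequests = 5,   // how many requests to simulate
             burst = 3,         // maximum burst size (bucket capacity)
             interval = 0.2;    // seconds between token refills

proc main() {
    var tokens: atomic int;     // current burst allowance
    tokens.write(burst);        // start with a full bucket
    var done: atomic bool;      // tells the refill task to stop

    // Refill task: add one token per interval, never exceeding burst.
    begin {
        while !done.read() {
            sleep(interval);
            // If a consumer races with us, just skip this refill round.
            const cur = tokens.read();
            if cur < burst then
                tokens.compareAndSwap(cur, cur + 1);
        }
    }

    var overall = new stopwatch();
    overall.start();

    // Consume one token per request, waiting whenever the bucket is empty.
    for req in 1..numRequests {
        do {
            const cur = tokens.read();
            if cur > 0 && tokens.compareAndSwap(cur, cur - 1) then
                break;
            sleep(0.01);   // bucket empty: back off briefly, then retry
        } while true;
        writeln("request ", req, " at ", overall.elapsed(), "s");
    }

    // Signal the refill task so it finishes and the program can exit.
    done.write(true);
}

Because a Chapel program waits for its outstanding tasks before exiting, the refill task watches an atomic done flag rather than looping forever, mirroring the bounded ticker task in the channel-based version above.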