Rate Limiting in the D Programming Language

Rate limiting is an important mechanism for controlling resource utilization and maintaining quality of service. D supports rate limiting with threads, semaphores, and timed sleeps from `core.thread` and `core.sync`.

```d
import std.stdio;
import core.thread;
import core.time;
import core.sync.semaphore;
import std.datetime : Clock;

void main()
{
    // First we'll look at basic rate limiting. Suppose
    // we want to limit our handling of incoming requests.
    // Phobos has no built-in channel type, so we model the
    // queue of pending requests with a simple array.
    int[] requests = [1, 2, 3, 4, 5];

    // Sleeping before serving each request is the regulator
    // in our rate limiting scheme: it limits us to
    // 1 request every 200 milliseconds.
    foreach (req; requests)
    {
        Thread.sleep(200.msecs);
        writeln("request ", req, " ", Clock.currTime());
    }

    // We may want to allow short bursts of requests in
    // our rate limiting scheme while preserving the
    // overall rate limit. We can accomplish this by
    // using a semaphore. This burstyLimiter
    // will allow bursts of up to 3 events.
    auto burstyLimiter = new Semaphore(0);

    // Fill up the semaphore to represent allowed bursting.
    foreach (i; 0 .. 3)
    {
        burstyLimiter.notify();
    }

    // Every 200 milliseconds a background thread adds a new
    // permit to burstyLimiter. (Unlike Go's buffered channel,
    // the semaphore's count is not capped at 3; this is a
    // simplification that is harmless for this short demo.)
    auto refiller = new Thread({
        while (true)
        {
            Thread.sleep(200.msecs);
            burstyLimiter.notify();
        }
    });
    refiller.isDaemon = true; // don't keep the program alive
    refiller.start();

    // Now simulate 5 more incoming requests. The first
    // 3 of these will benefit from the burst capability
    // of burstyLimiter.
    int[] burstyRequests = [1, 2, 3, 4, 5];
    foreach (req; burstyRequests)
    {
        burstyLimiter.wait();
        writeln("request ", req, " ", Clock.currTime());
    }
}
```

Running our program, we see the first batch of requests handled once every ~200 milliseconds as desired.

```
$ dmd rate_limiting.d
$ ./rate_limiting
request 1 2023-May-25 10:30:00.123456
request 2 2023-May-25 10:30:00.323456
request 3 2023-May-25 10:30:00.523456
request 4 2023-May-25 10:30:00.723456
request 5 2023-May-25 10:30:00.923456
```
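A fixed sleep before each request is the simplest regulator, but it adds the handler's own processing time on top of the 200 ms interval. A slightly more accurate variant (a sketch, not part of the original example) tracks the next permitted instant with `core.time.MonoTime` and sleeps only for the time remaining:

```d
import std.stdio;
import core.thread;
import core.time;

void main()
{
    immutable interval = 200.msecs;
    auto next = MonoTime.currTime;

    foreach (req; 1 .. 6)
    {
        // Sleep only until the next scheduled slot, so slow
        // handling doesn't stack extra delay on each iteration.
        auto now = MonoTime.currTime;
        if (next > now)
            Thread.sleep(next - now);
        next += interval;

        writeln("request ", req);
    }
}
```

Because `MonoTime` is monotonic, this regulator is also immune to wall-clock adjustments, which `Clock.currTime`-based timing is not.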

For the second batch of requests we serve the first 3 immediately because of the burstable rate limiting, then serve the remaining 2 with ~200ms delays each.

```
request 1 2023-May-25 10:30:01.123456
request 2 2023-May-25 10:30:01.123457
request 3 2023-May-25 10:30:01.123458
request 4 2023-May-25 10:30:01.323456
request 5 2023-May-25 10:30:01.523456
```

In this D version, we use arrays to hold the pending requests, timed sleeps from `core.thread` for basic rate limiting, `core.sync.semaphore.Semaphore` for bursty rate limiting, and `Thread` for concurrent execution. The overall structure and logic remain similar to the original Go example, adapted to D's syntax and standard library.
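Phobos has no direct equivalent of Go's channels, but a rough channel-like handoff can be sketched with `std.concurrency` message passing. This sketch is illustrative (the `0` sentinel standing in for a channel close is our own convention, not a library feature):

```d
import std.stdio;
import std.concurrency;
import core.thread;
import core.time;

void main()
{
    // The worker plays the role of the receiving goroutine:
    // it serves at most one request every 200 milliseconds.
    auto worker = spawn({
        for (;;)
        {
            auto req = receiveOnly!int();
            if (req == 0)            // 0 acts as our "close" sentinel
                break;
            Thread.sleep(200.msecs);
            writeln("request ", req);
        }
    });

    // "Send" five requests on the channel, then close it.
    foreach (i; 1 .. 6)
        worker.send(i);
    worker.send(0);

    thread_joinAll(); // wait for the worker to drain its queue
}
```

Each spawned thread in `std.concurrency` owns a message queue, so `send`/`receiveOnly` gives the same producer/consumer decoupling that the Go example gets from a buffered channel.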