Rate Limiting in Erlang

-module(rate_limiting).
-export([main/0]).

main() ->
    % First we'll look at basic rate limiting. Suppose
    % we want to limit our handling of incoming requests.
    % We'll simulate these requests with a separate
    % process that sends 5 request messages back to this
    % one for handling.
    Main = self(),
    {Requests, RequestsMon} = spawn_monitor(fun() ->
        [Main ! {request, I} || I <- lists:seq(1, 5)],
        exit(normal)
    end),
    
    % This limiter process grants at most one request
    % every 200 milliseconds. This is the regulator in
    % our rate limiting scheme.
    Limiter = spawn(fun() -> limiter() end),
    
    % By waiting for a message from the limiter process
    % before serving each request, we limit ourselves to
    % 1 request every 200 milliseconds.
    handle_requests(Requests, RequestsMon, Limiter),
    
    % We may want to allow short bursts of requests in
    % our rate limiting scheme while preserving the
    % overall rate limit. We can accomplish this by
    % giving the limiter a small budget of tokens. This
    % bursty_limiter process will allow bursts of up to
    % 3 requests before throttling.
    BurstyLimiter = spawn(fun() -> bursty_limiter(3) end),
    
    % Now simulate 5 more incoming requests. The first
    % 3 of these will benefit from the burst capability
    % of bursty_limiter.
    {BurstyRequests, BurstyRequestsMon} = spawn_monitor(fun() ->
        [Main ! {request, I} || I <- lists:seq(1, 5)],
        exit(normal)
    end),
    
    handle_requests(BurstyRequests, BurstyRequestsMon, BurstyLimiter).

limiter() ->
    receive
        {request, From} ->
            % Grant the request, then wait 200 ms before
            % granting the next one.
            From ! ok,
            timer:sleep(200),
            limiter()
    end.

bursty_limiter(Tokens) ->
    receive
        {request, From} when Tokens > 0 ->
            % A token is left in the burst budget: grant
            % the request immediately and spend the token.
            From ! ok,
            bursty_limiter(Tokens - 1);
        {request, From} ->
            % The burst budget is spent: wait 200 ms
            % before granting this request.
            timer:sleep(200),
            From ! ok,
            bursty_limiter(Tokens)
    end.

handle_requests(Requests, RequestsMon, Limiter) ->
    receive
        {request, ReqNum} ->
            % Ask the limiter for permission and block
            % until it replies before serving the request.
            Limiter ! {request, self()},
            receive
                ok ->
                    io:format("request ~p ~p~n", [ReqNum, erlang:timestamp()])
            end,
            handle_requests(Requests, RequestsMon, Limiter);
        {'DOWN', RequestsMon, process, Requests, normal} ->
            % The request generator has exited and all of
            % its requests have been handled.
            ok
    end.

This Erlang program demonstrates rate limiting using processes and message passing. Here’s how it works:

  1. We create a process to simulate incoming requests.

  2. For basic rate limiting, we create a limiter process that replies ok to each permission request, at most once every 200 milliseconds.

  3. We handle requests by waiting for an ok message from the limiter before processing each request (this handshake is sketched on its own after this list).

  4. For bursty rate limiting, we create a bursty_limiter process that allows up to 3 requests immediately, then throttles subsequent requests.

  5. We simulate another batch of requests to demonstrate the bursty limiter’s behavior.
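
The request/grant handshake from step 3 can be written as a small standalone helper. This is only an illustration of the protocol; await_grant/1 is a hypothetical name and not part of the module above:

% Ask Limiter for permission and block until it replies ok.
% handle_requests/3 performs this same send-then-receive
% pair before serving each request.
await_grant(Limiter) ->
    Limiter ! {request, self()},
    receive
        ok -> ok
    end.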

Running this program will show the first batch of requests handled once every ~200 milliseconds as desired. For the second batch, the first 3 requests will be processed immediately due to the burst capability, then the remaining 2 with ~200ms delays each.
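
To try it, one option is the Erlang shell, assuming the code above is saved as rate_limiting.erl in the current directory:

1> c(rate_limiting).
2> rate_limiting:main().

Each output line includes the result of erlang:timestamp(), a {MegaSecs, Secs, MicroSecs} tuple, so the pacing shows up in the seconds and microseconds fields rather than as a formatted time.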

Note that Erlang’s concurrency model is based on lightweight processes and message passing, which is different from Go’s goroutines and channels. However, the concept of rate limiting is implemented similarly, using timed messages instead of tickers.
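
For a closer analogue to Go's ticker, the limiter itself can run off timed messages: timer:send_interval/2 delivers a tick message to the calling process at a fixed interval, and each tick grants at most one waiting request. The sketch below is illustrative only; ticker_limiter/0 and grant_loop/1 are hypothetical names, but the process speaks the same {request, From} / ok protocol as limiter/0, so it could be spawned in its place:

% Pace grants off a periodic tick message instead of
% sleeping between grants.
ticker_limiter() ->
    {ok, _TRef} = timer:send_interval(200, tick),
    grant_loop(queue:new()).

grant_loop(Waiting) ->
    receive
        {request, From} ->
            % Queue the requester until the next tick.
            grant_loop(queue:in(From, Waiting));
        tick ->
            % On each tick, grant at most one queued request.
            case queue:out(Waiting) of
                {{value, From}, Rest} ->
                    From ! ok,
                    grant_loop(Rest);
                {empty, _} ->
                    grant_loop(Waiting)
            end
    end.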