Rate Limiting in Mercury
Rate limiting is an important mechanism for controlling resource utilization and maintaining quality of service. Java supports rate limiting through various mechanisms, including `ScheduledExecutorService` and third-party libraries.
First, we’ll look at basic rate limiting. Suppose we want to limit our handling of incoming requests. We’ll serve these requests from a queue.
```java
import java.time.Instant;
import java.util.concurrent.*;

public class RateLimiting {
    public static void main(String[] args) throws InterruptedException {

        BlockingQueue<Integer> requests = new LinkedBlockingQueue<>();
        for (int i = 1; i <= 5; i++) {
            requests.offer(i);
        }

        // This ScheduledExecutorService will execute a task every 200 milliseconds.
        // This is the regulator in our rate limiting scheme.
        ScheduledExecutorService limiter = Executors.newScheduledThreadPool(1);
        ScheduledFuture<?> limiterHandle = limiter.scheduleAtFixedRate(() -> {
            try {
                Integer req = requests.take();
                System.out.println("request " + req + " " + Instant.now());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, 0, 200, TimeUnit.MILLISECONDS);

        // Wait for all requests to be processed.
        while (!requests.isEmpty()) {
            Thread.sleep(100);
        }
        // Cancel with interruption in case the task is already blocked on
        // take(), then shut the scheduler down.
        limiterHandle.cancel(true);
        limiter.shutdown();

        // We may want to allow short bursts of requests in our rate limiting
        // scheme while preserving the overall rate limit. We can accomplish
        // this by using a semaphore. This burstyLimiter will allow bursts of
        // up to 3 events.
        Semaphore burstyLimiter = new Semaphore(3);

        // Every 200 milliseconds we'll add a permit to burstyLimiter, up to
        // its limit of 3. Semaphore itself has no upper bound, so we check
        // availablePermits() before releasing.
        ScheduledExecutorService burstyLimiterRefresher = Executors.newScheduledThreadPool(1);
        burstyLimiterRefresher.scheduleAtFixedRate(() -> {
            if (burstyLimiter.availablePermits() < 3) {
                burstyLimiter.release();
            }
        }, 200, 200, TimeUnit.MILLISECONDS);

        // Now simulate 5 more incoming requests. The first 3 of these will
        // benefit from the burst capability of burstyLimiter.
        BlockingQueue<Integer> burstyRequests = new LinkedBlockingQueue<>();
        for (int i = 1; i <= 5; i++) {
            burstyRequests.offer(i);
        }

        ExecutorService executor = Executors.newFixedThreadPool(5);
        for (int i = 0; i < 5; i++) {
            executor.submit(() -> {
                try {
                    burstyLimiter.acquire();
                    Integer req = burstyRequests.take();
                    System.out.println("request " + req + " " + Instant.now());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // Wait for all bursty requests to be processed.
        while (!burstyRequests.isEmpty()) {
            Thread.sleep(100);
        }
        executor.shutdown();
        burstyLimiterRefresher.shutdownNow();
    }
}
```
Running our program, we see the first batch of requests handled once every ~200 milliseconds as desired.
```
request 1 2023-06-01T10:15:00.123Z
request 2 2023-06-01T10:15:00.323Z
request 3 2023-06-01T10:15:00.523Z
request 4 2023-06-01T10:15:00.723Z
request 5 2023-06-01T10:15:00.923Z
```
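To confirm the pacing numerically, a small standalone sketch (the class and method names here are my own, not part of the example above) can record when the scheduler fires and report the gaps between consecutive firings:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class RateLimitPacing {
    // Fires `ticks` times at the given period and returns the
    // millisecond gaps between consecutive firings.
    static List<Long> measureGaps(int ticks, long periodMs) throws InterruptedException {
        List<Long> stamps = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(ticks);
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(() -> {
            // Guard so late runs don't record extra timestamps.
            if (done.getCount() > 0) {
                stamps.add(System.nanoTime());
                done.countDown();
            }
        }, 0, periodMs, TimeUnit.MILLISECONDS);
        done.await();
        ses.shutdownNow();
        List<Long> gaps = new ArrayList<>();
        for (int i = 1; i < stamps.size(); i++) {
            gaps.add((stamps.get(i) - stamps.get(i - 1)) / 1_000_000);
        }
        return gaps;
    }

    public static void main(String[] args) throws InterruptedException {
        // Each gap should be roughly 200 ms.
        System.out.println(measureGaps(5, 200));
    }
}
```

Because `scheduleAtFixedRate` corrects for drift, the gaps stay close to the period rather than accumulating error.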
For the second batch of requests, we serve the first 3 immediately because of the burstable rate limiting, then serve the remaining 2 with ~200ms delays each.
```
request 1 2023-06-01T10:15:01.123Z
request 2 2023-06-01T10:15:01.123Z
request 3 2023-06-01T10:15:01.123Z
request 4 2023-06-01T10:15:01.323Z
request 5 2023-06-01T10:15:01.523Z
```
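The burst behavior falls directly out of the semaphore's permit count: three acquires succeed immediately, and a fourth must wait for the refresher. A minimal standalone check (illustrative only, separate from the example above):

```java
import java.util.concurrent.Semaphore;

public class BurstCheck {
    public static void main(String[] args) {
        Semaphore bursty = new Semaphore(3);
        // The burst capacity: three non-blocking acquires succeed.
        System.out.println(bursty.tryAcquire()); // true
        System.out.println(bursty.tryAcquire()); // true
        System.out.println(bursty.tryAcquire()); // true
        // A fourth request finds no permits and would have to wait.
        System.out.println(bursty.tryAcquire()); // false
        // A release (as the refresher does every 200 ms) restores capacity.
        bursty.release();
        System.out.println(bursty.tryAcquire()); // true
    }
}
```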
This Java implementation uses `ScheduledExecutorService` to simulate the ticker behavior of the original Go example and a `Semaphore` to implement the bursty limiter. The overall structure and behavior are similar, adapted to Java's concurrency utilities.
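For reuse, the bursty pattern can be packaged into a small class. The sketch below (the class name and API are my own, not from any library) bounds the bucket by checking `availablePermits()` before each refill, since `java.util.concurrent.Semaphore` itself imposes no maximum:

```java
import java.util.concurrent.*;

// A token-bucket style limiter: `capacity` permits allow bursts,
// and one permit is refilled per period.
public class BurstyRateLimiter implements AutoCloseable {
    private final Semaphore permits;
    private final int capacity;
    private final ScheduledExecutorService refresher =
            Executors.newSingleThreadScheduledExecutor();

    public BurstyRateLimiter(int capacity, long periodMs) {
        this.capacity = capacity;
        this.permits = new Semaphore(capacity);
        refresher.scheduleAtFixedRate(() -> {
            // Cap the bucket: never exceed the burst capacity.
            if (permits.availablePermits() < capacity) {
                permits.release();
            }
        }, periodMs, periodMs, TimeUnit.MILLISECONDS);
    }

    // Blocks until a permit is available.
    public void acquire() throws InterruptedException {
        permits.acquire();
    }

    // Non-blocking variant: true if a permit was taken.
    public boolean tryAcquire() {
        return permits.tryAcquire();
    }

    @Override
    public void close() {
        refresher.shutdownNow(); // stop the refill task
    }
}
```

A caller wraps each request in `acquire()` and calls `close()` when done, so the refill thread does not keep the JVM alive.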