
Absurdly Powerful & Scalable Rate Limiting

All Your Limits In One Place

Use Prefab's UI to define your limits in one place. Prefab's Dynamic Configuration will push your limit definitions to all your servers instantly.

Use this for in-memory rate limits, rate limits backed by your Redis, or rate limits in our cloud.

Respect Multiple Limits At Once

Fast, efficient, bombproof, and cost-effective rate limiting libraries that don't require any Redis or Memcached on your end.

# this will only fire if we are within our per-second,
# per-minute, and per-day limits
if Prefab.pass?("apis:facebook:get_user")
  Facebook.get_user(user_id)
end

Scales to Millions of Limits

Paying your usage tracking service by the event? Do you really need to send 50 event types for Bob every time he loads a page? How about only sending Bob's unique events every 5 minutes? Done.

To do that, you need a limit per user per distinct event. If you have 100,000 users and 50 distinct events, that's 5 million limits. That's a lot, but that's no problem for Prefab.

if Prefab.pass?("events:pageviews:/pricing/:user12312")
# this will fire 1 / hour / user
TrackingService.trackPage(user12312, "/pricing/")
end

Prefab will efficiently keep track of the millions of individual token buckets needed to make this work.
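To make that concrete, here is a minimal token bucket in plain Ruby, roughly the per-key state a limiter has to track. It is an illustrative sketch, not Prefab's implementation; the TokenBucket class and its capacity and refill_rate parameters are hypothetical.

# Illustrative sketch only -- not Prefab's implementation.
# One bucket like this exists per limit key.
class TokenBucket
  def initialize(capacity:, refill_rate:)
    @capacity = capacity        # max tokens the bucket can hold
    @refill_rate = refill_rate  # tokens added back per second
    @tokens = capacity.to_f
    @last_refill = Time.now
  end

  # Spend a token if one is available.
  def pass?
    refill
    return false if @tokens < 1

    @tokens -= 1
    true
  end

  private

  def refill
    now = Time.now
    @tokens = [@capacity, @tokens + (now - @last_refill) * @refill_rate].min
    @last_refill = now
  end
end

# Roughly "1 / hour / user" for one key:
bucket = TokenBucket.new(capacity: 1, refill_rate: 1.0 / 3600)
bucket.pass? # => true the first time, then false until tokens refill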

Granular Overrides

With clever limit key usage you can set a default limit for a key and a more specific limit for a subkey.

if Prefab.pass?("events:pageviews:/pricing/:user12312")
# this will fire 1 / hour / user
TrackingService.trackPage(user12312, "/pricing/")
end
if Prefab.pass?("events:pageviews:/order_placed/:user12312")
# this will fire 1 / sec / user
TrackingService.trackPage(user12312, "/orded_placed/")
end

Deduplication

Only want to send the 'Welcome Email' once per person, but have some nasty race conditions that could lead to doing it twice? Put an eternal semaphore for 'welcome-email:bob@example.com' and let us store that bit for you forever.

# this will only return true once per user, forever
if Prefab.pass?("send_email:welcome_email:user12312")
  really_send_the_email()
end

Concurrency

Want a reliable distributed semaphore that doesn't require setting up Zookeeper or Consul and won't make you worry about LRU expiry?

Prefab rate limits can be returnable. That means you can acquire a limit, do some work, and then return it.

And it doesn't need to be a single semaphore. That's the beauty of doing this as a limit. You can have a limit of 10, forever; clients pull a lease and return it when they're done. If a client never returns its lease, a customizable expiry makes the token available for someone else to use.

# attempt to acquire 10 tokens
# accept partial if we can't get them all
jobs_to_run = Prefab.acquire("jobs:email-processor",
                             desired: 10,
                             partial: true)

run_jobs(jobs_to_run.count)

# return the leases
jobs_to_run.return_all

Use Everything From In-Memory to Distributed

Each limit definition you create can run at any durability level.

L5_BOMBPROOF_DISTRIBUTED

Prefab.cloud coordinated limits. All reads and writes hit the backing store, which replicates data across three regions to provide fault tolerance in the event of a server failure or Availability Zone outage. Persisted for 3 years.

L4_BEST_EFFORT_DISTRIBUTED [Default]

Prefab.cloud coordinated limits. Small likelihood of lost limit tracking in failure scenarios. Cheaper & faster than level-5.

L3_CUSTOMER_DISTRIBUTED

Bring your own Redis instance and use Prefab's rate limit configuration so you can change the limits on the fly (see the sketch after these levels).

L2_IN_PROCESS

In-memory rate limiters, configurable on the fly with Prefab.

L1_IN_MEMORY

Ultimate speed with no coordination between threads.
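As a rough sketch of the L3_CUSTOMER_DISTRIBUTED level, here is what a bring-your-own-Redis check could look like with the redis-rb gem, using a simple fixed-window counter. The key format and the 100-per-minute values are hypothetical, and Prefab's Redis-backed limiter may work differently.

require "redis"

# Illustrative sketch only: a fixed-window counter in your own Redis.
# The key format and limit values here are hypothetical.
def within_limit?(redis, key, max:, window_seconds:)
  window = Time.now.to_i / window_seconds
  window_key = "ratelimit:#{key}:#{window}"

  count = redis.incr(window_key)
  # Set a TTL the first time we see this window so keys clean themselves up.
  redis.expire(window_key, window_seconds) if count == 1

  count <= max
end

redis = Redis.new

if within_limit?(redis, "apis:facebook:get_user", max: 100, window_seconds: 60)
  # safe to make the call
end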

Instantly Configurable, Absurdly Scalable Rate Limits