Currently, the rate limits for various user actions are set quite short, e.g. 30 seconds or 5 minutes.
Hypothetically, if the rate limit for a user action were very long, say 24 hours or a week:
Does the system keep a process running continuously for each user action for that 24 hours or week, consuming processing power and RAM on our server?
Or does it check the rate limits only when the user attempts a particular action?
Thanks.
Top comments (1)
Rate limits are stored as counters in Redis, with one key per limited item per accessor (e.g., articles or comments created per user ID). Rate-limited actions are checked against this counter, which is updated with an expiration time.
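To make the "counter with an expiration time" idea concrete, here's a minimal sketch of that pattern. The names and the `FakeRedis` class are illustrative only (the real system uses Redis's atomic increment and key expiry, not this in-memory stand-in):

```python
import time

class FakeRedis:
    """Mimics the Redis increment-with-expiry semantics used for rate-limit counters."""
    def __init__(self):
        self.store = {}  # key -> [count, expires_at]

    def incr(self, key, period):
        now = time.time()
        entry = self.store.get(key)
        if entry is None or entry[1] <= now:
            # Key absent or expired: start a fresh counting window.
            self.store[key] = [1, now + period]
            return 1
        entry[0] += 1
        return entry[0]

def rate_limited(redis, action, user_id, limit, period_seconds):
    """Return True if this attempt exceeds the allowed count for the window.

    One key per limited item per accessor, e.g.
    "rate_limit:comment_creation:user-42". Note nothing runs in the
    background: the key simply expires, so even a 24-hour window costs
    no ongoing CPU, only the key's small footprint in Redis memory.
    """
    key = f"rate_limit:{action}:user-{user_id}"
    count = redis.incr(key, period_seconds)
    return count > limit

redis = FakeRedis()
# Allow 2 comments per 30-second window; the third attempt is rejected.
results = [rate_limited(redis, "comment_creation", 42, limit=2, period_seconds=30)
           for _ in range(3)]
print(results)  # → [False, False, True]
```

The key point for the original question: a long limit period just means a later `expires_at` on the key, not a long-running process.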
While Redis does consume memory on the system and maintains a listening port for network connections (so some CPU), it's a shared resource with many other uses, such as Sidekiq (background job queues are kept in Redis) and caching (rendered pieces of the page are kept in Redis so they don't need to be regenerated each time). The marginal cost of maintaining and checking the rate limits per user or per IP is a single (local) network round trip to read the stored count in a key, typically only a few milliseconds.
Only some actions are rate limited; you can get a rough sense of which actions are limited (and the timeframes they're measured over) in the code. For those actions, the checks are done during request processing, before returning a response, so no further time or energy is spent once the rate limit is reached.
You can adjust the trigger levels for rate limits in the admin settings; the period (how long to monitor specific events before the counters are cleared) is currently set in the code (linked earlier) as a constant.
The goal is mainly to reduce the impact of aggressive bot behavior (including spam actors) without affecting normal human users; raising the time periods for these activities to 24 hours would definitely increase the number of affected users.