Commit c02660e7 authored by Luca Toscano

mod_ratelimit: fix behavior with proxied content

mod_ratelimit works by splitting data into chunks to send
to the client, sleeping a fixed amount of time (200ms)
between them. So, for example, a rate-limit value of 40
(KiB/s) corresponds to a chunk size of 8192 bytes, flushed
to the client every 200ms.
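
A minimal sketch of the arithmetic, assuming the 200ms interval
(the variable names here are illustrative, not the module's
actual identifiers):

    /* rate-limit is expressed in KiB/s; one chunk is sent per 200ms
     * interval, i.e. 5 chunks per second. Names are illustrative. */
    int rate_kib   = 40;
    int chunk_size = (rate_kib * 1024) / (1000 / 200);  /* 8192 bytes */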

The idea works fine when httpd serves the content directly,
since the filter is called once with a single bucket brigade.
With proxied content, though, the filter is likely to be
called multiple times, with a bucket brigade whose size
corresponds to the maximum allowed proxy buffer size. If that
size is lower or higher than the chunk size, the filter does
not properly rate limit the data sent to the client.

This patch solves the problem with two fixes (both sketched below):
1) do_sleep is now stored in the ctx context struct, so that
   if the filter is invoked multiple times it will still sleep
   when needed. For example, say that chunk_size is 8192 and
   the bucket brigade length is 10240: the filter flushes 8192
   bytes on the first invocation, sleeps 200ms, flushes the
   remaining 2048 bytes and then returns. Without persisting
   do_sleep, the next invocation does the same, clearly not
   producing the correct sleeping pattern.
2) The example above also highlights another issue:
   mod_ratelimit should flush only chunk_size bytes at a time
   (excluding the burst calculation from the picture) and
   buffer leftover data between invocations unless the brigade
   contains EOS.
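
A hedged C sketch of the shape of both fixes; rl_ctx_t as shown,
brigade_len(), has_eos() and send_chunk() are hypothetical
helpers, ctx initialization is elided, and the real code in
mod_ratelimit.c differs in detail:

    #include "httpd.h"
    #include "util_filter.h"
    #include "apr_buckets.h"

    /* Hypothetical context struct; the real one differs. */
    typedef struct {
        apr_size_t chunk_size;          /* bytes per 200ms interval      */
        int do_sleep;                   /* fix 1: survives across calls  */
        apr_bucket_brigade *holdingbb;  /* fix 2: carries partial chunks */
    } rl_ctx_t;

    static apr_status_t rate_limit_filter(ap_filter_t *f,
                                          apr_bucket_brigade *bb)
    {
        rl_ctx_t *ctx = f->ctx;

        /* Fix 2: merge data buffered by the previous invocation
         * with the brigade we were just handed. */
        APR_BRIGADE_CONCAT(ctx->holdingbb, bb);

        for (;;) {
            if (APR_BRIGADE_EMPTY(ctx->holdingbb)) {
                return APR_SUCCESS;     /* do_sleep stays armed in ctx */
            }
            if (brigade_len(ctx->holdingbb) < ctx->chunk_size
                && !has_eos(ctx->holdingbb)) {
                /* Fix 2: less than a full chunk and more data is on
                 * the way: keep it buffered until the next call. */
                return APR_SUCCESS;
            }
            /* Fix 1: a sleep armed by the previous chunk, possibly
             * in an earlier invocation, happens just before sending. */
            if (ctx->do_sleep) {
                apr_sleep(apr_time_from_msec(200));
                ctx->do_sleep = 0;
            }
            /* Split off (at most) chunk_size bytes, flush them to
             * the client, and arm the sleep for the next chunk. */
            send_chunk(f, ctx->holdingbb, ctx->chunk_size);
            ctx->do_sleep = 1;
        }
    }

The key point is that the sleep is armed after each flushed chunk
and performed lazily just before the next one, so the 200ms pattern
holds across filter invocations rather than only within one.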

The change has been tested in various scenarios and appears
to work as expected, but of course more feedback/testing is
welcome.

The original patch was written by me and then refactored by
Yann to be more precise and efficient, basically transforming
an axe into a fine Japanese katana, so credits to him for
this work.

PR: 62362




git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1833875 13f79535-47bb-0310-9956-ffa450edef68
parent 40530b34