  11 Jul, 2017 (10 commits)
    • mpm_winnt: Advertise support for preshutdown notifications in the service, · 70c97d70
      Evgeny Kotkov authored
      and perform shutdown in response to SERVICE_CONTROL_PRESHUTDOWN.
      
      The plain shutdown notification leaves only a small amount of time for the
      service to finish (and the allowed amount of time has been shrinking with
      every new version of Windows), so handling only that notification increases
      the chance of the process being killed by the SCM instead of shutting down
      gracefully.  Handling the preshutdown control code extends this period and
      increases the chances of finishing everything properly when the machine is
      rebooted or shut down.
      
      (See https://msdn.microsoft.com/en-us/library/windows/desktop/ms683241)
      
      Please note that although preshutdown notifications are available only
      starting with Windows Vista, the code remains compatible with earlier
      versions of Windows: the SCM ignores unknown SERVICE_ACCEPT codes and will
      still send an ordinary SERVICE_CONTROL_SHUTDOWN on older Windows versions.
      
      
      git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1801659 13f79535-47bb-0310-9956-ffa450edef68
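      In practice, the change boils down to setting the SERVICE_ACCEPT_PRESHUTDOWN
      bit in the reported service status and treating SERVICE_CONTROL_PRESHUTDOWN
      like a shutdown request.  A minimal sketch of that pattern (not the actual
      mpm_winnt code; service_status, service_status_handle and signal_shutdown()
      are illustrative placeholders):

          #include <windows.h>

          #ifndef SERVICE_ACCEPT_PRESHUTDOWN   /* missing from pre-Vista SDK headers */
          #define SERVICE_ACCEPT_PRESHUTDOWN  0x00000100
          #define SERVICE_CONTROL_PRESHUTDOWN 0x0000000F
          #endif

          static SERVICE_STATUS        service_status;
          static SERVICE_STATUS_HANDLE service_status_handle;

          static void signal_shutdown(void)
          {
              /* hypothetical: tell the MPM to begin a graceful shutdown */
          }

          static DWORD WINAPI service_ctrl_handler(DWORD ctrl, DWORD event_type,
                                                   LPVOID event_data, LPVOID context)
          {
              switch (ctrl) {
              case SERVICE_CONTROL_PRESHUTDOWN:  /* Vista+: longer grace period */
              case SERVICE_CONTROL_SHUTDOWN:     /* older Windows still sends this */
              case SERVICE_CONTROL_STOP:
                  service_status.dwCurrentState = SERVICE_STOP_PENDING;
                  SetServiceStatus(service_status_handle, &service_status);
                  signal_shutdown();
                  return NO_ERROR;
              default:
                  return ERROR_CALL_NOT_IMPLEMENTED;
              }
          }

          static void report_running(void)
          {
              service_status.dwServiceType      = SERVICE_WIN32_OWN_PROCESS;
              service_status.dwCurrentState     = SERVICE_RUNNING;
              /* Unknown SERVICE_ACCEPT_* bits are ignored by older SCMs, which
               * then fall back to sending SERVICE_CONTROL_SHUTDOWN. */
              service_status.dwControlsAccepted = SERVICE_ACCEPT_STOP
                                                | SERVICE_ACCEPT_SHUTDOWN
                                                | SERVICE_ACCEPT_PRESHUTDOWN;
              SetServiceStatus(service_status_handle, &service_status);
          }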
    • mpm_winnt: Remove unused values of the io_state_e enum. · 7fd5b58c
      Evgeny Kotkov authored
      Submitted By: Ivan Zhakov <ivan {at} visualsvn.com>
      
      
      git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1801657 13f79535-47bb-0310-9956-ffa450edef68
    • d2c26c3b
    • mpm_winnt: Use a LIFO stack instead of a FIFO queue to hold unused · 9db53e77
      Evgeny Kotkov authored
      completion contexts, as that may significantly reduce the memory usage.
      
      This simple change can have a noticeable impact on the amount of memory
      consumed by the child process.  Every completion context in the queue has
      an associated allocator, and every allocator has its ap_max_mem_free limit
      of memory that is not given back to the operating system.  Once the queue
      grows, it cannot shrink back, and every allocator in each of the queued
      completion contexts keeps up to its max_free amount of memory.  The queue
      can only grow when the server has to handle multiple connections
      concurrently.
      
      With that in mind, consider a case with a server that doesn't encounter many
      concurrent connections most of the time, but has occasional spikes when
      it has to serve multiple concurrent connections.  During such spikes, the
      size of the completion context queue grows.
      
      The actual difference between the LIFO and FIFO orders shows up after such
      spikes, when the server is back to a light load and doesn't see much
      concurrency.  With FIFO order, every completion context in the queue is
      used in a round-robin manner, thus touching *every* available allocator
      one by one and ultimately claiming up to (N * ap_max_mem_free) memory from
      the OS.  With LIFO order, only the completion contexts close to the top of
      the stack are used and reused for subsequent connections.  Hence, only a
      small subset of the allocators is ever used, which prevents the remaining
      allocators from unnecessarily acquiring memory from the OS (and keeping
      it) and reduces the overall memory footprint.
      
      Please note that this change doesn't affect the worst-case behavior, which
      is still (N * ap_max_mem_free) memory, but it tends to behave better in
      practice, for the reasons described above.
      
      Another thing worth considering is the new behavior when the OS decides
      to swap out pages of the child process, for example, in a close-to-OOM
      condition.  Handling every new connection after the swap requires the OS
      to load the memory pages for the allocator from the completion context that
      is used for this connection.  With FIFO order, the completion contexts are
      used one by one, and this would cause page loads for every new connection.
      With LIFO order, there will be almost no swapping, since the same completion
      context is going to be reused for subsequent new connections.
      
      
      git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1801655 13f79535-47bb-0310-9956-ffa450edef68
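      A minimal sketch of the idea (not the actual mpm_winnt code; conn_ctx_t,
      ctx_lock and the field names are illustrative): unused contexts are pushed
      onto and popped from the head of a singly linked list, so the most recently
      used context (and its allocator's already-committed memory) is always
      handed out first.

          #include <windows.h>

          typedef struct conn_ctx_t {
              struct conn_ctx_t *next;    /* link in the LIFO free list */
              /* ... per-connection allocator, OVERLAPPED, buffers ... */
          } conn_ctx_t;

          static conn_ctx_t *ctx_stack_top;   /* top of the LIFO stack */
          static CRITICAL_SECTION ctx_lock;   /* assumed to be initialized once */

          static void ctx_push(conn_ctx_t *ctx)
          {
              EnterCriticalSection(&ctx_lock);
              ctx->next = ctx_stack_top;       /* becomes the new top */
              ctx_stack_top = ctx;
              LeaveCriticalSection(&ctx_lock);
          }

          static conn_ctx_t *ctx_pop(void)
          {
              conn_ctx_t *ctx;
              EnterCriticalSection(&ctx_lock);
              ctx = ctx_stack_top;             /* reuse the hottest context */
              if (ctx)
                  ctx_stack_top = ctx->next;
              LeaveCriticalSection(&ctx_lock);
              return ctx;
          }

      A FIFO variant of the same code would append at the tail and dequeue from
      the head, which is exactly what makes it cycle through every queued
      allocator under light load.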
    • mpm_winnt: Drop the APLOG_DEBUG diagnostic saying how many threads · 8dbdd95c
      Evgeny Kotkov authored
      are blocked on the I/O completion port during the shutdown.
      
      Prior to r1801635, the shutdown code needed to know the number of blocked
      threads, as it dispatched the same number of completion packets.  That is
      no longer the case, and the only reason we still maintain the corresponding
      g_blocked_threads variable is this debug diagnostic message.
      
      Drop it in order to reduce the complexity of the fairly critical code in
      the winnt_get_connection() function and to reduce the number of global
      variables.
      
      
      git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1801640 13f79535-47bb-0310-9956-ffa450edef68
    • mpm_winnt: Remove an unnecessary Sleep() in the winnt_accept() function. · 062c97a3
      Evgeny Kotkov authored
      This sleep occurred in a situation where:
      
       - We don't have a free completion context in the queue
       - We can't add one, as doing so would exceed the max_num_completion_contexts
         limit (all worker threads are busy)
       - We have exceeded a 1 second timeout while waiting for it
      
      In this case, the Sleep() call is unnecessary: there is no intermittent
      failure that can be waited out, just an ordinary situation where all
      workers are busy.  Arguably, calling Sleep() here is even harmful, as it
      affects the fairness between the listeners that are blocked waiting for a
      completion context.
      
      So, instead of calling Sleep(), just check for a possible shutdown and
      immediately retry acquiring a completion context.  If all worker threads
      are still busy, the retry will block in the same WaitForSingleObject()
      call, which is fine.
      
      
      git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1801639 13f79535-47bb-0310-9956-ffa450edef68
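      Roughly, the accept-side loop ends up looking like the sketch below
      (illustrative, not the actual winnt_accept() code; ctx_pop(),
      shutdown_pending() and ctx_available_event are assumed helpers/handles):

          #include <windows.h>

          typedef struct conn_ctx_t conn_ctx_t;
          extern conn_ctx_t *ctx_pop(void);           /* take a free context, or NULL  */
          extern int         shutdown_pending(void);  /* non-zero once shutdown begins */
          extern HANDLE      ctx_available_event;     /* signalled when a context frees */

          static conn_ctx_t *get_completion_context(void)
          {
              for (;;) {
                  conn_ctx_t *ctx = ctx_pop();
                  if (ctx)
                      return ctx;                      /* got one, proceed with accept */
                  if (shutdown_pending())
                      return NULL;                     /* stop instead of retrying */
                  /* All workers are busy: block (up to 1s) until a context is
                   * returned, then simply retry.  No Sleep() here, since a timeout
                   * is not a transient failure to wait out, just ordinary
                   * "all workers busy" load. */
                  WaitForSingleObject(ctx_available_event, 1000);
              }
          }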
    • mpm_winnt: Simplify the shutdown code that was waiting for multiple worker · 3168d3f9
      Evgeny Kotkov authored
      thread handles in batches.
      
      Starting from r1801636, there is no difference between ending the wait with
      one or multiple remaining threads.  This is because we terminate the process
      if at least one thread is still active when we hit a timeout.
      
      Therefore, instead of making an effort to evenly distribute and batch the
      handles with WaitForMultipleObjects(), we can just start from one end and
      wait for one thread handle at a time.
      
      
      git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1801637 13f79535-47bb-0310-9956-ffa450edef68
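      An illustrative sketch of the simplified join loop (not the actual code;
      join_workers() and its parameters are assumed names): wait for each worker
      thread handle in turn, and treat any timeout as "at least one thread is
      stuck", which ends the process as described in the next commit below.

          #include <windows.h>

          static void join_workers(HANDLE *threads, DWORD count, DWORD timeout_ms)
          {
              DWORD i;
              for (i = 0; i < count; i++) {
                  if (WaitForSingleObject(threads[i], timeout_ms) == WAIT_TIMEOUT) {
                      /* One stuck thread is enough: terminate the whole process
                       * and let the OS reclaim everything (does not return). */
                      TerminateProcess(GetCurrentProcess(), 1);
                  }
                  CloseHandle(threads[i]);
              }
          }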
    • mpm_winnt: Avoid using TerminateThread() in case the shutdown routine · 3b84b357
      Evgeny Kotkov authored
      hits a timeout while waiting for the worker threads to exit.
      
      Using TerminateThread() can have dangerous consequences, such as deadlocks:
      if the thread is terminated while holding a lock (for example, the heap
      lock in the middle of a HeapAlloc() call), that lock is never released.
      It can also corrupt the application state and cause a crash.
      
      (See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686717)
      
      Rework the code to call TerminateProcess() in the described circumstances
      and leave the cleanup to the operating system.
      
      
      git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1801636 13f79535-47bb-0310-9956-ffa450edef68
    • mpm_winnt: Make the shutdown faster by avoiding unnecessary Sleep() calls · 1c96d681
      Evgeny Kotkov authored
      when shutting down the worker threads.
      
      Previously, the shutdown code was posting a number of I/O completion
      packets equal to the number of threads blocked on the I/O completion
      port.  Then it would Sleep() until all these threads "acknowledged" the
      completion packets by decrementing the global count of blocked threads.
      
      A better way is to post a number of IOCP_SHUTDOWN completion packets equal
      to the total number of threads and immediately proceed to the next step.
      There is no need to block until the threads actually receive the
      completion, as the shutdown process includes a separate step that waits
      until the threads exit, and the new approach avoids an unnecessary delay.
      
      
      git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1801635 13f79535-47bb-0310-9956-ffa450edef68
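      The wake-up step then reduces to a simple loop along these lines (a sketch,
      not the committed code; ThreadDispatchIOCP and total_threads are
      illustrative names for the MPM's completion port handle and worker thread
      count, and IOCP_SHUTDOWN is the completion key mentioned above, given a
      placeholder value here):

          #include <windows.h>

          extern HANDLE ThreadDispatchIOCP;   /* the worker completion port */
          #define IOCP_SHUTDOWN 0             /* placeholder value for the sketch */

          static void wake_all_workers(DWORD total_threads)
          {
              DWORD i;
              for (i = 0; i < total_threads; i++) {
                  /* Each blocked worker dequeues exactly one packet and leaves its
                   * wait; busy workers pick one up on their next wait. */
                  PostQueuedCompletionStatus(ThreadDispatchIOCP, 0, IOCP_SHUTDOWN, NULL);
              }
              /* No Sleep() or polling here: a separate shutdown step waits for
               * the worker threads themselves to exit. */
          }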
    • 4c7fa22a
      Jean-Frederic Clere authored
      Add logic to read the Upgrade header and use it in the response.
      Useful when you are proxying to a server that supports multiple upgrade
      protocols on the same IP/port.
      PR 61142
      
      
      git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1801594 13f79535-47bb-0310-9956-ffa450edef68
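      A rough illustration of the idea (not the committed mod_proxy code;
      reflect_upgrade_header() is a hypothetical helper): read the Upgrade token
      that was actually requested and echo that token back in the upgrade
      response headers, instead of assuming a single hard-coded protocol.

          #include "httpd.h"
          #include "apr_tables.h"

          static void reflect_upgrade_header(request_rec *r)
          {
              const char *upgrade = apr_table_get(r->headers_in, "Upgrade");

              if (upgrade && *upgrade) {
                  /* Send back the protocol that was requested rather than a
                   * fixed value such as "websocket". */
                  apr_table_setn(r->headers_out, "Upgrade", upgrade);
                  apr_table_mergen(r->headers_out, "Connection", "Upgrade");
              }
          }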