- Feb 19, 2007
Daniel Stenberg authored
5).
- Feb 18, 2007
Daniel Stenberg authored
the multi interface and connection re-use that could make a curl_multi_remove_handle() ruin a pointer in another handle. The second problem was less of an actual problem and more of a minor quirk: the connection re-use code wasn't properly checking whether the connection was marked for closure.
- Feb 16, 2007
Daniel Stenberg authored
CURLOPT_RANGE back to no range on an easy handle when using FTP.
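A minimal sketch of the option handling this fix concerns: a byte range is set on an easy handle and then cleared again before the handle is re-used. The URL and the range value are illustrative assumptions only, not taken from the fix itself.

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
        /* first transfer: only fetch the first 500 bytes */
        curl_easy_setopt(curl, CURLOPT_RANGE, "0-499");
        curl_easy_perform(curl);
        /* second transfer on the same handle: back to no range at all */
        curl_easy_setopt(curl, CURLOPT_RANGE, NULL);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }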
- Feb 14, 2007
Dan Fandrich authored
- Feb 13, 2007
Daniel Stenberg authored
the left side of @ to make it short(er).
Daniel Stenberg authored
gmtime_r() like the older VC versions. He also made use of some machine-specific defines to differentiate the "OS" define.
- Feb 12, 2007
Daniel Stenberg authored
Daniel Stenberg authored
SSL/TLS layer. http://www.mozilla.org/projects/security/pki/nss/
Daniel Stenberg authored
to the debug callback. - Shmulik Regev added CURLOPT_HTTP_CONTENT_DECODING and CURLOPT_HTTP_TRANSFER_DECODING which, if set to zero, disable libcurl's internal decoding of content-encoded or transfer-encoded content. This may be preferable when you use libcurl for proxy purposes or similar. The command line tool got a --raw option to disable both at once.
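A hedged illustration of the two new options (the URL is a made-up example): an application that wants encoded data passed through untouched could do something like this, and with the command line tool --raw disables both in one go.

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* pass content/transfer encoded data through without decoding it,
           e.g. when libcurl is used for proxy-like purposes */
        curl_easy_setopt(curl, CURLOPT_HTTP_CONTENT_DECODING, 0L);
        curl_easy_setopt(curl, CURLOPT_HTTP_TRANSFER_DECODING, 0L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }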
Daniel Stenberg authored
that has an easy handle present in the "closure" list pending closure.
- Feb 06, 2007
Daniel Stenberg authored
- Feb 05, 2007
Daniel Stenberg authored
and CURLOPT_CONNECTTIMEOUT_MS that, as their names should hint, do the timeouts with millisecond resolution instead. The only restriction is the alarm() call (sometimes) used to abort name resolves, as that works with full seconds. I fixed the FTP response timeout part of the patch. Internally we now count and keep the timeouts in milliseconds, but it also means we multiply set timeouts by 1000. The effect of this is that no timeout can be set to more than 2^31 milliseconds (on 32 bit systems), which equals 24.86 days. We probably couldn't before either, since the code did *1000 on the timeout values in several places already.
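A small sketch of what the millisecond options look like in application code; the URL and the particular timeout values are just assumptions for illustration.

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* give up on connecting after 2500 milliseconds ... */
        curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 2500L);
        /* ... and abort the whole transfer after 15000 milliseconds */
        curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 15000L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }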
- Feb 03, 2007
Daniel Stenberg authored
fail since they used "1 feb 2007"... - Manfred Schwarb reported that socks5 support was broken and helped us pinpoint the problem. The code now tries harder to use httproxy and proxy where appropriate, as not all proxies are HTTP...
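For context, a rough sketch of how an application points libcurl at a SOCKS5 proxy; the proxy host and port used here are purely hypothetical and not part of the fix itself.

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* route the request through a SOCKS5 proxy rather than an HTTP proxy */
        curl_easy_setopt(curl, CURLOPT_PROXY, "localhost");
        curl_easy_setopt(curl, CURLOPT_PROXYPORT, 1080L);
        curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }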
- Jan 29, 2007
Daniel Stenberg authored
Daniel Stenberg authored
header, you got _two_ User-Agent headers in the CONNECT request...! Added test case 287 to verify the fix.
- Jan 28, 2007
Daniel Stenberg authored
- Jan 27, 2007
Daniel Stenberg authored
platforms.
- Jan 25, 2007
Daniel Stenberg authored
ordinary curl command line, and you will get libcurl-using source code written to the file that performs the equivalent of what your command line operation does!
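As an illustration only, a command such as 'curl --libcurl out.c http://example.com/' writes out a small C program roughly along these lines; the exact generated code depends on the curl version and on the options given.

    /* roughly what "curl --libcurl out.c http://example.com/" could write to
       out.c; not a verbatim copy of the generated file */
    #include <curl/curl.h>

    int main(void)
    {
      CURLcode ret;
      CURL *hnd = curl_easy_init();
      curl_easy_setopt(hnd, CURLOPT_URL, "http://example.com/");
      /* ... one curl_easy_setopt() call per relevant command line option ... */
      ret = curl_easy_perform(hnd);
      curl_easy_cleanup(hnd);
      return (int)ret;
    }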
Dan Fandrich authored
variable from being properly used in many cases (and caused test case 63 to fail).
- Jan 23, 2007
Daniel Stenberg authored
#1 There's a compilation error in http_ntlm.c if USE_NTLM2SESSION is NOT defined. I noticed this while testing various configurations. Line 867 of the current http_ntlm.c is a closing bracket for an if/else pair that only gets compiled in if USE_NTLM2SESSION is defined. But this closing bracket wasn't in an #ifdef, so the code fails to compile unless USE_NTLM2SESSION is defined. Lines 198 and 140 of my patch wrap that closing bracket in an #ifdef USE_NTLM2SESSION. #2 I noticed several picky compiler warnings when DEBUG_ME is defined. I've fixed them with casting. By the way, DEBUG_ME was a huge help in understanding this code. #3 Hopefully the last non-ASCII conversion patch for libcurl in a while. I changed the "NTLMSSP" literal to hex since this signature must always be in ASCII. Conversion code was strategically added where necessary. And the Curl_base64_encode calls were changed so the binary "blobs" http_ntlm.c creates are NOT translated on non-ASCII platforms.
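A simplified, hypothetical illustration of the #1 problem and its fix (this is not the actual http_ntlm.c code; the function and parameter names are invented): when the if/else is opened under an #ifdef, its closing brace must sit under the same #ifdef, otherwise builds without the define see an unmatched brace.

    static void ntlm_path_example(int use_ntlm2)
    {
    #ifdef USE_NTLM2SESSION
      if(use_ntlm2) {
        /* NTLM2 session security path */
      }
      else {
    #endif
        /* plain NTLM path */
        (void)use_ntlm2;
    #ifdef USE_NTLM2SESSION
      } /* guarded closing brace: without this #ifdef, builds lacking
           USE_NTLM2SESSION would fail on an unmatched '}' */
    #endif
    }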
Dan Fandrich authored
are not, due mainly to the lack of support for XML character entities (e.g. & => &amp;). This will make it easier to validate test files using tools like xmllint, as well as edit and view them using XML tools.
- Jan 16, 2007
Daniel Stenberg authored
doing an FTP transfer is removed from a multi handle before completion. The fix also makes the "alive counter" correct on "premature removal" for all protocols.
Dan Fandrich authored
detector. Also changed tftp downloads to URL-unescape the downloaded file name.
- Jan 14, 2007
Daniel Stenberg authored
non-ASCII platforms. It does add some complexity, most notably with more #ifdefs, but I want to see this support added and I can't see how we can add it without the extra stuff.
- Jan 13, 2007
Daniel Stenberg authored
- Jan 05, 2007
Daniel Stenberg authored
curl that uses the new CURLOPT_FTP_SSL_CCC option in libcurl. If enabled, it will make libcurl shut down SSL/TLS after the authentication is done on an FTP-SSL operation.
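A rough sketch of the libcurl side of this; the server name is hypothetical and the option values shown are only one plausible combination: the control channel is encrypted for the login and then dropped back to clear text with CCC.

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
        /* encrypt the control channel so user and password go over TLS */
        curl_easy_setopt(curl, CURLOPT_FTP_SSL, (long)CURLFTPSSL_CONTROL);
        /* ... then issue CCC and continue on a clear-text control channel */
        curl_easy_setopt(curl, CURLOPT_FTP_SSL_CCC, 1L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }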
- Jan 03, 2007
Daniel Stenberg authored
non-ASCII platforms.
Daniel Stenberg authored
downloaded data in two buffers, just to be able to deal with a special HTTP pipelining case. That is now only activated for pipelined transfers. In Matt's case, it showed as a considerable performance difference.
- Jan 02, 2007
Daniel Stenberg authored
(http://curl.haxx.se/bug/view.cgi?id=1603712) (known bug #36) --limit-rate (CURLOPT_MAX_SEND_SPEED_LARGE and CURLOPT_MAX_RECV_SPEED_LARGE) are broken on Windows (since 7.16.0, but that's when they were introduced, as prior to that the limiting logic was done in the application only and not in the library). It was actually also broken on select()-based systems (as opposed to poll()) but we haven't had any such reports. We now use select(), Sleep() or delay() properly to sleep a while without waiting for any input or output when the rate limiting is activated with the easy interface.
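For reference, a brief sketch of enabling the library-side rate limiting on an easy handle; the URL and the 10000 bytes/second cap are example assumptions, roughly what the tool sets up for --limit-rate.

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_off_t limit = 10000; /* bytes per second */
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* cap the transfer speed in both directions */
        curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE, limit);
        curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, limit);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }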
Daniel Stenberg authored
to get built static. It has been mentioned before and was again brought to our attention by Nathanael Nerode, who filed Debian bug report #405226 (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=405226).
- Dec 29, 2006
Daniel Stenberg authored
- Dec 22, 2006
Daniel Stenberg authored
get confused and not acknowledge the 'no_proxy' variable properly once it had used the proxy and you re-used the same easy handle. I made sure the proxy name is properly stored in the connect struct rather than the sessionhandle/easy struct.
Daniel Stenberg authored
variable to point to when it should be a socklen_t.
Daniel Stenberg authored
'curl [URL]' with a URL without a protocol prefix, curl would not send a correct request as it failed to add the protocol prefix.
- Dec 21, 2006
Daniel Stenberg authored
(http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a patch for it: when downloading 2 zero byte files in a row, curl 7.16.0 enters an infinite loop, while curl 7.16.1-20061218 does one additional unnecessary request. Fix: During the "Major overhaul introducing http pipelining support and shared connection cache within the multi handle." change, headerbytecount was moved to live in the Curl_transfer_keeper structure. But that structure is reset in the Transfer method, losing the information that we had about the header size. This patch moves it back to the connectdata struct.
- Dec 16, 2006
Daniel Stenberg authored
under certain conditions when GnuTLS is used.
- Dec 11, 2006
Daniel Stenberg authored
something went wrong, like getting a bad response code back from the server, libcurl would leak memory. Added test case 538 to verify the fix. I also noted that the connection would get cached in that case, which doesn't make sense since it cannot be re-used when the authentication has failed. I fixed that issue too at the same time, and also the fact that the path would be "remembered" in vain for cases where the connection was about to get closed.
- Dec 06, 2006
Daniel Stenberg authored
(http://curl.haxx.se/bug/view.cgi?id=1603712) which is about connections getting cut off prematurely when --limit-rate is used. While I found no such problems in my tests nor in my reading of the code, I found that the --limit-rate code was severely flawed (since it was moved into the lib in 7.15.5) when used with the easy interface and it didn't work as documented, so I reworked it somewhat and now it works in my tests.
- Dec 05, 2006
Daniel Stenberg authored
passing a curl_off_t argument to the Curl_read_rewind() function, which takes a size_t argument. Curl_read_rewind() also had debug code left in it, and it had been placed in a different source file for no good reason even though it is only used from a single spot.
Daniel Stenberg authored
no code present in the library that receives the option. Since it was not possible to use, we know that no current users exist, and thus we simply removed it from the docs and made the code always use the default path.