                                  _   _ ____  _
                              ___| | | |  _ \| |
                             / __| | | | |_) | |
                            | (__| |_| |  _ <| |___
                             \___|\___/|_| \_\_____|

                                  Changelog
- Michael Jahn:
Fixed FTP_SKIP_PASV_IP and FTP_USE_EPSV to "do right" when used on FTP
through an HTTP proxy.
Fixed PROXYTUNNEL to work fine when you do FTP through a proxy. It would
previously overwrite internal memory and cause unpredictable behaviour!
Daniel (11 January 2006)
- I decided to document the "secret option" here now, as I've received *NO*
feedback at all on my mailing list requests from November 2005:
I'm looking for feedback and comments. I added some experimental code the
other day that allows a libcurl user to select what method libcurl should
use to reach a file on an FTP(S) server.
This functionality is available in CVS code and in recent daily snapshots.
Let me explain...
The current name for the option is CURLOPT_FTP_FILEMETHOD (--ftp-method for
the command line tool) and you set it to a long (there are currently no
defines for the argument values, just plain numerical values). You can set
three different "methods" that work like this:
1 multicwd - like today, curl will do a single CWD operation for each path
part in the given URL. For deep hierarchies this means very many
commands. This is how RFC1738 says it should be done. This is the
default.
2 nocwd - no CWD at all is done, curl will do SIZE, RETR, STOR etc and give
a full path to the server.
3 singlecwd - make one CWD with the full target directory and then operate
on the file "normally".
(With the command line tool you do --ftp-method [METHOD], where [METHOD] is
one of "multicwd", "nocwd" or "singlecwd".)
What feedback I'm interested in:
1 - Do they work at all? Do you find servers where one of these doesn't work?
2 - What would proper names for the option and its arguments be, if we
consider this feature good enough to get included and documented in
upcoming releases?
3 - Should we make libcurl able to "walk through" these options in case of
(path related) failures, or should it fail and let the user redo any
possible retries?
(This option is not documented in any man page just yet since I'm not sure
these names will be used or if the functionality will end up exactly like
this. And for the same reasons we have no test cases for these yet.)
Daniel (10 January 2006)
- When using a bad path over FTP, as in when libcurl couldn't CWD into all
given subdirs, libcurl would still "remember" the full path as if it were the
current directory libcurl is in, so that the next curl_easy_perform() would
get really confused if it tried the same path again - as it would not issue
any CWD commands at all, assuming it is already in the "proper" dir.
Starting now, a failed CWD command sets a flag that prevents the path from
being "remembered" after returning.
Daniel (7 January 2006)
- Michael Jahn fixed it so that the second CONNECT when doing FTP over an HTTP
proxy actually uses a new connection instead of sending the second request on
the first socket!
Daniel (6 January 2006)
- Alexander Lazic made buildconf run the buildconf in the ares dir if that is
present, instead of trying to mimic that script in curl's own buildconf
script.
Daniel (3 January 2006)
- Andres Garcia made the TFTP test server build with mingw.
Daniel (16 December 2005)
- Jean Jacques Drouin pointed out that you could only have a user name or
password of 127 bytes or less embedded in a URL, even though the code
actually uses a 255 byte buffer for it! Now modified to use the full buffer
size.
Daniel (12 December 2005)
- Dov Murik corrected the HTTP_ONLY define to disable the TFTP support
properly.
Version 7.15.1 (7 December 2005)
Daniel (6 December 2005)
- Full text here: http://curl.haxx.se/docs/adv_20051207.html (pointed out by
Stefan Esser.)
VULNERABILITY
libcurl's URL parser function can overflow a malloced buffer in two ways, if
given an overly long URL.
These overflows happen if you
1 - pass in a URL with no protocol prefix (like "http://"), using no slash,
where the string is 256 bytes or longer. This leads to a single zero byte
overflow of the malloced buffer.
2 - pass in a URL with only a question mark as separator (no slash) between
the host and the query part of the URL. This leads to a single zero byte
overflow of the malloced buffer.
Both overflows can be made with the same input string, leading to two single
zero byte overwrites.
The affected flaw cannot be triggered by a redirect; the long URL must be
passed in "directly" to libcurl. This makes it a "local" problem. Of course,
lots of programs may still pass in user-provided URLs to libcurl without
doing much syntax checking of their own, allowing a user to exploit this
vulnerability.
There is no known exploit at the time of this writing.
Daniel (2 December 2005)
- Jamie Newton pointed out that libcurl's file:// code would call close() on
file descriptor zero if given a non-existing file.
Daniel (24 November 2005)
- Doug Kaufman provided a set of patches to make curl build fine on DJGPP
again using configure.
- Yang Tse provided a whole series of patches to clear up compiler warnings on
MSVC 6.
Daniel (17 November 2005)
- I extended a patch from David Shaw to make libcurl _always_ provide an error
string in the given error buffer, to address the flaw mentioned on 21 Sep
2005.
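For reference, the error buffer in question is the one an application sets
with CURLOPT_ERRORBUFFER. A minimal sketch, with a made-up URL:

  char errbuf[CURL_ERROR_SIZE];
  CURL *curl = curl_easy_init();
  curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  if(curl_easy_perform(curl) != CURLE_OK)
    fprintf(stderr, "error: %s\n", errbuf);  /* now always filled in */
  curl_easy_cleanup(curl);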
Daniel (16 November 2005)
- Applied Albert Chin's patch that makes the libcurl.pc pkgconfig file get
installed at 'make install' time.
- Quagmire reported that he needed to raise an NTLM buffer size for SSPI to
work properly in one case, and so we did. We raised it even for non-SSPI
builds but it should not do any harm. http://curl.haxx.se/bug/view.cgi?id=1356715
- Jan Kunder's debian bug report
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=338680 identified a weird
error message for when you try to upload a file and the requested directory
doesn't exist on the target server.
- Yang Tse fixed compiler warnings in lib/ssluse.c with OpenSSL 0.9.8 and in
lib/memdebug.h that showed up in his msvc builds.
Daniel (13 November 2005)
- Debian bug report 338681 by Jan Kunder: make curl better detect and report
bad limit-rate units:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=338681 Now curl will return
an error if a bad unit is used.
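For example (made-up URL), a recognized unit suffix such as K, M or G is
accepted, while an unrecognized suffix now makes curl return an error:

  curl --limit-rate 100K http://example.com/file   (ok)
  curl --limit-rate 100x http://example.com/file   (bad unit, now an error)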
- Thanks to this nice summary of poll() implementations:
http://www.greenend.org.uk/rjk/2001/06/poll.html and further tests by Eugene
Kotlyarov, we now know that cygwin's poll returns only POLLHUP on remote
connection closure, so we check for that case (too) and re-enable poll for
cygwin builds.
Daniel (12 November 2005)
- Eugene Kotlyarov found out that cygwin's poll() function isn't doing things
right: http://curl.haxx.se/mail/archive-2005-11/0045.html so we now disable
poll() and use select() on cygwin too (we already make the same choice on Mac
OS X).
- Dima Barsky patched problem #1348930: the GnuTLS code completely ignored
client certificates! (http://curl.haxx.se/bug/view.cgi?id=1348930).
Daniel (10 November 2005)
- David Lang fixed IPv6 support for TFTP!
- Introducing range stepping to the curl globbing support. Now you can specify
a step counter by adding :[num] within the brackets when specifying a range:
[1-100:10]
[a-z:2]
If no step counter is set, it defaults to 1 as before:
[1-100]
[d-h]
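For example (with a made-up URL), this fetches file1.txt, file11.txt, ...
file91.txt and saves each one using the matched number:

  curl -o "out_#1.txt" "http://example.com/file[1-100:10].txt"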
Daniel (8 November 2005)
- Removed the use of AI_CANONNAME in the IPv6-enabled resolver functions since
we really have no use for reverse lookups of the address.
I truly hope these are the last reverse lookups we had lingering in the