                                  _   _ ____  _
                              ___| | | |  _ \| |
                             / __| | | | |_) | |
                            | (__| |_| |  _ <| |___
                              \___|\___/|_| \_\_____|

                                  Changelog
Daniel (8 February 2006)
- Philippe Vaucher provided a brilliant piece of test code that shows a
  problem with re-used FTP connections. If the second request on the same
  connection was set not to fetch a "body", libcurl could get confused,
  consider it an attempt to use a dead connection and go acting mighty
  strange.
Daniel (2 February 2006)
- Make --limit-rate [num] mean bytes. It used to be that way, but it broke
  with a change I made in November 2005.
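  For example (a sketch; the URL is a placeholder), this caps the transfer
  speed at 1000 bytes per second:

    curl --limit-rate 1000 http://example.com/file -o file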
Daniel (30 January 2006)
- Added CURLOPT_LOCALPORT and CURLOPT_LOCALPORTRANGE to libcurl. Set with
  the curl tool's --local-port option. These plainly and simply set the
  range of ports to bind the local end of connections to. Implemented due to
  popular demand.
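  A minimal sketch of using the new options from code (the URL and the port
  numbers are placeholders, not from this entry):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(curl, CURLOPT_LOCALPORT, 50000L);   /* first local port to try */
        curl_easy_setopt(curl, CURLOPT_LOCALPORTRANGE, 10L); /* how many ports to try */
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }

  With the tool, the equivalent is: curl --local-port 50000-50009 [URL]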
- Based on an error report by Philippe Vaucher, we no longer count a retried
  connection setup as a follow-redirect. It turns out 1) this fails when an
  FTP connection is re-set up and 2) it makes the max-redirs counter behave
  wrongly.
Daniel (24 January 2006)
- Michal Marek provided a patch for FTP that makes libcurl continue to try
  PASV even after EPSV returned a positive response code, if libcurl failed
  to connect to the port number the EPSV response specified. Obviously some
  people are going through protocol-sensitive firewalls (or similar) that
  don't understand EPSV and then don't allow the second connection unless
  PASV was used. This also called for a minor fix of test case 238.
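  Relatedly, an application that already knows its firewall mishandles EPSV
  can skip it up front with the existing CURLOPT_FTP_USE_EPSV option (a
  sketch; the URL is a placeholder):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.txt");
        curl_easy_setopt(curl, CURLOPT_FTP_USE_EPSV, 0L); /* go straight to PASV */
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }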
Daniel (20 January 2006)
- Duane Cathey was one of our friends who reported that curl -P [IP]
  (CURLOPT_FTPPORT) didn't work for ipv6-enabled curls if the IP wasn't a
  "native" IP, while it works fine for ipv6-disabled builds!
  In the process of fixing this, I removed the support for LPRT since I
  can't think of many reasons to keep it, and asking on the mailing list
  didn't reveal anyone else who could either. The code that sends EPRT and
  PORT is now also a lot simpler than before (IMHO).
Daniel (19 January 2006)
- Jon Turner pointed out that doing -P [hostname] (CURLOPT_FTPPORT) with curl
(built ipv4-only) didn't work.
Daniel (18 January 2006)
- As reported in bug #1408742 (http://curl.haxx.se/bug/view.cgi?id=1408742),
the configure script complained about a missing "missing" script if you ran
configure within a path whose name included one or more spaces. This is due
to a flaw in automake (1.9.6 and earlier). I've now worked around it by
including an "overloaded" version of the AM_MISSING_HAS_RUN script that'll
be used instead of the one automake ships with. This kludge needs to be
removed once we get an automake version with this problem corrected.
  Possibly we'll then need to convert this into a kludge that depends on
  which automake version is used, and that is going to be painful; I don't
  even want to think about that now...!
Daniel (17 January 2006)
- David Shaw: Here are the latest libcurl.m4 autoconf tests. They are
  updated with the latest features and protocols that libcurl supports and
  have a minor fix to better deal with the obscure case where someone has
  more than one libcurl installed at the same time.
Daniel (16 January 2006)
- David Shaw finally removed all traces of Gopher and we are now officially
not supporting it. It hasn't been functioning for years anyway, so this is
just finally stating what already was true. And a cleanup at the same time.
- Bryan Henderson turned the 'initialized' variable for curl_global_init()
  into a counter, so you can now call curl_global_init() multiple times and
  are then supposed to make the same number of calls to
  curl_global_cleanup(). Bryan has also updated the docs accordingly.
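  A minimal sketch of the counted pairing described above:

    #include <curl/curl.h>

    int main(void)
    {
      curl_global_init(CURL_GLOBAL_ALL);  /* counter: 0 -> 1, initializes */
      curl_global_init(CURL_GLOBAL_ALL);  /* counter: 1 -> 2, no re-init */

      /* ... use libcurl ... */

      curl_global_cleanup();              /* counter: 2 -> 1, nothing freed yet */
      curl_global_cleanup();              /* counter: 1 -> 0, resources released */
      return 0;
    }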
Daniel (13 January 2006)
- Andrew Benham fixed a race condition in the test suite that could cause the
test script to kill all processes in the current process group!
- Michael Jahn:
  Fixed FTP_SKIP_PASV_IP and FTP_USE_EPSV to "do right" when used on FTP
  through an HTTP proxy.
  Fixed PROXYTUNNEL to work fine when you do FTP through a proxy. It would
  previously overwrite internal memory and cause unpredictable behaviour!
Daniel (11 January 2006)
- I decided to document the "secret option" here now, as I've received *NO*
feedback at all on my mailing list requests from November 2005:
I'm looking for feedback and comments. I added some experimental code the
other day, that allows a libcurl user to select what method libcurl should
use to reach a file on a FTP(S) server.
This functionality is available in CVS code and in recent daily snapshots.
Let me explain...
  The current name for the option is CURLOPT_FTP_FILEMETHOD (--ftp-method
  for the command line tool) and you set it to a long (there are currently
  no defines for the argument values, just plain numericals). You can set
  three different "methods" that do this:
1 multicwd - like today, curl will do a single CWD operation for each path
part in the given URL. For deep hierarchies this means very many
commands. This is how RFC1738 says it should be done. This is the
default.
2 nocwd - no CWD at all is done, curl will do SIZE, RETR, STOR etc and give
a full path to the server.
3 singlecwd - make one CWD with the full target directory and then operate
on the file "normally".
(With the command line tool you do --ftp-method [METHOD], where [METHOD] is
one of "multicwd", "nocwd" or "singlecwd".)
What feedback I'm interested in:
  1 - Do they work at all? Do you find servers where one of these doesn't
  work?
2 - What would proper names for the option and its arguments be, if we
consider this feature good enough to get included and documented in
upcoming releases?
3 - Should we make libcurl able to "walk through" these options in case of
(path related) failures, or should it fail and let the user redo any
possible retries?
(This option is not documented in any man page just yet since I'm not sure
these names will be used or if the functionality will end up exactly like
this. And for the same reasons we have no test cases for these yet.)
Daniel (10 January 2006)
- When using a bad path over FTP, as in when libcurl couldn't CWD into all
  given subdirs, libcurl would still "remember" the full path as if it were
  the current directory, so the next curl_easy_perform() would get really
  confused if it tried the same path again, as it would not issue any CWD
  commands at all, assuming it was already in the "proper" dir.
  Starting now, a failed CWD command sets a flag that prevents the path from
  being "remembered" after returning.
Daniel (7 January 2006)
- Michael Jahn fixed it so that the second CONNECT when doing FTP over an
  HTTP proxy actually uses a new connection instead of sending the second
  request on the first socket!
Daniel (6 January 2006)
- Alexander Lazic made buildconf run the buildconf in the ares dir, if
  present, instead of trying to mimic that script in curl's own buildconf
  script.
Daniel (3 January 2006)
- Andres Garcia made the TFTP test server build with mingw.
Daniel (16 December 2005)
- Jean Jacques Drouin pointed out that you could only have a user name or
  password of 127 bytes or less embedded in a URL, even though the code
  actually uses a 255-byte buffer for it! Now modified to use the full
  buffer size.
Daniel (12 December 2005)
- Dov Murik corrected the HTTP_ONLY define to disable the TFTP support
  properly.
Version 7.15.1 (7 December 2005)
Daniel (6 December 2005)
- Full text here: http://curl.haxx.se/docs/adv_20051207.html (pointed out
  by Stefan Esser).
VULNERABILITY
  libcurl's URL parser function can overflow a malloced buffer in two ways,
  if given an overly long URL.
  These overflows happen if you
  1 - pass in a URL with no protocol prefix (like "http://"), using no
      slash, where the string is 256 bytes or longer. This leads to a single
      zero-byte overflow of the malloced buffer.
  2 - pass in a URL with only a question mark as separator (no slash)
      between the host and the query part of the URL. This leads to a single
      zero-byte overflow of the malloced buffer.
  Both overflows can be made with the same input string, leading to two
  single zero-byte overwrites.
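  For illustration only, a sketch of hypothetical inputs with the two shapes
  described above (names and exact lengths are assumptions, not taken from
  the advisory):

    #include <string.h>
    #include <curl/curl.h>

    int main(void)
    {
      char url1[300];
      memset(url1, 'a', 256);  /* case 1: no protocol prefix, no slash, >= 256 bytes */
      url1[256] = '\0';

      char url2[300];
      memset(url2, 'b', 256);  /* case 2: '?' but no '/' between host and query */
      url2[10] = '?';
      url2[256] = '\0';

      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url1); /* or url2 */
        /* in affected versions (before 7.15.1), performing the transfer
           could overflow the malloced URL buffer by a single zero byte */
        curl_easy_cleanup(curl);
      }
      return 0;
    }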
  The flaw cannot be triggered by a redirect; the long URL must be passed in
  "directly" to libcurl, which makes this a "local" problem. Of
course, lots of programs may still pass in user-provided URLs to libcurl
without doing much syntax checking of their own, allowing a user to exploit
this vulnerability.
There is no known exploit at the time of this writing.