chunked transfer-encoding. Given his fine example site, I could easily spot
the problem, and when I re-read the spec (the part I have pasted at the top
of the http_chunks.h file), I realized I had made my state machine slightly
wrong and didn't expect/handle the trailing CRLF that comes after the data
in each chunk (and those extra two bytes sure feel wasted). See the sketch
below. Had to modify test case 34 to match this as well.
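For reference, an illustrative sketch of what each chunk looks like on the
wire (my example data, with <CR><LF> marking the CRLF pairs):

    4<CR><LF>       <- chunk size in hex, followed by CRLF
    Wiki<CR><LF>    <- four data bytes, then the trailing CRLF that was missed
    0<CR><LF>       <- a zero-sized chunk ends the body
    <CR><LF>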
Version 7.7-beta2
Daniel (13 March 2001)
- Added the CURLOPT_CLOSEPOLICY documentation to the curl_easy_setopt man page
for the two supported policies.
- Implemented some support for the CURLOPT_CLOSEPOLICY option. The policies
CURLCLOSEPOLICY_LEAST_RECENTLY_USED and CURLCLOSEPOLICY_OLDEST are now
supported, and the "least recently used" is used as default if no policy
is chosen.
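An illustrative usage sketch (assumes libcurl 7.7 with these options
available; minimal and without error checking):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* close the oldest cached connection when room is needed;
           "least recently used" is what you get if you set no policy */
        curl_easy_setopt(curl, CURLOPT_CLOSEPOLICY, CURLCLOSEPOLICY_OLDEST);
        curl_easy_cleanup(curl);
      }
      return 0;
    }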
- Added CURLOPT_RANDOM_FILE and CURLOPT_EGDSOCKET to libcurl for seeding the
SSL random engine. The random seeding support was also brought to the curl
client with the new options --random-file <file> and --egd-file <file>. I
need some people to really test this to verify that it works as intended.
Remember that libcurl now informs (if verbose is on) when the random seed is
considered weak (for HTTPS connections).
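A sketch of both interfaces (the file and socket paths are made-up examples;
the library fragment assumes an already-initialized CURL *curl handle):

    /* seed the SSL random engine from a file and/or an EGD socket */
    curl_easy_setopt(curl, CURLOPT_RANDOM_FILE, "/dev/urandom");
    curl_easy_setopt(curl, CURLOPT_EGDSOCKET, "/var/run/egd-pool");

or, with the curl client:

    curl --random-file /dev/urandom --egd-file /var/run/egd-pool https://example.com/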
- Made the chunked transfer-encoding engine detect badly formatted data
lengths and return an error if so (we can't possibly extract sensible data
if this is the case). Added a test case that detects this. Number 36. Now
there are 60 test cases.
- Added 5 new libcurl options to curl/curl.h that can be used to control the
persistent connection support in libcurl. They're also documented (fairly
thoroughly) in the curl_easy_setopt.3 man page. Three of them are now
implemented, although not really tested at this point... Anyway, the newly
implemented options are named CURLOPT_MAXCONNECTS, CURLOPT_FRESH_CONNECT and
CURLOPT_FORBID_REUSE (sketched below). The ones still left to write code for
are CURLOPT_CLOSEPOLICY and its related option CURLOPT_CLOSEFUNCTION.
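A rough sketch of the three implemented options (assumes an initialized
CURL *curl handle; the values are arbitrary examples):

    /* keep at most 5 connections cached in this handle */
    curl_easy_setopt(curl, CURLOPT_MAXCONNECTS, 5L);
    /* force the next transfer to open a brand new connection */
    curl_easy_setopt(curl, CURLOPT_FRESH_CONNECT, 1L);
    /* close the connection after the transfer instead of caching it */
    curl_easy_setopt(curl, CURLOPT_FORBID_REUSE, 1L);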
- Made curl (the actual command line tool) use the new libcurl 7.7 persistent
connection support by re-using the same curl handle for every specified file
transfer (sketched below), and after some more test case tweaking we have
100% test case OK. I made some test cases return HTTP/1.0 now to make sure
that works as well.
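The re-use pattern looks roughly like this (an illustrative sketch; the URLs
are made up):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/one.html");
        curl_easy_perform(curl);   /* first transfer opens a connection */

        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/two.html");
        curl_easy_perform(curl);   /* may re-use the cached connection */

        curl_easy_cleanup(curl);   /* closes any cached connections */
      }
      return 0;
    }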
- Had to add 'Connection: close' to the headers of a bunch of test cases so
that curl behaves "old-style" since the test http server doesn't do multiple
connections... Now I get 100% test case OK.
- The curl.haxx.se site, the main curl mailing list and my personal email are
all dead today due to power blackout in the area where the main servers are
located. Horrible.
- I've made persistence work over a squid HTTP proxy. I find it disturbing
that it uses a header that isn't present in any HTTP standard though
(Proxy-Connection:), and that makes me feel that I'm now on the edge of what
the standard actually defines. I need to get this code exercised on a lot
of different HTTP proxies before I feel safe. An illustration follows below.
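An illustrative request through such a proxy, with the non-standard header
present (made-up host, not a paste from a real session):

    GET http://example.com/ HTTP/1.1
    Host: example.com
    Proxy-Connection: Keep-Alive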
Now I'm facing the problem that my test suite servers (both FTP and HTTP)
don't support persistent connections while libcurl now does them. I have to
fix the test servers to make all the test cases run OK.
Daniel (8 March 2001)
- Guenole Bescon reported that libcurl did output errors to stderr even if
MUTE and NOPROGRESS were set. It turned out to be a bug that happens if
there's an error and no ERRORBUFFER is set. This is now corrected.
Version 7.7-beta1
Daniel (8 March 2001)
- "Transfer-Encoding: chunked" is no longer any trouble for libcurl. I've
added two source files and I've run some test downloads that look fine.
- HTTP HEAD works too, even on 1.1 servers.
Daniel (5 March 2001)
- The current 57 test cases now pass OK. This suggests that libcurl works
using the old style with one connection per handle. The test suite doesn't
handle multiple connections yet, so there are no test cases for that.
- I patched telnet.c heavily so that it doesn't use any global variables
anymore. That should make it a lot nicer library-wise.
- The file:// support was modified slightly to use the internal connect-first-
then-do approach.
Daniel (4 March 2001)
- More bugs erased.
Version 7.7-alpha2
Daniel (4 March 2001)
- Now, there's even a basic check that a re-used connection is still alive
before we assume it is. A few first tests have proven that libcurl will
then re-connect instead of re-using the dead connection!
Daniel (2 March 2001)
- Now they work intermixed as well. Major coolness!
- More fiddling around, my 'tiny' client I have for testing purposes now has
proved to download both FTP and HTTP with persistent connections. They do
not work intermixed yet though.
Daniel (1 March 2001)
- Wilfredo Sanchez pointed out a minor spelling mistake in a man page and that
curl_slist_append() should take a const char * as second argument. It does
now.
Daniel (22 February 2001)
- The persistent connections start to look good for HTTP. On a subsequent
request, it seems that libcurl now can pick an already existing connection
if a suitable one exists, or it opens a new one.
- Douglas R. Horner mailed me corrections to the curl_formparse() man page
that I applied.
Daniel (20 February 2001)
- Added the docs/examples/win32sockets.c file for our windows friends.
- Linus Nielsen Feltzing provided brand new TELNET functionality and
improvements:
* Negotiation is now passive. Curl does not negotiate until the peer does.
* Possibility to set negotiation options on the command line, currently only
XDISPLOC, TTYPE and NEW_ENVIRON (called NEW_ENV).
* Now sends the USER environment variable if the -u switch is used.
* Use -t to set telnet options (Linus even updated the man page, awesome!)
- Haven't done such big changes to curl for a while. Moved around a lot of
struct fields and stuff to make multiple connections get connection-specific
data in separate structs so that they can co-exist in a nice way. See the
mailing lists for discussions around how this is gonna be implemented. Docs
and more will follow.
Studied the HTTP RFC to better understand how persistent connections should
work. Seems cool enough.
Daniel (19 February 2001)
- Bob Schader brought me two files that make it easier to set up an MS VC++
libcurl project. He also provided me with an up-to-date libcurl.def file.
- I moved a bunch of prototypes from the public <curl/curl.h> file to the
library-private urldata.h. This is because of the upcoming changes; the
low-level interface is no longer planned to become reality.
Daniel (15 February 2001)
- CURLOPT_POST is not required anymore. Just setting the POST string with
CURLOPT_POSTFIELDS will switch on the HTTP POST. Most other things in
libcurl already work this way, i.e. they require only the parameter to
switch on a feature, so I think this fits well with the rest. Setting a NULL
string switches the POST off again. A sketch follows below.
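A sketch of the new behavior (assumes an initialized CURL *curl handle; the
field string is a made-up example):

    /* no CURLOPT_POST needed; setting the fields implies POST */
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=daniel&tool=curl");
    curl_easy_perform(curl);                  /* performs an HTTP POST */

    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, NULL); /* POST off again */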
- Excellent suggestions from Rich Gray, Rick Jones, Johan Nilsson and Bjorn
Reese helped me define a way to incorporate persistent connections into
libcurl very smoothly. If done right, no changes may have to be made to
older programs and they will just start using persistent connections when
applicable!
Daniel (13 February 2001)
- Changed the word 'timeouted' to 'timed out' in two different error messages.
Suggested by Larry Fahnoe.
Daniel (9 February 2001)
- Frank Reid and Cain Hopwood provided information and research around an
HTTPS PUT/upload problem we seem to have. No solution found yet.
Daniel (8 February 2001)
- An interesting discussion is how to specify an empty password without having
curl ask for it interactively. The current implementation takes an empty
password as a request for a password prompt. However, I still want to
support a blank user field. Thus, today, if you enter "-u :" (without user
and password) curl will prompt for the password. Tricky. How would you
specify that you want the prompt otherwise?
- Made the netrc parse result possible to use for other protocols than FTP and
HTTP (such as the upcoming TELNET fixes).
- The previously mentioned "MSVC++ problems" turned out to be a non-issue.
- Added a HTTP file upload code example in the docs/examples/ section on
request.
- Adjusted the FTP response fix slightly.
Version 7.6.1-pre3
- S. Moonesamy found a flaw in the response reading function for FTP that
could make libcurl not get out of the loop properly when it should, if
libcurl got -1 returned when reading the socket.
- I found a similar mistake in http.c when using a proxy and reading the
results from the proxy connection.
Daniel (6 February 2001)
- S. Moonesamy pointed out that the VC makefile in src/ needed the libpath set
for the debug build to work.
- Daniel Gehriger stepped in to assist with the VC++ stuff Robert Weaver
brought up yesterday.
- Jun-ichiro itojun Hagino brought a big patch that brings IPv6-awareness to
a bunch of different areas within libcurl.
- Robert Weaver told me about the problems the MS VC++ 6.0 compiler has with
the 'static' keyword on a number of libcurl functions. I might need to add a
patch that redefines static when libcurl is compiled with that compiler.
Daniel (4 February 2001)
- curl_easy_getinfo() was extended with two new options:
CURLINFO_CONTENT_LENGTH_DOWNLOAD and CURLINFO_CONTENT_LENGTH_UPLOAD. They
return the full assumed content length of the transfer in the given
direction. CURLINFO_CONTENT_LENGTH_DOWNLOAD will be the Content-Length:
size of an HTTP download. Added descriptions to the man page as well. This
was done after discussions with Bob Schader.
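A sketch of how the download variant can be queried (a fragment; assumes
<stdio.h> and an initialized CURL *curl handle that has performed a
transfer):

    double clen;
    if(curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &clen) ==
       CURLE_OK)
      printf("assumed download size: %.0f bytes\n", clen);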
Daniel (3 February 2001)
- Ingo Ralf Blum provided another fix that makes curl build under the more
recent cygwin installations. It seems they've changed the preset defines to
not include WIN32 anymore.
Version 7.6.1-pre2
Daniel (31 January 2001)
- Curl_read() and curl_read() now return a ssize_t for the size, as they had
to be able to return -1. The telnet support crashed due to this and there
was a possibility of weird behavior all over. Linus Nielsen Feltzing helped
me sort this out.
- Added a configure.in check for a working getaddrinfo() if IPv6 is requested.
I also made the configure script feature --enable-debug which sets a couple
of compiler options when used. It assumes gcc.
Daniel (30 January 2001)
- I finally took a stab at the long-term FIXME item I've had on myself, and
now libcurl works properly when doing an HTTP range request that follows
a Location:. Previously that would make libcurl fail, saying that the server
doesn't seem to support range requests.
- I added a test case for the HTTP PUT resume thing (test case 33).
Version 7.6.1-pre1
Daniel (29 January 2001)
- Yet another Content-Range change. Ok now? Bob Schader checks from his end
and it works for him.
Daniel (27 January 2001)
- So the HTTP PUT resume fix wasn't good. There should apparently be a
Content-Range header when resuming a PUT, as illustrated below.
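An illustration of what a resumed PUT should then carry (made-up numbers:
the last 500 bytes of a 1000-byte file):

    PUT /upload/file HTTP/1.1
    Content-Range: bytes 500-999/1000
    Content-Length: 500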
- I noticed I broke the download check that verifies that a resumed HTTP
download is actually resumed. It got broken because of my new 'httpreq'
field in the main curl struct. I should get slapped. I added a test case
for this now, so I won't be able to ruin this again without noticing.
- Added a test case for content-length verifying when downloading HTTP.
- Made the progress meter title say if the transfer is being resumed. It
makes the output slightly better for resumes.
- When dealing with Location: and HTTP return codes, libcurl will now attempt
to follow the spirit of RFC2616 better. It means that when POSTing to a
URL that is being followed to a second place, the standard will judge what
to do. All HTTP codes except 303 and 305 will cause curl to make a second
POST operation. 303 will make a GET, and 305 is not yet supported.
I also wrote two test cases for this POST/GET/Location stuff.
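A sketch of how this plays out from the API side (assumes an initialized
CURL *curl handle; the URL and data are made up):

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/form");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "data=value");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    /* if the response redirects: most codes cause a second POST,
       while a 303 turns the follow-up request into a GET */
    curl_easy_perform(curl);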
Daniel (26 January 2001)
- Lots of mails back and forth with Bob Schader finally made me add a small
piece of code in the HTTP engine so that HTTP upload resume works. You can
now do an operation like 'curl -T file -C <offset> <URL>' and curl will PUT
the ending part of the file, starting at the given offset, to the specified
URL.
Version 7.6-pre4
Daniel (25 January 2001)
- I took hold of Rick Jones' question about why we don't use recv() and
send() for reading/writing to the sockets, and I've now modified the sread()
and swrite() macros to use them instead. If nothing else, they can be tested
in the next beta round coming right up. A sketch of the idea follows below.
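The idea is along these lines (an illustrative sketch, not the exact macros
from the source):

    /* map the portable read/write macros onto the socket API */
    #define sread(fd, buf, len)  recv((fd), (buf), (len), 0)
    #define swrite(fd, buf, len) send((fd), (buf), (len), 0)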
- Jeff Morrow found a problem with libcurl's usage of SSL_read() and supplied
his research results on how to fix it. It turns out we have to invoke the
function several times in some cases. The same goes for SSL_write().
I made some rather drastic changes all over libcurl to make all writes and
reads get done in one single place, so that this repeated-attempts thing
only has to be implemented at one point.
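The repeated-attempts idea, sketched for the write side (a fragment with
made-up names; assumes an established SSL *ssl plus buf/len, and trims real
error handling):

    int rc;
    do {
      rc = SSL_write(ssl, buf, (int)len);   /* may need to be retried */
    } while(rc <= 0 &&
            SSL_get_error(ssl, rc) == SSL_ERROR_WANT_WRITE);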
- Rick Jones spotted that the 'total time' counter didn't really measure the
total time very accurately at subsecond levels.
- Johan Nilsson pointed out the need to more clearly specify that the timeout
value you set for a download is for the *entire* download. There's currently
no option available that sets a timeout for the connection phase only.
Daniel (24 January 2001)
- Ingo Ralf Blum submitted a series of patches required to get curl to compile
properly with cygwin.
- Robert Weaver posted a fix for the win32 section of the curl_getenv() code
that corrected a potential memory leak.
- Added comments in a few files in a sudden attempt to make the sources
easier to read and understand!
Daniel (23 January 2001)
- Added simple IPv6 detection in the configure script and made the version
string add 'ipv6' to the enable section in that case. ENABLE_IPV6 will be
set if curl is compiled with IPv6 support enabled.
- Added a parser for IPv6-style specified IP-addresses in a URL. Thus, when
IPv6 gets enabled soon, we can use URLs like '[0::1]:80'...
- Made the URL globbing in the client fail silently if there's an error in
the globbing. That makes it almost intuitive: when you don't follow the
syntax rules, globbing is simply switched off and the raw string is used
instead.
I still think we'll get problems with IPv6-style IP addresses when we *want*
globbing on parts of the URL, as the initial part of the URL is sure to
seriously confuse the globber.
Daniel (22 January 2001)
- Björn Stenberg supplied a progress meter patch that makes it look better even
during slow starts. Previously it made some silly assumptions...
- Added two FTP tests for -Q and -Q - stuff, since it was being discussed on
the mailing list. Had to correct ftpserver.pl too, as it had a slight bug.
Daniel (19 January 2001)
- Made the Location: parsers deal with any-length URLs. Thus I removed the last
code that restricts the length of URLs that curl supports.
- Added a --globoff test case (#28) and it quickly identified a memory problem
in src/main.c that I took care of.
Version 7.6-pre3
Daniel (17 January 2001)
- Made the two former files lib/download.c and lib/highlevel.c become the new
lib/transfer.c which makes more sense. I also did the rename from Transfer()
to Curl_Transfer() in the other source files that use the transfer function
in the spirit of using Curl_ prefix for library-scoped global symbols.
Daniel (11 January 2001)
- Added -g/--globoff that switches OFF the URL globbing and thus enables {}[]
letters to be part of the URL. Do note that RFC2396 section 2.4.3 explicitly
mentions that these letters should be escaped. This was posted as a feature
request by Jorge Gutierrez and as a bug by Terry.
- Short options to curl that require parameters can now be specified without
having the option and its parameter separated with a space. -ofile works as
well as -o file. -m20 is equal to -m 20. Do note that this goes for
single-letter options only; long-style options must still be separated from
their parameters with a space.
- Francis Dagenais reported that the SCO compiler still fails when compiling
curl due to the getpass_r() prototype. I've now wrapped it in #ifndef
HAVE_GETPASS_R in an attempt to please the SCO systems.
- Made some minor corrections to get the client to cleanup properly and I made
the separator work again when getting multiple globbed URLs to stdout.
- Worked with Loic Dachary to get the make dist and make distcheck work
correctly. The 'maketgz' script is now using the automake generated 'make
dist' when creating release archives. Loic successfully made 'make rpms'
automatically build RPMs!
Loic Dachary (6 January 2001)
- Automated generation of rpm packages, no need to be root.
- make distcheck generates a proper distribution (EXTRA_DIST
in all Makefile.am modified to match FILES).
Daniel (5 January 2001)
- Huge client-side hack: now multiple URLs are supported. Any number of URLs
can be specified on the command line, and they'll all be downloaded. There
must be a corresponding -o or -O for each URL or the data will be written to
stdout. This needs more testing, time to release a 7.6-pre package.
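For example (made-up URLs), two downloads in one invocation:

    curl -o one.html http://example.com/one.html \
         -o two.html http://example.com/two.html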
- The krb4 support was broken in the release. Fixed now.
- Huge internal symbol rename operation. All non-static but still lib-internal
symbols should now be prefixed with 'Curl_' to prevent collisions with other
libs. All public symbols should be prefixed with 'curl_' and the rest should
be static and thus invisible to the outside world. I updated the INTERNALS
document to say this as well.
Daniel (4 January 2001)
- As Kevin P Roth suggested, I've added text to the man page for every command
line option and what happens when you specify that option more than
once. That hasn't been exactly crystal clear before.
- Made the configure script possible to run from outside the source-tree. For
odd reasons I can't build curl properly outside though. It has to do with
curl's dependencies on libcurl...
- Cut off all older (dated 1999 and earlier) CHANGES entries from this file.
The older piece is named CHANGES.0 and is added to the CVS repository in
case anyone would need it.
- I added another file 'CVS-INFO' to the CVS. It contains information about
files in the CVS that aren't included in release archives and how to build
curl when you get the sources off CVS.
- Updated CONTRIBUTE and FAQ due to the new license.
Daniel (3 January 2001)
- Renamed README.libcurl to LIBCURL
- Changed headers in all sources files to the new dual license concept of
curl: use the MIT/X derivate license *or* MPL. The LEGAL file was updated
accordingly and the MPL 1.1 and MIT/X derivate licenses are now part of the
release archive.