Version 2.6
 Daniel Stenberg
 - Made the -l (loop) option use the new CONF_FAILONERROR flag, which makes
   urlget() return an error code when a fetch is non-successful. It also
   won't output anything then. Now finally removed the HTTP 1.0 and error
   404 dependencies.
 - Added -I, which uses a HEAD request to get only the header from an
   HTTP server.
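   With -I, the request line sent is HEAD instead of GET, so the server
   replies with the status line and headers but no body. A sketch of the
   wire format (not necessarily the exact bytes urlget sends):

        HEAD /doc.html HTTP/1.0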

Version 2.5
 Rafael Sagula
 - Made the progress meter use HHH:MM:SS instead of only seconds.

Version 2.4
 Daniel Stenberg
 - Added progress meter. It appears when downloading more than BUFFER SIZE
   bytes and mute is not selected. I found out that when downloading large
   files from really really slow sites, it is desirable to know the status
   of the download. Do note that some downloads are done without knowing
   the size in advance, which makes the progress meter less thrilling ;) If
   the output is sent to a tty, the progress meter is shut off.
 - Increased buffer size used for reading.
 - Added length checks in the user+passwd parsing.
 - Made it grok user+passwd for HTTP fetches. The trick is to base64
   encode the user+passwd and send an extra header line. Read chapter 11.1 in
   RFC2068 for details. I added it to be used just like the ftp one. To get
   an HTTP document from a place that requires user and password, use a URL
   like:

        http://user:passwd@www.site.to.leach/doc.html

   I also added the -u flag, since WHEN USING A PROXY YOU CAN'T SPECIFY THE
   USER AND PASSWORD WITH HTTP LIKE THAT. The -u flag works for ftp too, but
   not if used with proxy. To do the same as the above one, you can invoke:

        urlget -u user:passwd http://www.site.to.leach/doc.html
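
   The extra header line is an Authorization header carrying the
   base64-encoded "user:passwd" string. A minimal sketch of the trick
   (base64_encode() below is written for illustration; it is not the
   actual urlget code):

        #include <stdio.h>
        #include <string.h>

        static const char b64[] =
          "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

        /* encode len bytes from in; out must hold 4*((len+2)/3)+1 bytes */
        static void base64_encode(const char *in, size_t len, char *out)
        {
          size_t i;
          for(i = 0; i < len; i += 3) {
            unsigned long n = (unsigned long)(unsigned char)in[i] << 16;
            if(i + 1 < len) n |= (unsigned char)in[i + 1] << 8;
            if(i + 2 < len) n |= (unsigned char)in[i + 2];
            *out++ = b64[(n >> 18) & 63];
            *out++ = b64[(n >> 12) & 63];
            *out++ = (i + 1 < len) ? b64[(n >> 6) & 63] : '=';
            *out++ = (i + 2 < len) ? b64[n & 63] : '=';
          }
          *out = '\0';
        }

        int main(void)
        {
          const char *userpwd = "user:passwd";
          char encoded[32];
          base64_encode(userpwd, strlen(userpwd), encoded);
          /* this extra line goes into the HTTP request */
          printf("Authorization: Basic %s\r\n", encoded);
          return 0;
        }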

Version 2.3
 Rafael Sagula
 - Added "-o" option (output file)
 - Added URG_HTTP_NOT_FOUND return code.
   (Daniel's note:)
   Perhaps we should detect all kinds of errors and, instead of writing that
   custom string for the particular 404 error, use the error text we
   actually get from the server. See further details in RFC2068 (the HTTP
   1.1 definition). The current way also relies on an HTTP/1.0 reply, which
   newer servers might not send.
 - Looping mode ("-l" option). It makes it easier to get various split
   files.
   (Daniel's note:)
   Use it like 'urlget -l 1 http://from.this.site/file%d.html', which will
   make urlget attempt to fetch all files named file1.html, file2.html etc
   until no more files are found. This is only a modification of the
   STAND_ALONE part; nothing in the urlget() function was modified for this.
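
   A sketch of that loop (fetch_url() below is a made-up stand-in for
   urlget(); a non-zero return means the fetch failed):

        #include <stdio.h>

        /* pretend stand-in for urlget(): "fails" on the fourth file */
        static int fetch_url(const char *url)
        {
          static int calls;
          printf("fetching %s\n", url);
          return ++calls > 3;
        }

        int main(void)
        {
          const char *pattern = "http://from.this.site/file%d.html";
          char url[1024];
          int i;
          for(i = 1; ; i++) {          /* 1 is the start number for -l */
            sprintf(url, pattern, i);  /* expand the %d */
            if(fetch_url(url))
              break;                   /* no more files, stop looping */
          }
          return 0;
        }
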
 Daniel Stenberg
 - Changed the -h to be -i instead. -h should be reserved for help use.
 - Bjorn Reese indicated that Borland _might_ use '_WIN32' instead of the
   VC++ WIN32 define and therefore I added a little fix for that.
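
   The fix boils down to accepting either spelling of the define, along
   these lines (a sketch, not necessarily the exact code):

        #if defined(_WIN32) && !defined(WIN32)
        #define WIN32  /* Borland defines _WIN32 but not WIN32 */
        #endif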

Version 2.2
 Johan Andersson
 - The urlget function didn't set the path part of the URL when using a
   proxy.
 - Fixed a bug with the IMC proxy. Now using an (almost) complete GET
   command.
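
   The complete GET command matters because a proxy must be handed the
   full URL in the request line, while a direct connection only gets the
   path. A sketch (build_request() is made up for illustration, not the
   actual urlget code):

        #include <stdio.h>

        static void build_request(char *buf, size_t size, const char *url,
                                  const char *path, int use_proxy)
        {
          if(use_proxy)
            /* the proxy digs out host and port from the URL itself */
            snprintf(buf, size, "GET %s HTTP/1.0\r\n\r\n", url);
          else
            snprintf(buf, size, "GET %s HTTP/1.0\r\n\r\n", path);
        }

        int main(void)
        {
          char req[512];
          build_request(req, sizeof(req),
                        "http://www.site.to.leach/doc.html", "/doc.html", 1);
          fputs(req, stdout);
          return 0;
        }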
  
 Daniel Stenberg
 - Made it compile on Solaris. Had to reorganize the includes a bit.
   (so Win32, Linux, SunOS 4 and Solaris 2 compile fine.)
 - Made Johan's keepalive keyword optional with the -k flag (since it
   makes a lot of urlgets take a lot longer).
 - Made a '-h' switch in case you want the HTTP-header in the output.

Version 2.1
 Daniel Stenberg and Kjell Ericson
 - Win32-compilable
 - No more global variables
 - Mute option (no output at all to stderr)
 - Full range of return codes from urlget(), which is now written as a
   function that is easy to use in [other] programs.
 - Define STAND_ALONE to compile the stand alone urlget program
 - Now compiles with gcc options -ansi -Wall -pedantic ;)

Version 2.0
 - Introducing ftp GET support. The FTP URL type is recognized and used.
 - Renamed the project to 'urlget'.
 - Supports the user+passwd in the FTP URL (otherwise it tries anonymous
   login with a weird email address as password).

Version 1.5
 Daniel Stenberg
 - The skip_header() crap messed it up big-time. By simply removing that
   one we can all of a sudden download anything ;)
 - No longer requires a trailing slash on the URLs.
 - If the given URL isn't prefixed with 'http://', HTTP is assumed and
   given a try! (See the sketch after this list.)
 - 'void main()' is history.
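
   The "assume HTTP" rule amounts to prepending the prefix when none is
   given. A sketch (default_to_http() is a made-up name, not the actual
   1.5 code):

        #include <stdio.h>
        #include <string.h>

        static void default_to_http(const char *url, char *out, size_t size)
        {
          if(strncmp(url, "http://", 7) != 0)
            snprintf(out, size, "http://%s", url); /* no prefix: try HTTP */
          else
            snprintf(out, size, "%s", url);
        }

        int main(void)
        {
          char fixed[512];
          default_to_http("www.site.to.leach/doc.html",
                          fixed, sizeof(fixed));
          puts(fixed);  /* prints http://www.site.to.leach/doc.html */
          return 0;
        }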

Version 1.4
 Daniel Stenberg
 - The gopher source used the ppath variable instead of path, which could
   lead to disaster.

Version 1.3
 Daniel Stenberg
 - Well, I added a lame text about the time it took to get the data. I also
   fought against Johan to prevent his -f option (to specify a file name
   that should be written instead of stdout)! =)
 - Made it write 'connection refused' for that particular connect()
   problem.
 - Renumbered the version. Let's not make silly 1.0.X versions, this is
   a plain 1.3 instead.

Version 1.2
 Johan Andersson
 - Discovered and fixed the problem with getting binary files. puts() is
   now replaced with fwrite(). (Daniel's note: this also fixed the buffer
   overwrite problem I found in the previous version.)
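
   (Why puts() broke binary files: it treats the buffer as a C string, so
   it stops at the first zero byte and tacks a newline on the end, while
   fwrite() writes exactly the number of bytes that were read. A tiny
   demonstration:)

        #include <stdio.h>

        int main(void)
        {
          /* pretend this came off the socket; note the embedded zero */
          char buf[] = { 'G', 'I', 'F', '8', '9', 'a', 0, 1, 0, 1 };
          size_t nread = sizeof(buf);

          /* puts(buf) would output "GIF89a" plus a newline and nothing
             more; fwrite() passes all 10 bytes through untouched */
          fwrite(buf, 1, nread, stdout);
          return 0;
        }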

 Rafael Sagula <sagula@inf.ufrgs.br>
 - Let "-p" before "-x".

 Daniel Stenberg <Daniel.Stenberg@sth.frontec.se>
 - Bugfixed the proxy usage. It should *NOT* use nor strip the port number
   from the URL but simply pass that information to the proxy. This also
   made it possible to use the user/password fields in proxy [ftp-] URLs
   (like in ftp://user:password@ftp.my.site:8021/README).

 Johan Andersson <johan@homemail.com>
 - Implemented HTTP proxy support.
 - Receive byte counter added.

 Bjorn Reese <breese@imada.ou.dk>
 - Implemented URLs (and skipped the old syntax).
 - Output is written to stdout, so to save a fetched file, do:
   httpget http://143.54.10.6/info_logo.gif > test.gif

Version 1.1
 Daniel Stenberg <Daniel.Stenberg@sth.frontec.se>
 - Adjusted it slightly to accept named hosts on the command line. We
   wouldn't wanna use IP numbers for the rest of our lives, would we?

Version 1.0
 Rafael Sagula <sagula@inf.ufrgs.br>
 - Wrote the initial httpget, which started all this!