path: root/remote-curl.c
2011-10-18  Merge branch 'jk/http-auth'  (Junio C Hamano)

* jk/http-auth:
  http_init: accept separate URL parameter
  http: use hostname in credential description
  http: retry authentication failures for all http requests
  remote-curl: don't retry auth failures with dumb protocol
  improve httpd auth tests
  url: decode buffers that are not NUL-terminated
2011-10-16  http_init: accept separate URL parameter  (Jeff King)

The http_init function takes a "struct remote". Part of its initialization procedure is to look at the remote's url and grab some auth-related parameters. However, using the url included in the remote is:

  - wrong; the remote-curl helper may have a separate, unrelated URL (e.g., from remote.*.pushurl). Looking at the remote's configured url is incorrect.

  - incomplete; http-fetch doesn't have a remote, so passes NULL. So http_init never gets to see the URL we are actually going to use.

  - cumbersome; http-push has a similar problem to http-fetch, but actually builds a fake remote just to pass in the URL.

Instead, let's just add a separate URL parameter to http_init, and all three callsites can pass in the appropriate information.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
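[Editor's sketch, not from the commit itself: the reworked interface might be used roughly like this; the real prototype lives in http.h and may differ in detail.]

    /* http.h (sketch): remote may be NULL, url is what we will really talk to */
    void http_init(struct remote *remote, const char *url);

    /* remote-curl.c: pass the URL the helper was actually invoked with */
    http_init(remote, url);

    /* http-fetch.c: no remote at all, but the URL is still known */
    http_init(NULL, url);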
2011-10-12  Merge branch 'sp/smart-http-failure'  (Junio C Hamano)

* sp/smart-http-failure:
  remote-curl: Fix warning after HTTP failure
2011-10-05  remote-curl: Fix warning after HTTP failure  (Shawn O. Pearce)

If the HTTP connection is broken in the middle of a fetch or clone body, the client presented a useless error message due to part of the upload-pack->remote-curl pkt-line protocol leaking out of the helper as the helper's "fetch result":

  error: RPC failed; result=18, HTTP code = 200
  fatal: The remote end hung up unexpectedly
  fatal: early EOF
  fatal: unpack-objects failed
  warning: https unexpectedly said: '0000'

Instead, when the HTTP RPC fails, discard all remaining data from upload-pack and report nothing to the transport helper. Errors were already sent to stderr.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-09-06  Sync with 1.7.6.2  (Junio C Hamano)
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-09-06  Revert "Merge branch 'cb/maint-quiet-push' into maint"  (Junio C Hamano)

This reverts commit ffa69e61d3c5730bd4b65a465efc130b0ef3c7df, reversing changes made to 4a13c4d14841343d7caad6ed41a152fee550261d.

Adding a new command line option to receive-pack and feeding it from send-pack is not an acceptable way to add features, as there is no guarantee that your updated send-pack will be talking to an updated receive-pack. New features need to be added via the capability mechanism negotiated over the protocol.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-08-23  Merge branch 'cb/maint-quiet-push' into maint  (Junio C Hamano)

* cb/maint-quiet-push:
  receive-pack: do not overstep command line argument array
  propagate --quiet to send-pack/receive-pack

Conflicts:
  Documentation/git-receive-pack.txt
  Documentation/git-send-pack.txt
2011-08-18  Merge branch 'cb/maint-quiet-push'  (Junio C Hamano)

* cb/maint-quiet-push:
  receive-pack: do not overstep command line argument array
  propagate --quiet to send-pack/receive-pack

Conflicts:
  Documentation/git-receive-pack.txt
  Documentation/git-send-pack.txt
2011-08-16  Merge branch 'jc/zlib-wrap' into maint  (Junio C Hamano)

* jc/zlib-wrap:
  zlib: allow feeding more than 4GB in one go
  zlib: zlib can only process 4GB at a time
  zlib: wrap deflateBound() too
  zlib: wrap deflate side of the API
  zlib: wrap inflateInit2 used to accept only for gzip format
  zlib: wrap remaining calls to direct inflate/inflateEnd
  zlib wrapper: refactor error message formatter
2011-08-01  Merge branch 'sr/transport-helper-fix'  (Junio C Hamano)

* sr/transport-helper-fix: (21 commits)
  transport-helper: die early on encountering deleted refs
  transport-helper: implement marks location as capability
  transport-helper: Use capname for refspec capability too
  transport-helper: change import semantics
  transport-helper: update ref status after push with export
  transport-helper: use the new done feature where possible
  transport-helper: check status code of finish_command
  transport-helper: factor out push_update_refs_status
  fast-export: support done feature
  fast-import: introduce 'done' command
  git-remote-testgit: fix error handling
  git-remote-testgit: only push for non-local repositories
  remote-curl: accept empty line as terminator
  remote-helpers: export GIT_DIR variable to helpers
  git_remote_helpers: push all refs during a non-local export
  transport-helper: don't feed bogus refs to export push
  git-remote-testgit: import non-HEAD refs
  t5800: document some non-functional parts of remote helpers
  t5800: use skip_all instead of prereq
  t5800: factor out some ref tests
  ...
2011-08-01  propagate --quiet to send-pack/receive-pack  (Clemens Buchacher)

Currently, git push --quiet produces some non-error output, e.g.:

  $ git push --quiet
  Unpacking objects: 100% (3/3), done.

Add the --quiet option to send-pack/receive-pack and pass it to unpack-objects in the receive-pack codepath and to receive-pack in the push codepath.

This fixes a bug reported for the fedora git package:

  https://bugzilla.redhat.com/show_bug.cgi?id=725593

Reported-by: Jesse Keating <jkeating@redhat.com>
Cc: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-07-22  Merge branch 'maint'  (Junio C Hamano)

* maint:
  doc/fast-import: clarify notemodify command
  Documentation: minor grammatical fix in rev-list-options.txt
  Documentation: git-filter-branch honors replacement refs
  remote-curl: Add a format check to parsing of info/refs
  git-config: Remove extra whitespaces
2011-07-20  remote-curl: Add a format check to parsing of info/refs  (Julian Phillips)

When parsing info/refs, no checks were applied that the file was in the required format. Since the file is read from a remote webserver, this isn't guaranteed to be true.

Add a check that the file at least only contains lines that consist of 40 characters followed by a tab and then the ref name.

Signed-off-by: Julian Phillips <julian@quantumfyre.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
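[Editor's sketch: a stand-alone version of such a check; the helper name is hypothetical and the actual code in remote-curl.c may differ.]

    #include <ctype.h>
    #include <stddef.h>

    /* Hypothetical: accept only "<40 hex chars>\t<refname>" lines. */
    static int looks_like_info_refs_line(const char *line, size_t len)
    {
        size_t i;

        if (len < 42 || line[40] != '\t')
            return 0;
        for (i = 0; i < 40; i++)
            if (!isxdigit((unsigned char)line[i]))
                return 0;
        return 1;   /* line[41..] is taken to be the ref name */
    }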
2011-07-20  remote-curl: don't retry auth failures with dumb protocol  (Jeff King)

When fetching an http URL, we first try fetching info/refs with an extra "service" parameter. This will work for a smart-http server, or a dumb server which ignores extra parameters when fetching files. If that fails, we retry without the extra parameter to remain compatible with dumb servers which didn't like our first request.

If the server returned a "401 Unauthorized", indicating that the credentials we provided were not good, there is not much point in retrying. With the current code, we just waste an extra round trip to the HTTP server before failing.

But as the http code becomes smarter about throwing away rejected credentials and re-prompting the user for new ones (which it will later in this series), this will become more confusing. At some point we will stop asking for credentials to retry smart http, and will be asking for credentials to retry dumb http. So now we're not only wasting an extra HTTP round trip for something that is unlikely to work, but we're making the user re-type their password for it.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
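[Editor's sketch of the resulting control flow, with simplified signatures and an illustrative refs_url_with_service variable; HTTP_NOAUTH comes from http.h, but this is not the exact remote-curl code.]

    ret = http_get_strbuf(refs_url_with_service, &buffer, HTTP_NO_CACHE);
    if (ret == HTTP_NOAUTH)
        die("Authentication failed");                       /* retrying dumb will not help */
    if (ret != HTTP_OK)
        ret = http_get_strbuf(refs_url, &buffer, HTTP_NO_CACHE);  /* dumb fallback */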
2011-07-19  remote-curl: accept empty line as terminator  (Sverre Rabbelier)

This went unnoticed because the transport helper infrastructure did not check the return value of the helper, nor did the helper print anything before exiting.

While at it, also make sure that the stream doesn't end unexpectedly.

Signed-off-by: Sverre Rabbelier <srabbelier@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-07-19  Merge branch 'jc/zlib-wrap'  (Junio C Hamano)

* jc/zlib-wrap:
  zlib: allow feeding more than 4GB in one go
  zlib: zlib can only process 4GB at a time
  zlib: wrap deflateBound() too
  zlib: wrap deflate side of the API
  zlib: wrap inflateInit2 used to accept only for gzip format
  zlib: wrap remaining calls to direct inflate/inflateEnd
  zlib wrapper: refactor error message formatter

Conflicts:
  sha1_file.c
2011-06-20  plug a few coverity-spotted leaks  (Jim Meyering)
Signed-off-by: Jim Meyering <meyering@redhat.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-06-10  zlib: zlib can only process 4GB at a time  (Junio C Hamano)

The size of objects we read from the repository and data we try to put into the repository are represented in "unsigned long", so that on larger architectures we can handle objects that weigh more than 4GB.

But the interface defined in zlib.h to communicate with inflate/deflate limits avail_in (how many bytes of input are we calling zlib with) and avail_out (how many bytes of output from zlib are we ready to accept) fields effectively to 4GB by defining their type to be uInt.

In many places in our code, we allocate a large buffer (e.g. mmap'ing a large loose object file) and tell zlib its size by assigning the size to the avail_in field of the stream, but that will truncate the high octets of the real size. The worst part of this story is that we often pass around z_stream (the state object used by zlib) to keep track of the number of used bytes in input/output buffer by inspecting these two fields, which practically limits our callchain to the same 4GB limit.

Wrap z_stream in another structure git_zstream that can express avail_in and avail_out in unsigned long. For now, just die() when the caller gives a size that cannot be given to a single zlib call. In later patches in the series, we would make git_inflate() and git_deflate() internally loop to give callers an illusion that our "improved" version of zlib interface can operate on a buffer larger than 4GB in one go.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
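[Editor's sketch: an approximation of the wrapper's shape; see git's zlib code for the authoritative definition.]

    #include <zlib.h>

    /* Sizes stay "unsigned long" here and are fed to the embedded
     * z_stream in chunks that fit in its uInt fields. */
    typedef struct git_zstream {
        z_stream z;
        unsigned long avail_in;
        unsigned long avail_out;
        unsigned long total_in;
        unsigned long total_out;
        unsigned char *next_in;
        unsigned char *next_out;
    } git_zstream;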
2011-06-10  zlib: wrap deflateBound() too  (Junio C Hamano)
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-06-10  zlib: wrap deflate side of the API  (Junio C Hamano)

Wrap deflateInit, deflate, and deflateEnd for everybody, and wrap the sole use of deflateInit2 in remote-curl.c, which tells the library to use gzip header and trailer, in git_deflate_init_gzip().

There is only one caller that cares about the status from deflateEnd(). Introduce git_deflate_end_gently() to let that sole caller retrieve the status and act on it (i.e. die) for now, but we would probably want to make inflate_end/deflate_end die when they run out of memory and get rid of the _gently() kind.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
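[Editor's sketch of typical use of the wrapped deflate side; argument lists simplified, buffer names illustrative.]

    git_zstream stream;

    memset(&stream, 0, sizeof(stream));
    git_deflate_init_gzip(&stream, zlib_compression_level);  /* gzip header/trailer */
    stream.next_in = (unsigned char *)src;
    stream.avail_in = src_len;
    stream.next_out = out;
    stream.avail_out = out_len;
    while (git_deflate(&stream, Z_FINISH) == Z_OK)
        ; /* keep deflating */
    git_deflate_end(&stream);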
2011-05-04  http: make curl callbacks match contracts from curl header  (Dan McGee)
Yes, these don't match perfectly with the void* first parameter of the fread/fwrite in the standard library, but they do match the curl expected method signature. This is needed when a refactor passes a curl_write_callback around, which would otherwise give incorrect parameter warnings. Signed-off-by: Dan McGee <dpmcgee@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
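[Editor's note: curl/curl.h declares curl_write_callback with a char * buffer rather than void *, so a matching callback looks roughly like this; the function name here is illustrative.]

    #include <curl/curl.h>

    /* Matches curl_write_callback: size_t (*)(char *, size_t, size_t, void *) */
    static size_t write_to_buffer(char *ptr, size_t eltsize, size_t nmemb, void *data)
    {
        size_t size = eltsize * nmemb;
        /* ... append ptr[0..size) to the buffer pointed to by data ... */
        return size;
    }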
2011-03-14  Merge branch 'sp/maint-smart-http-sans-100-continue'  (Junio C Hamano)

* sp/maint-smart-http-sans-100-continue:
  smart-http: Really never use Expect: 100-continue
2011-03-14  smart-http: Really never use Expect: 100-continue  (Shawn O. Pearce)

libcurl may choose to try and use Expect: 100-continue for any type of POST, not just a "Transfer-Encoding: chunked" one. Force it to disable this feature, as not all proxy servers support 100-continue and leaving it enabled can cause 1 second stalls during the negotiation phase of fetch-pack/upload-pack.

In 206b099d26 ("smart-http: Don't use Expect: 100-Continue") we tried to disable this only for large POST bodies, but it should be disabled for every POST body.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
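[Editor's sketch: with libcurl, the usual way to suppress the header is to override it with an empty value on every POST; the handle and list variable names are illustrative.]

    struct curl_slist *headers = NULL;

    headers = curl_slist_append(headers, "Expect:");   /* empty value disables 100-continue */
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);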
2011-02-28  Merge branch 'sp/maint-smart-http-sans-100-continue'  (Junio C Hamano)

* sp/maint-smart-http-sans-100-continue:
  smart-http: Don't use Expect: 100-Continue
2011-02-15  smart-http: Don't use Expect: 100-Continue  (Shawn O. Pearce)

Some HTTP/1.1 servers or proxies don't correctly implement the 100-Continue feature of HTTP/1.1. It's a difficult feature to implement right, and isn't commonly used by browsers, so many developers may not even be aware that their server (or proxy) doesn't honor it.

Within the smart HTTP protocol for Git we only use this newer "Expect: 100-Continue" feature to probe for missing authentication before uploading a large payload like a pack file during push. If authentication is necessary, we expect the server to send the 401 Unauthorized response before the bulk data transfer starts, thus saving the client bandwidth during the retry.

A different method to probe for working authentication is to send an empty command list (that is, just "0000") to $URL/git-receive-pack or $URL/git-upload-pack. All versions of both receive-pack and upload-pack since the introduction of smart HTTP in Git 1.6.6 cleanly accept just a flush-pkt under --stateless-rpc mode, and exit with success.

If HTTP-level authentication is successful, the backend will return an empty response, but with HTTP status code 200. This enables the client to continue with the transfer.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
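[Editor's sketch: such a probe boils down to POSTing a single flush-pkt and checking the status code; the URL is hypothetical and error handling is omitted.]

    long code = 0;

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/repo.git/git-receive-pack");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "0000");      /* just a flush-pkt */
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, 4L);
    if (curl_easy_perform(curl) == CURLE_OK) {
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &code);
        /* 200: authentication (if any) succeeded; 401: prompt and retry */
    }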
2010-08-13  Merge branch 'sp/fix-smart-http-deadlock-on-error'  (Junio C Hamano)

* sp/fix-smart-http-deadlock-on-error:
  smart-http: Don't deadlock on server failure
2010-08-06  smart-http: Don't deadlock on server failure  (Shawn O. Pearce)

If the remote HTTP server fails (e.g. returns 404 or 500) when we posted the RPC to it, we won't have sent anything to the background Git process that is supposed to handle the stream. Because we didn't send anything, it is waiting for input from remote-curl, and remote-curl cannot read its response payload because doing so would lead to a deadlock.

Send the background task EOF on its input before we try to read its response back; that way it will break out of its read loop and terminate.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
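[Editor's sketch of the idea only; the field names are illustrative, not the exact ones in remote-curl.c.]

    if (post_failed) {
        close(client.in);        /* deliver EOF to the background process ... */
        client.in = -1;
        /* ... so that reading client.out below can terminate instead of deadlocking */
    }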
2010-05-09  Merge branch 'rc/maint-curl-helper'  (Junio C Hamano)

* rc/maint-curl-helper:
  remote-curl: ensure that URLs have a trailing slash
  http: make end_url_with_slash() public
  t5541-http-push: add test for URLs with trailing slash

Conflicts:
  remote-curl.c
2010-04-10  remote-curl: ensure that URLs have a trailing slash  (Tay Ray Chuan)

Previously, we blindly assumed that URLs passed to the remote-curl helper did not end with a trailing slash.

Use the convenience function end_url_with_slash() from http.[ch] to ensure that URLs have a trailing slash on invocation of the remote-curl helper, and use the URL as one with a trailing slash throughout.

It is possible for users to pass a URL with a trailing slash to remote-curl, by, say, setting it in remote.<name>.url in their git config. The resulting requests have an empty path component (//) and may break implementations of the http git protocol.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
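[Editor's sketch: the helper is essentially a one-liner built on strbuf; this is the idea, not necessarily the verbatim implementation.]

    void end_url_with_slash(struct strbuf *buf, const char *url)
    {
        strbuf_addstr(buf, url);
        if (buf->len && buf->buf[buf->len - 1] != '/')
            strbuf_addch(buf, '/');
    }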
2010-04-02  Prompt for a username when an HTTP request 401s  (Scott Chacon)

When an HTTP request returns a 401, Git will currently fail with a confusing message saying that it got a 401, which is not very descriptive.

Currently if a user wants to use Git over HTTP, they have to use one URL with the username in the URL (e.g. "http://user@host.com/repo.git") for write access and another without the username for unauthenticated read access (unless they want to be prompted for the password each time). However, since the HTTP servers will return a 401 if an action requires authentication, we can prompt for username and password if we see this, allowing us to use a single URL for both purposes.

This patch changes http_request to prompt for the username and password, then return HTTP_REAUTH so http_get_strbuf can try again. If it gets a 401 even when a user/pass is supplied, http_request will now return HTTP_NOAUTH, which remote-curl can then use to display a more intelligent error message that is less confusing.

Signed-off-by: Scott Chacon <schacon@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
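[Editor's sketch of the retry in http_get_strbuf; signatures simplified, not the verbatim code.]

    int ret = http_request(url, result, options);
    if (ret == HTTP_REAUTH)
        /* first 401 with no credentials; the user has now been prompted */
        ret = http_request(url, result, options);
    return ret;   /* HTTP_NOAUTH here means the supplied credentials were rejected */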
2010-03-15  Merge branch 'tc/http-cleanup'  (Junio C Hamano)

* tc/http-cleanup:
  remote-curl: init walker only when needed
  remote-curl: use http_fetch_ref() instead of walker wrapper
  http: init and cleanup separately from http-walker
  http-walker: cleanup more thoroughly
  http-push: remove "|| 1" to enable verbose check
  t554[01]-http-push: refactor, add non-ff tests
  t5541-http-push: check that ref is unchanged for non-ff test
2010-03-03  Merge branch 'sp/maint-push-sideband' into maint  (Junio C Hamano)

* sp/maint-push-sideband:
  receive-pack: Send internal errors over side-band #2
  t5401: Use a bare repository for the remote peer
  receive-pack: Send hook output over side band #2
  receive-pack: Wrap status reports inside side-band-64k
  receive-pack: Refactor how capabilities are shown to the client
  send-pack: demultiplex a sideband stream with status data
  run-command: support custom fd-set in async
  run-command: Allow stderr to be a caller supplied pipe

Conflicts:
  builtin-receive-pack.c
  run-command.c
  t/t5401-update-hooks.sh
2010-03-02  remote-curl: init walker only when needed  (Tay Ray Chuan)
Invoke get_http_walker() only when fetching with the dumb protocol. Additionally, add an invocation to walker_free() after we're done using the walker. Signed-off-by: Tay Ray Chuan <rctay89@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-03-02  remote-curl: use http_fetch_ref() instead of walker wrapper  (Tay Ray Chuan)

The http-walker implementation of walker->fetch_ref() doesn't do anything special compared to http_fetch_ref() anyway.

Remove the init_walker() invocation before fetching the ref, since we aren't using the walker wrapper and don't need a walker instance anymore.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-03-02  http: init and cleanup separately from http-walker  (Tay Ray Chuan)

Previously, all our http operations were done with http-walker. With the new remote-curl helper, we find ourselves using http methods outside of http-walker - for example, fetching info/refs.

Accommodate this by separating http_init() and http_cleanup() invocations from http-walker.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-02-06  Merge branch 'sp/maint-push-sideband' into sp/push-sideband  (Junio C Hamano)

* sp/maint-push-sideband:
  receive-pack: Send hook output over side band #2
  receive-pack: Wrap status reports inside side-band-64k
  receive-pack: Refactor how capabilities are shown to the client
  send-pack: demultiplex a sideband stream with status data
  run-command: support custom fd-set in async
  run-command: Allow stderr to be a caller supplied pipe
  Update git fsck --full short description to mention packs

Conflicts:
  run-command.c
2010-02-06  run-command: support custom fd-set in async  (Erik Faye-Lund)

This patch adds the possibility to supply a set of non-0 file descriptors for async process communication instead of the default-created pipe.

Additionally, we now support bi-directional communication with the async procedure, by giving the async function both read and write file descriptors.

To retain compatibility and a similar "API feel" with start_command, we require start_async callers to set .out = -1 to get a readable file descriptor. If either of .in or .out is 0, we supply no file descriptor to the async process.

[sp: Note: Erik started this patch, and a huge bulk of it is his work. All bugs were introduced later by Shawn.]

Signed-off-by: Erik Faye-Lund <kusmabite@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
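[Editor's usage sketch: a caller that wants a readable end; the callback and data names are illustrative.]

    struct async proc;

    memset(&proc, 0, sizeof(proc));
    proc.proc = demux;        /* int (*)(int in, int out, void *data) */
    proc.data = &demux_data;
    proc.out = -1;            /* ask start_async to create the output pipe */

    if (start_async(&proc))
        die("cannot start async process");
    /* ... read from proc.out ..., then finish_async(&proc) */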
2010-01-22  Merge branch 'maint'  (Junio C Hamano)

* maint:
  merge-recursive: do not return NULL only to cause segfault
  retry request without query when info/refs?query fails
2010-01-21  retry request without query when info/refs?query fails  (Tay Ray Chuan)

When "info/refs" is a static file and not behind a CGI handler, some servers may not handle a GET request for it with a query string appended (eg. "?foo=bar") properly. If such a request fails, retry it sans the query string.

In addition, ensure that the "smart" http protocol is not used (a service has to be specified with "?service=<service name>" to be conformant).

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Reported-and-tested-by: Yaroslav Halchenko <debian@onerussian.com>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-20  Merge branch 'jc/symbol-static'  (Junio C Hamano)

* jc/symbol-static:
  date.c: mark file-local function static
  Replace parse_blob() with an explanatory comment
  symlinks.c: remove unused functions
  object.c: remove unused functions
  strbuf.c: remove unused function
  sha1_file.c: remove unused function
  mailmap.c: remove unused function
  utf8.c: mark file-local function static
  submodule.c: mark file-local function static
  quote.c: mark file-local function static
  remote-curl.c: mark file-local function static
  read-cache.c: mark file-local functions static
  parse-options.c: mark file-local function static
  entry.c: mark file-local function static
  http.c: mark file-local functions static
  pretty.c: mark file-local function static
  builtin-rev-list.c: mark file-local function static
  bisect.c: mark file-local function static
2010-01-12  Merge branch 'maint'  (Junio C Hamano)

* maint:
  remote-curl: Fix Accept header for smart HTTP connections
  grep: -L should show empty files
  rebase--interactive: Ignore comments and blank lines in peek_next_command
2010-01-12  remote-curl: Fix Accept header for smart HTTP connections  (Shawn O. Pearce)

We actually expect to see an application/x-git-upload-pack-result but we lied and said we Accept *-response. This was a typo on my part when I was writing the code.

Fortunately the wrong Accept header had no real impact, as the deployed git-http-backend servers were not testing the Accept header before they returned their content.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-12  remote-curl.c: mark file-local function static  (Junio C Hamano)
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-12-01  Allow curl to rewind the RPC read buffer  (Martin Storsjö)

When using multi-pass authentication methods, the curl library may need to rewind the read buffers used for providing data to HTTP POST, if data has been output before a 401 error is received.

This is needed only when the first request (when the multi-pass authentication method isn't initialized and hasn't received its challenge yet) for a certain curl session is a chunked HTTP POST. As long as the current rpc read buffer is the first one, we're able to rewind without need for additional buffering.

The curl library currently starts sending data without waiting for a response to the Expect: 100-continue header, due to a bug in curl that exists up to curl version 7.19.7. If the HTTP server doesn't handle Expect: 100-continue headers properly (e.g. Lighttpd), the library has to start sending data without knowing if the request will be successfully authenticated. In this case, this rewinding solution is not sufficient - the whole request will be sent before the 401 error is received.

Signed-off-by: Martin Storsjo <martin@martin.st>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
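[Editor's sketch: libcurl asks for a rewind through its ioctl callback, so the wiring looks roughly like this; the rpc_state fields shown here are illustrative.]

    static curlioerr rpc_ioctl(CURL *handle, int cmd, void *clientp)
    {
        struct rpc_state *rpc = clientp;

        switch (cmd) {
        case CURLIOCMD_NOP:
            return CURLIOE_OK;
        case CURLIOCMD_RESTARTREAD:
            if (rpc->initial_buffer) {
                rpc->pos = 0;        /* still in the first buffer: rewind in place */
                return CURLIOE_OK;
            }
            /* data beyond the first buffer is gone; cannot rewind */
            return CURLIOE_FAILRESTART;
        default:
            return CURLIOE_UNKNOWNCMD;
        }
    }

    /* registered on the handle with: */
    curl_easy_setopt(curl, CURLOPT_IOCTLFUNCTION, rpc_ioctl);
    curl_easy_setopt(curl, CURLOPT_IOCTLDATA, rpc);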
2009-11-24  remote-curl.c: fix rpc_out()  (Tay Ray Chuan)

Remove the extraneous semicolon (';') at the end of the if statement that allowed the code in its block to execute regardless of the condition.

This fixes pushing to a smart http backend with chunked encoding.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
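[Editor's illustration of the bug class, with made-up names; not the actual rpc_out() code.]

    if (nothing_left_to_send);        /* <- stray ';' terminates the if right here */
    {
        finish_request();             /* so this block ran unconditionally */
    }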
2009-11-23  Disable CURLOPT_NOBODY before enabling CURLOPT_PUT and CURLOPT_POST  (Martin Storsjö)

This works around a bug in curl versions up to 7.19.4, where disabling the CURLOPT_NOBODY option sets the internal state incorrectly considering that CURLOPT_PUT was enabled earlier. The bug is discussed at http://curl.haxx.se/bug/view.cgi?id=2727981 and is corrected in the latest version of curl in CVS.

This bug usually has no impact on git, but may surface if using multi-pass authentication methods.

Signed-off-by: Martin Storsjo <martin@martin.st>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
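[Editor's sketch: the workaround is purely about the order of curl_easy_setopt() calls when reusing a handle.]

    /* With curl <= 7.19.4, clear NOBODY first, then switch the handle to PUT/POST: */
    curl_easy_setopt(curl, CURLOPT_NOBODY, 0L);
    curl_easy_setopt(curl, CURLOPT_PUT, 1L);      /* or CURLOPT_POST, as appropriate */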
2009-11-21  Merge branch 'sp/smart-http'  (Junio C Hamano)

* sp/smart-http: (37 commits)
  http-backend: Let gcc check the format of more printf-type functions.
  http-backend: Fix access beyond end of string.
  http-backend: Fix bad treatment of uintmax_t in Content-Length
  t5551-http-fetch: Work around broken Accept header in libcurl
  t5551-http-fetch: Work around some libcurl versions
  http-backend: Protect GIT_PROJECT_ROOT from /../ requests
  Git-aware CGI to provide dumb HTTP transport
  http-backend: Test configuration options
  http-backend: Use http.getanyfile to disable dumb HTTP serving
  test smart http fetch and push
  http tests: use /dumb/ URL prefix
  set httpd port before sourcing lib-httpd
  t5540-http-push: remove redundant fetches
  Smart HTTP fetch: gzip requests
  Smart fetch over HTTP: client side
  Smart push over HTTP: client side
  Discover refs via smart HTTP server when available
  http-backend: more explict LocationMatch
  http-backend: add example for gitweb on same URL
  http-backend: use mod_alias instead of mod_rewrite
  ...

Conflicts:
  .gitignore
  remote-curl.c
2009-11-05  Smart HTTP fetch: gzip requests  (Shawn O. Pearce)

The upload-pack requests are mostly plain text and they compress rather well. Deflating them with Content-Encoding: gzip can easily drop the size of the request by 50%, reducing the amount of data to transfer as we negotiate the common commits.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
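[Editor's sketch: mechanically this means gzip-deflating the request body before handing it to libcurl and labelling it; buffer names are illustrative.]

    headers = curl_slist_append(headers, "Content-Encoding: gzip");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, gzipped_buf);        /* gzip-deflated body */
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)gzipped_len);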
2009-11-05  Smart fetch over HTTP: client side  (Shawn O. Pearce)

The git-remote-curl backend detects if the remote server supports the git-upload-pack service, and if so, runs git-fetch-pack locally in a pipe to generate the want/have commands.

The advertisements from the server that were obtained during the discovery are passed into git-fetch-pack before the POST request starts, permitting server capability discovery and enablement.

Common objects that are discovered are appended onto the request as have lines and are sent again on the next request. This allows the remote side to reinitialize its in-memory list of common objects during the next request.

Because all requests are relatively short, below git-remote-curl's 1 MiB buffer limit, requests will use the standard Content-Length header and be valid HTTP/1.0 POST requests. This makes the fetch client more tolerant of proxy servers which don't support HTTP/1.1 or the chunked transfer encoding.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-05  Smart push over HTTP: client side  (Shawn O. Pearce)

The git-remote-curl backend detects if the remote server supports the git-receive-pack service, and if so, runs git-send-pack in a pipe to dump the command and pack data as a single POST request.

The advertisements from the server that were obtained during the discovery are passed into git-send-pack before the POST request starts. This permits git-send-pack to operate largely unmodified.

For smaller packs (those under 1 MiB) an HTTP/1.0 POST with a Content-Length is used, permitting interaction with any server. The 1 MiB limit is arbitrary, but is sufficient to fit most deltas created by human authors against text sources with the occasional small binary file (e.g. few KiB icon image). The configuration option http.postBuffer can be used to increase (or shrink) this buffer if the default is not sufficient.

For larger packs which cannot be spooled entirely into the helper's memory space (due to http.postBuffer being too small), the POST request requires HTTP/1.1 and sets "Transfer-Encoding: chunked". This permits the client to upload an unknown amount of data in one HTTP transaction without needing to pregenerate the entire pack file locally.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
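[Editor's sketch of the size-based decision; this is not the verbatim post_rpc() code, and the callback/buffer names are illustrative.]

    if (request_len < http_post_buffer) {
        /* small request: buffer it and send a plain HTTP/1.0-compatible POST */
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, request_buf);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)request_len);
    } else {
        /* large request: stream it, which requires HTTP/1.1 chunked encoding */
        headers = curl_slist_append(headers, "Transfer-Encoding: chunked");
        curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_request_chunk);
        curl_easy_setopt(curl, CURLOPT_READDATA, &request_state);
    }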