path: root/builtin-pack-objects.c
2007-07-13  Pack-objects: properly initialize the depth value  (Nicolas Pitre)
Commit 5a235b5e was missing this little detail. Otherwise your pack will explode. Problem noted by Brian Downing. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-07-12  reduce git-pack-objects memory usage a little more  (Nicolas Pitre)
The delta depth doesn't have to be stored in the global object array structure since it is only used during the deltification pass. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-07-12  Add pack-objects window memory usage limit  (Brian Downing)
This adds an option (--window-memory=N) and configuration variable (pack.windowMemory = N) to limit the memory size of the pack-objects delta search window. This works by removing the oldest unpacked objects whenever the total size goes above the limit. It will always leave at least one object, though, so as not to completely eliminate the possibility of computing deltas. This is an extra limit on top of the normal window size (--window=N); the window will not dynamically grow above the fixed number of entries specified to fill the memory limit. With this, repacking a repository with a mix of large and small objects is possible even with a very large window. Cleaner and correct circular buffer handling courtesy of Nicolas Pitre. Signed-off-by: Brian Downing <bdowning@lavos.net> Acked-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
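A sketch of the eviction loop, assuming the circular-window bookkeeping of find_deltas() and a hypothetical free_unpacked() helper that frees one slot and returns the bytes released:

    /* Sketch: 'array' is the circular window of 'window' slots, 'count'
     * of them occupied, 'idx' the most recent slot.  Drop the oldest
     * unpacked buffers until we are back under the limit, but always
     * keep at least one object (count > 1) so deltas stay possible. */
    while (window_memory_limit &&
           mem_usage > window_memory_limit &&
           count > 1) {
            uint32_t tail = (idx + window - count) % window;  /* oldest */
            mem_usage -= free_unpacked(array + tail);
            count--;
    }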
2007-07-12  Don't try to delta if target is much smaller than source  (Brian Downing)
Add a new try_delta heuristic. Don't bother trying to make a delta if the target object size is much smaller (currently 1/32) than the source, as it's very likely not going to get a match. Even if it does, you will have to read at least 32x the size of the new file to reassemble it, which isn't such a good deal. This leads to a considerable performance improvement when deltifying a mix of small and large files with a very large window, because you don't have to wait for the large files to percolate out of the window before things start going fast again. Signed-off-by: Brian Downing <bdowning@lavos.net> Acked-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
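The cutoff itself is tiny; roughly, near the top of try_delta() (names assumed):

    /* A target 32x smaller than the source rarely deltas well, and
     * reassembling it would read at least 32x the target's size:
     * skip the attempt early. */
    if (trg_size < src_size / 32)
            return 0;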
2007-07-12  apply delta depth bias to already deltified objects  (Nicolas Pitre)
We already apply a bias on the initial delta attempt with max_size being a function of the base object depth. This has the effect of favoring shallower deltas even if deeper deltas could be smaller, and therefore creating a wider delta tree (see commits 4e8da195 and c3b06a69). This principle should also be applied to all delta attempts for the same object and not only the first attempt. With this the criterion for the best delta is not only its size but also its depth, so that a shallower delta might be selected even if it is larger than a deeper one. Even if some deltas get larger, they allow for wider delta trees, making the depth limit less quickly reached, so that better deltas can subsequently be found, keeping the resulting pack size even smaller. Runtime access to the pack should also benefit from shallower deltas. Testing on different repositories showed slightly faster repacks, smaller resulting packs, and a much nicer curve for delta depth distribution with no more peak at the maximum depth level. Improvements are even more significant with smaller depth limits. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
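Roughly, the bias looks like this in try_delta(), assuming the usual half-of-target starting bound for max_size (a sketch, not the actual code):

    /* Scale the acceptable delta size by the base's remaining depth
     * budget, so deeper bases must yield proportionally smaller deltas;
     * an existing delta still sets the size to beat. */
    max_size = trg_size / 2 - 20;
    max_size = max_size * (max_depth - src->depth) / max_depth;
    if (trg_entry->delta && trg_entry->delta_size - 1 < max_size)
            max_size = trg_entry->delta_size - 1;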
2007-07-09  pack-objects: Prefer shallower deltas if the size is equal  (Brian Downing)
Change "try_delta" so that if it finds a delta that has the same size but shallower depth than the existing delta, it will prefer the shallower one. This makes certain delta trees vastly less deep. Signed-off-by: Brian Downing <bdowning@lavos.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-06-07  War on whitespace  (Junio C Hamano)
This uses "git-apply --whitespace=strip" to fix whitespace errors that have crept in to our source files over time. There are a few files that need to have trailing whitespaces (most notably, test vectors). The results still passes the test, and build result in Documentation/ area is unchanged. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-06-02  Unify write_index_file functions  (Geert Bosch)
This patch unifies the write_index_file functions in builtin-pack-objects.c and index-pack.c. As the name "index" is overloaded in git, move in the direction of using "idx" and "pack idx" when referring to the pack index. There should be no change in functionality. Signed-off-by: Geert Bosch <bosch@gnat.com> Acked-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-05-31  fix repack with --max-pack-size  (Nicolas Pitre)
Two issues here: 1) git-repack -a --max-pack-size=10 on the GIT repo dies pretty quickly. There is a lot of confusion about deltas that were supposed to be reused from another pack but get stored undeltified due to the pack limit, so the object size doesn't match entry->size anymore. This test is not really worth the complexity of determining when it is valid, so get rid of it. 2) If the pack limit is reached, the object buffer is freed, including when it comes from cached delta data. In practice the object will be stored undeltified in a subsequent pack, but let's make sure no pointer to freed data survives by clearing entry->delta_data. I also reorganized that code a bit to make it more readable. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-05-29  builtin-pack-object: cache small deltas  (Martin Koegler)
Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-05-29  git-pack-objects: cache small deltas between big objects  (Martin Koegler)
Creating deltas between big blobs is a CPU and memory intensive task. In the writing phase, all (not reused) deltas are redone. This patch adds support for caching deltas from the deltifying phase, so that the writing phase is faster. The caching is limited to small deltas to avoid increasing memory usage very much. The implemented limit is (memory needed to create the delta)/1024. Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at> Signed-off-by: Junio C Hamano <junkio@cox.net>
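The admission rule, sketched with assumed names for the sizes involved:

    /* Keep a delta only if it is tiny next to the memory its creation
     * needed (source + target + delta index), per the 1/1024 rule. */
    unsigned long creation_mem = src_size + trg_size + delta_index_size;
    if (delta_size < creation_mem / 1024)
            trg_entry->delta_data = delta_buf;  /* reuse in write phase */
    else
            free(delta_buf);                    /* recompute when writing */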
2007-05-29  builtin-pack-objects: don't fail if delta is not possible  (Martin Koegler)
If builtin-pack-objects runs out of memory while finding the best deltas, it bails out with an error. If the delta index creation fails (because there is not enough memory), we can downgrade the error message to a warning and continue with the next object. Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at> Signed-off-by: Junio C Hamano <junkio@cox.net>
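A sketch of the downgraded failure path; create_delta_index() is the real diff-delta entry point, while the warning text here is illustrative:

    src->index = create_delta_index(src->data, src_size);
    if (!src->index) {
            static int warned;
            if (!warned++)
                    warning("suboptimal pack - out of memory");
            return 0;   /* skip this candidate, keep packing */
    }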
2007-05-29  Merge branch 'dh/repack' (early part)  (Junio C Hamano)
* 'dh/repack' (early part):
  Ensure git-repack -a -d --max-pack-size=N deletes correct packs
  pack-objects: clarification & option checks for --max-pack-size
  git-repack --max-pack-size: add option parsing to enable feature
  git-repack --max-pack-size: split packs as asked by write_{object,one}()
  git-repack --max-pack-size: write_{object,one}() respect pack limit
  git-repack --max-pack-size: new file statics and code restructuring
  Alter sha1close() 3rd argument to request flush only
2007-05-23  pack-objects: clarification & option checks for --max-pack-size  (Dana How)
Explain the special code for detecting a corner-case error, and complain about --stdout & --max-pack-size being used together. Signed-off-by: Dana L. How <danahow@gmail.com> Signed-off-by: Junio C Hamano <junkio@cox.net>
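The incompatibility check is presumably a one-liner of this shape (pack_to_stdout and pack_size_limit as the file-scope flags this series uses; sketch only):

    if (pack_to_stdout && pack_size_limit)
            die("--max-pack-size cannot be used with --stdout");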
2007-05-23  builtin-pack-objects: remove unnecessary code for no-delta  (Junio C Hamano)
As we do not consider objects marked as "no-delta" early, there is no point in checking whether the other objects already in the delta window are marked as such -- "no-delta" objects will not enter the window to begin with. Pointed out by Nico. Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-05-22  Teach "delta" attribute to pack-objects.  (Junio C Hamano)
This teaches pack-objects to use the .gitattributes mechanism so that the user can specify that certain blobs are not worth spending CPU cycles on attempting deltification. The name of the attribute is "delta", and when it is set to false, like this:

    == .gitattributes ==
    *.jpg -delta

such blobs are always stored in the plain-compressed base object representation. Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-05-22  pack-objects: pass fullname down to add_object_entry()  (Junio C Hamano)
Instead of giving a hash for grouping, pass fullname to add_object_entry(). I want to add "do not try deltifying this object" bit to object_entry based on the settings in .gitattributes, and hashing the name down too early would interfere with that plan. Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-05-21  git-repack --max-pack-size: add option parsing to enable feature  (Dana L. How)
Add --max-pack-size parsing and usage messages. Upgrade git-repack.sh to handle multiple packfile names, and build packfiles in GIT_OBJECT_DIRECTORY not GIT_DIR. Update documentation. Signed-off-by: Dana L. How <danahow@gmail.com> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-05-21  git-repack --max-pack-size: split packs as asked by write_{object,one}()  (Dana L. How)
Rewrite write_pack_file() to break to a new packfile whenever write_object/write_one request it, and correct the header's object count in the previous packfile. Change write_index_file() to write an index for just the objects in the most recent packfile. Signed-off-by: Dana L. How <danahow@gmail.com> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-05-21  git-repack --max-pack-size: write_{object,one}() respect pack limit  (Dana L. How)
With --max-pack-size, generate the appropriate write limit for each object and check against it before each group of writes. Update delta usability rules to handle base being in a previously- written pack. Inline sha1write_compress() so we know the exact size of the written data when it needs to be compressed. Detect and return write "failure". Signed-off-by: Dana L. How <danahow@gmail.com> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-05-21  git-repack --max-pack-size: new file statics and code restructuring  (Dana L. How)
Add "pack_size_limit", the limit specified by --max-pack-size, "written_list", the list of objects written to the current pack, and "nr_written", the number of objects in written_list. Put "base_name" at file scope again and add forward declarations. Move write_index_file() call from cnd_pack_objects() to write_pack_file() since only the latter will know how many times to call write_index_file(). Signed-off-by: Dana L. How <danahow@gmail.com> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-05-20  Merge branch 'dh/pack'  (Junio C Hamano)
* dh/pack: Custom compression levels for objects and packs
2007-05-10  Custom compression levels for objects and packs  (Dana How)
Add config variables pack.compression and core.loosecompression, and a --compression=level switch to pack-objects. Loose objects will be compressed using core.loosecompression if set, else core.compression if set, else Z_BEST_SPEED. Packed objects will be compressed using --compression=level if given, else pack.compression if set, else core.compression if set, else Z_DEFAULT_COMPRESSION. This is the "pack compression level". Loose objects added to a pack undeltified will be recompressed to the pack compression level if it is unequal to the current loose compression level by the preceding rules, or if the loose object was written while core.legacyheaders = true. Newly deltified loose objects are always compressed to the current pack compression level. Previously packed objects added to a pack are recompressed to the current pack compression level exactly when their deltification status changes, since the previous pack data cannot be reused. In either case, the --no-reuse-object switch from the first patch below will always force recompression to the current pack compression level, instead of assuming the pack compression level hasn't changed and pack data can be reused when possible. This applies on top of the following patches from Nicolas Pitre: [PATCH] allow for undeltified objects not to be reused [PATCH] make "repack -f" imply "pack-objects --no-reuse-object" Signed-off-by: Dana L. How <danahow@gmail.com> Signed-off-by: Junio C Hamano <junkio@cox.net>
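The precedence chain reads naturally as a small helper; this is a sketch of the rules above, not the actual code (a level of -1 means "not set" for the first three inputs):

    #include <zlib.h>

    static int resolve_pack_compression(int cli_level,   /* --compression=N */
                                        int pack_level,  /* pack.compression */
                                        int core_level)  /* core.compression */
    {
            if (cli_level >= 0)
                    return cli_level;
            if (pack_level >= 0)
                    return pack_level;
            if (core_level >= 0)
                    return core_level;
            return Z_DEFAULT_COMPRESSION;   /* zlib's own default */
    }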
2007-05-10  deprecate the new loose object header format  (Nicolas Pitre)
Now that we encourage and actively preserve objects in packed form more aggressively than we did at the time the new loose object format and core.legacyheaders were introduced, that extra loose object format doesn't appear to be worth it anymore. Because the packing of loose objects has to go through the delta match loop anyway, and since most of them should end up being deltified in most cases, there is really little advantage to having this parallel loose object format, as the CPU savings it might provide are rather lost in the noise in the end. This patch gets rid of core.legacyheaders, preserves the legacy format as the only writable loose object format, and deprecates the other one to keep things simpler. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-05-10  allow for undeltified objects not to be reused  (Nicolas Pitre)
Currently non-deltified object data is always reused when possible. This means that any change to core.compression has no effect on those objects as they don't get recompressed when repacking them. Let's add a --no-reuse-object flag to git-repack in order to force recompression of all objects when desired. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-05-09  Increase pack.depth default to 50  (Theodore Ts'o)
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-05-09  Add pack.depth option to git-pack-objects.  (Theodore Ts'o)
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Junio C Hamano <junkio@cox.net>
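The plumbing for such a knob is presumably a couple of lines in the config callback; git_config_int() and git_default_config() are the standard helpers of that era, and the surrounding function shape is assumed:

    static int git_pack_config(const char *k, const char *v)
    {
            if (!strcmp(k, "pack.depth")) {
                    depth = git_config_int(k, v);
                    return 0;
            }
            return git_default_config(k, v);
    }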
2007-05-07  Use GIT_OBJECT_DIR for temporary files of pack-objects  (Alex Riesen)
Signed-off-by: Alex Riesen <raa.lkml@gmail.com> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-23  make progress "title" part of the common progress interface  (Nicolas Pitre)
If the progress bar ends up in a box, better provide a title for it too. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-23  common progress display support  (Nicolas Pitre)
Instead of having this code duplicated in multiple places, let's have a common interface for progress display. If someday someone wishes to display a cheezy progress bar instead, then only one file will have to be changed. Note: I left merge-recursive.c out since it has a strange notion of progress, as it apparently increases the expected total number as it goes. Someone with more intimate knowledge of what that is supposed to mean might look at converting it to the common progress interface. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-23  pack-objects: make generated packfile read-only  (Junio C Hamano)
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-22  Fix 'quickfix' on pack-objects.  (Junio C Hamano)
The earlier quickfix forced world-readable permission bits. This updates it to honor umask and core.sharedrepository settings. Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-22  pack-objects: quickfix for permission modes.  (Junio C Hamano)
mkstemp() often creates the file in 0600 which means the resulting packfile is not readable by anybody other than the repository owner. Force 0644 for now, even though this is not strictly correct. Signed-off-by: Junio C Hamano <junkio@cox.net>
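The quickfix boils down to overriding mkstemp()'s restrictive mode; an illustrative fragment (the umask/core.sharedrepository handling lands in the follow-up fix above):

    /* mkstemp() creates the file 0600; force world-readable for now. */
    fd = mkstemp(tmpname);
    if (fd < 0)
            die("unable to create %s: %s", tmpname, strerror(errno));
    fchmod(fd, 0644);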
2007-04-20  pack-objects: remove obsolete comments  (Nicolas Pitre)
The sorted-by-sha and sorted-by-type arrays are no more. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-17  pack-objects: better check_object() performance  (Nicolas Pitre)
With a large number of objects, check_object() really thrashes the pack sliding map and the filesystem cache. It has a completely random access pattern, especially with old objects where delta replay jumps back and forth all over the pack. This patch improves things by: 1) sorting objects by their offset in the pack before calling check_object() so the pack access pattern is linear; 2) recording the object type at add_object_entry() time since it is already known in most cases; 3) recording the pack offset even for preferred_base objects; 4) avoiding the call to sha1_object_info() if at all possible. This limits pack accesses to the bare minimum and makes them perfectly linear. In the process check_object() was made clearer (to me at least). Note: I thought about walking the sorted_by_offset list backwards in get_object_details() so that if a pack happens to be larger than the available file cache, the cache would already have been populated with useful data from the beginning of the pack when find_deltas() is called. Strangely, testing (on Linux) showed absolutely no performance difference. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
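Step 1, for instance, amounts to sorting a scratch array of entry pointers by in-pack offset before the detail pass (names assumed):

    /* Sort so check_object() touches each pack page once, in order. */
    static int in_pack_offset_cmp(const void *pa, const void *pb)
    {
            const struct object_entry *a = *(struct object_entry * const *)pa;
            const struct object_entry *b = *(struct object_entry * const *)pb;
            if (a->in_pack_offset < b->in_pack_offset)
                    return -1;
            return a->in_pack_offset > b->in_pack_offset ? 1 : 0;
    }

    qsort(sorted_by_offset, nr_objects, sizeof(*sorted_by_offset),
          in_pack_offset_cmp);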
2007-04-17  pack-objects: make in_pack_header_size a variable of its own  (Nicolas Pitre)
It currently aliases delta_size on the principle that reused deltas won't go through the whole delta matching loop, hence delta_size was unused. This is not true if a given delta doesn't find its base in the pack, though. But we need that information even for whole object data reuse. Well, in short, the current state looks awful and is prone to bugs. It just works fine now because try_delta() tests trg_entry->delta before using trg_entry->delta_size, but that is a bit subtle and I was wondering for a while why things just worked fine... even if I'm guilty of having introduced this abomination myself in the first place. Let's do the sensible thing instead with no ambiguity, which is to have a separate variable for in_pack_header_size. This might even help future optimizations. While at it, let's reorder some struct object_entry members so they all align well with their own width, regardless of the architecture or the size of off_t. Some memory saving is to be expected from this alone. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-17  pack-objects: get rid of create_final_object_list()  (Nicolas Pitre)
Because we don't have to know the SHA1 (hence the name) of the pack up front anymore, let's get rid of yet another global sorted object list and sort it only in write_index_file(), then compute the object list SHA1 on the fly. This has the advantage of saving another chunk of memory, and the sorted list SHA1 won't be computed needlessly on servers during a fetch. Of course the cunning plan is also to make write_index_file() much like the function with the same name in index-pack.c for eventual easy sharing. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-17  pack-objects: get rid of reuse_cached_pack  (Nicolas Pitre)
This capability is practically never useful, and therefore never tested, because it is fairly unlikely that the requested pack will be already available. Furthermore it is of little gain over the ability to reuse existing pack data. In fact the ability to change delta type on the fly when reusing delta data is a nice thing that has almost no cost and allows greater backward compatibility with a client's capabilities than if the client is blindly sent a whole pack without any discrimination. And this "feature" is simply in the way of other cleanups. Let's get rid of it. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-17  pack-objects: clean up list sorting  (Nicolas Pitre)
Get rid of sort_comparator() as it imposes a run-time double indirect function call for little compile-time type-checking gain. Also get rid of create_sorted_list() as it only has one user, which would be just as fine doing its sorting locally. Eventually the list of deltifiable objects might be shorter than the whole object list. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-17  pack-objects: rework check_delta_limit usage  (Nicolas Pitre)
Objects that have delta "children" from pack data reuse must consider the depth of their deepest child when they try to deltify themselves, so those children don't become too deep. However, in the context of a "thin" pack, the delta children depth was skipped entirely on the presumption that the pack was always going to be exploded on the receiving end, hence the delta length wasn't an issue. Now that we keep received packs as-is and reuse pack data when repacking, those packs do contain delta chains that are longer than expected. Worse, those delta chains may even grow longer when the pack is further repacked into another thin pack for a subsequent transmission. So this patch restores strict delta length even for thin packs, and it moves check_delta_limit() usage directly into the delta loop where it is needed. This way the delta_limit can be removed from struct object_entry as well. Oh, and the initial value was wrong too. The progress_interval() function was moved to a more logical location in the process. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-17  pack-objects: equal objects in size should delta against newer objects  (Nicolas Pitre)
Before finding the best delta combinations, we sort objects by name hash, then by size, then by their position in memory. Then we walk the list backwards to test delta candidates. We hope that a bigger size usually means a newer object. But a bigger address in memory does not mean a newer object. So the last comparison must be reversed. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
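A hedged reconstruction of the comparator over an array of entry pointers (the real sort also keys on type; only the relevant fields are shown):

    static int try_delta_sort_cmp(const void *pa, const void *pb)
    {
            const struct object_entry *a = *(struct object_entry * const *)pa;
            const struct object_entry *b = *(struct object_entry * const *)pb;
            if (a->hash != b->hash)
                    return a->hash < b->hash ? -1 : 1;  /* name hash first */
            if (a->size != b->size)
                    return a->size < b->size ? -1 : 1;  /* bigger ~ newer */
            /* Entries were appended to one array in recency order, so a
             * lower address means a newer object; this tie-break used to
             * point the other way, which is what the fix reverses. */
            return a < b ? -1 : 1;
    }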
2007-04-17  pack-objects: optimize preferred base handling a bit  (Nicolas Pitre)
Let's avoid some cycles when there is no base to test against, and avoid unnecessary object lookups. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-12  clean up add_object_entry()  (Nicolas Pitre)
This function used to call locate_object_entry_hash() _twice_ per added object while only once should suffice. Let's reorganize that code a bit. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-10  validate reused pack data with CRC when possible  (Nicolas Pitre)
This replaces the inflate validation with a CRC validation when reusing data from a pack which uses index version 2. That makes repacking much safer against corruptions, and it should be a bit faster too. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
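A sketch of the reuse-time check this enables; read_idx_crc32() is a hypothetical accessor for the v2 CRC table, and the die() message is illustrative:

    #include <zlib.h>

    uint32_t expect = read_idx_crc32(p, obj_nr);
    uint32_t actual = crc32(crc32(0, Z_NULL, 0), buf, len);
    if (actual != expect)
            die("corrupt raw data for object %u in %s", obj_nr, p->pack_name);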
2007-04-10  allow forcing index v2 and 64-bit offset threshold  (Nicolas Pitre)
This is necessary for testing the new capabilities in some automated way without having an actual 4GB+ pack. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-10  pack-objects: learn about pack index version 2  (Nicolas Pitre)
Pack index version 2 goes as follows:

 - 8 bytes of header with signature and version.
 - 256 entries of 4-byte first-level fan-out table.
 - Table of sorted 20-byte SHA1 records for each object in pack.
 - Table of 4-byte CRC32 entries for raw pack object data.
 - Table of 4-byte offset entries for objects in the pack if offset is representable with 31 bits or less, otherwise it is an index in the next table with top bit set.
 - Table of 8-byte offset entries indexed from previous table for offsets which are 32 bits or more (optional).
 - 20-byte SHA1 checksum of sorted object names.
 - 20-byte SHA1 checksum of the above.

The object SHA1 table is all contiguous, so a future pack format that would contain this table directly won't require big changes to the code. It is also tighter for slightly better cache locality when looking up entries.

Support for large packs exceeding 31 bits in size won't impose an index size bloat for packs within that range that don't need a 64-bit offset. And because newer objects, which are likely to be the most frequently used, are located at the beginning of the pack, they won't pay the 64-bit offset lookup at run time either, even if the pack is large.

Right now an index version 2 is created only when the biggest offset in a pack reaches 31 bits. It might be a good idea to always use index version 2 eventually to benefit from the CRC32 it contains when reusing pack data while repacking.

[jc: with the "oops" fix to keep track of the last offset correctly] Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
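A sketch of how the two offset tables interlock when the index is written; put_be32()/put_be64() are hypothetical big-endian writers:

    if (offset <= 0x7fffffff) {
            put_be32(idx, (uint32_t)offset);         /* fits in 31 bits */
    } else {
            /* top bit set: low 31 bits index into the 64-bit table */
            put_be32(idx, 0x80000000 | nr_large);
            put_be64(idx64, (uint64_t)offset);
            nr_large++;
    }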
2007-04-10  compute a CRC32 for each object as stored in a pack  (Nicolas Pitre)
The most important optimization for performance when repacking is the ability to reuse data from a previous pack as-is and bypass any delta or even SHA1 computation by simply copying the raw data from one pack to another directly. The problem with this is that any data corruption within a copied object would go unnoticed and the new (repacked) pack would be self-consistent with its own checksum despite containing a corrupted object. This is a real issue that already happened at least once in the past.

In some attempt to prevent this, we validate the copied data by inflating it and making sure no error is signaled by zlib. But this is still not perfect, as a significant portion of a pack's content is made of object headers and references to delta base objects which are not deflated and therefore not validated when repacking, making pack data reuse still not as safe as it could be. Of course a full SHA1 validation could be performed, but that implies full data inflating and delta replaying, which is extremely costly, a cost the data reuse optimization was designed to avoid in the first place.

So the best solution is simply to store a CRC32 of the raw pack data for each object in the pack index. This way any object in a pack can be validated before being copied as-is into another pack, including the header and any other non-deflated data.

Why CRC32 instead of a faster checksum like Adler32? Quoting Wikipedia: Jonathan Stone discovered in 2001 that Adler-32 has a weakness for very short messages. He wrote "Briefly, the problem is that, for very short packets, Adler32 is guaranteed to give poor coverage of the available bits. Don't take my word for it, ask Mark Adler. :-)" The problem is that sum A does not wrap for short messages. The maximum value of A for a 128-byte message is 32640, which is below the value 65521 used by the modulo operation. An extended explanation can be found in RFC 3309, which mandates the use of CRC32 instead of Adler-32 for SCTP, the Stream Control Transmission Protocol.

In the context of a GIT pack, we have lots of small objects, especially deltas, which are likely to be quite small and in a size range for which Adler32 is deemed not to be sufficient. Another advantage of CRC32 is the possibility of recovery from certain types of small corruptions like single-bit errors, which are the most probable type of corruption.

OK, what this patch does is compute the CRC32 of each object written to a pack within pack-objects. It is not written to the index yet, and it is obviously not validated when reusing pack data yet either. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
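Accumulating the per-object CRC is cheap with zlib while the raw bytes go out; a minimal sketch, assuming a crc32 field on the entry:

    #include <zlib.h>

    entry->crc32 = crc32(0, Z_NULL, 0);            /* initial CRC value */
    /* ... then, for every raw chunk written out for this object: */
    entry->crc32 = crc32(entry->crc32, buf, len);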
2007-04-10  add overflow tests on pack offset variables  (Nicolas Pitre)
Change a few size and offset variables to more appropriate types, then add overflow tests on those offsets. This prevents any bad data from being generated/processed if off_t happens not to be large enough to handle some big packs. Better be safe than sorry. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
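An illustrative shape for such a guard (needs <stdint.h> and <limits.h>; a sketch, not the exact checks added):

    /* Compute in uintmax_t so the comparison is well defined, then make
     * sure the result still fits in (signed) off_t before using it. */
    uintmax_t limit = (uintmax_t)1 << (CHAR_BIT * sizeof(off_t) - 1);
    uintmax_t next = (uintmax_t)offset + size;
    if (next >= limit)
            die("pack too large for current definition of off_t");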
2007-04-10  make overflow test on delta base offset work regardless of variable size  (Nicolas Pitre)
This patch introduces the MSB() macro to obtain the desired number of most significant bits from a given variable independently of the variable type. It is then used to better implement the overflow test on the OBJ_OFS_DELTA base offset variable with the property of always working correctly regardless of the type/size of that variable. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
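A hedged reconstruction of the idea (the real macro lives in a shared header and its exact spelling may differ):

    #include <limits.h>

    #define bitsizeof(x)  (CHAR_BIT * sizeof(x))
    /* keep only the 'bits' most significant bits of x, for any width of x */
    #define MSB(x, bits)  ((x) & (~0ULL << (bitsizeof(x) - (bits))))

    /* Decoding an OBJ_OFS_DELTA base offset 7 bits at a time: if any of
     * the top 7 bits are already set, the next left shift would lose
     * bits, i.e. the offset overflows the variable. */
    if (MSB(base_offset, 7))
            die("delta base offset overflow");
    base_offset = (base_offset << 7) + (c & 127);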
2007-04-10  get rid of num_packed_objects()  (Nicolas Pitre)
The coming index format change doesn't allow the number of objects to be determined from the size of the index file directly. Instead, let's initialize a field in the packed_git structure with the object count when the index is validated, since the count is always known at that point. While at it, let's reorder some struct packed_git fields to avoid padding due to the 64-bit alignment needed for some of them. Signed-off-by: Nicolas Pitre <nico@cam.org> Signed-off-by: Junio C Hamano <junkio@cox.net>