A topological sort of a set of revision ids is currently an expensive operation: its runtime is linear in the size of the whole revision graph.
The idea is to assign a number to each revision in such a way that sorting revisions according to their number creates a topological sorting of those revisions at the same time.
These numbers are local. They cannot be shared among different databases, because you can't prevent collisions.
They are also not meant to be presented to the user, especially not to replace the revision hashes, but are used internally for optimizations.
Store, together with each revision, the length of the longest path from that revision back to a root of the revision graph. As we always have all ancestors of a revision in the local revision graph, this number (let's call it the 'height') is a constant for that revision and needs to be computed only once.
Note that there might be more than one root in the local revision graph, and there is (at least one) longest path to each of them. The height is the maximum of the lengths of all these paths.
Now, if you have a set of revisions, you can topologically sort them by simply sorting them according to their heights. This is not more expensive than an integer sort and depends only on the size of the set you are sorting.
      A                0
     /|\              /|\
    B C D            1 1 1
    | |/ \           | |/ \
    | E   |    =>    | 2   |
    |/ \  |          |/ \  |
    F   G |          3   3 |
    |   |/           |   |/
    H   I            4   4
    :   :            :   :
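As an illustration, here is a small sketch (not monotone's actual code) that computes the heights for the example graph above and then toposorts a revision set by plain integer sort:

```python
# The example graph above, as a child -> parents map.
parents = {
    "A": [],
    "B": ["A"], "C": ["A"], "D": ["A"],
    "E": ["C", "D"],
    "F": ["B", "E"], "G": ["E"],
    "H": ["F"], "I": ["G", "D"],
}

def height(rev, memo={}):
    """Length of the longest path from rev back to a root."""
    if rev not in memo:
        ps = parents[rev]
        memo[rev] = 0 if not ps else 1 + max(height(p) for p in ps)
    return memo[rev]

heights = {r: height(r) for r in parents}
# Matches the diagram: A=0, B/C/D=1, E=2, F/G=3, H/I=4.
assert heights == {"A": 0, "B": 1, "C": 1, "D": 1,
                   "E": 2, "F": 3, "G": 3, "H": 4, "I": 4}

# Topologically sorting any subset is now just an integer sort:
subset = ["H", "C", "E", "A", "G"]
print(sorted(subset, key=height))  # ['A', 'C', 'E', 'G', 'H']
```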
The ordering created by this algorithm doesn't always keep linear chains of revisions together. Instead, parallel branches are zipped together in a way that makes this sorting quite unusable, e.g. for log output.
      A             0
     / \           / \
    B   C         1   1
    |   |         |   |
    D   E   =>    2   2
    |   |         |   |
    F   G         3   3
     \ /           \ /
      H             4
A topological sort of this graph's nodes according to their heights yields ABCDEFGH, which is - at least for log output - highly undesirable. We want either ABDFCEGH or ACEGBDFH, i.e. an order that keeps each of the two branches together.
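A quick sketch (not monotone code) showing how sorting the graph above by simple heights zips the two branches together:

```python
# Heights from the diagram above.
heights = {"A": 0, "B": 1, "C": 1, "D": 2,
           "E": 2, "F": 3, "G": 3, "H": 4}
# Ties broken by name only to get a deterministic illustration.
order = "".join(sorted(heights, key=lambda r: (heights[r], r)))
print(order)  # "ABCDEFGH": the B-D-F and C-E-G chains are interleaved
```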
Instead of simple integers we now use tuples of integers, written in a dotted notation a.b.c. We need an ordering for these tuples. The ordering is determined by the first differing element, starting with the leftmost, with a missing element considered lower than anything else. So
1.2.3 = 1.2.3,
1.2 < 1.3,
1.2 < 1.2.3,
1.4 > 1.3.1, and so on.
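Conveniently, Python's built-in tuple comparison follows exactly this rule (element-wise from the left, with a missing element lower than any present one), so the examples above can be checked directly:

```python
# The dotted numbers above, as Python tuples.
assert (1, 2, 3) == (1, 2, 3)
assert (1, 2) < (1, 3)
assert (1, 2) < (1, 2, 3)   # a strict prefix sorts first
assert (1, 4) > (1, 3, 1)
```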
Now, consider a node with the number a.b.c whose children are not yet numbered. The first child of this node gets the number a.b.(c+1); the other N-1 children get the numbers a.b.c.0.0, a.b.c.1.0, ..., a.b.c.(N-2).0. When a node has multiple parents, it gets the maximum of all the numbers it would be assigned according to this rule. Note that the rule is asymmetrical: one child gets a tuple of the same length as its parent's, while the other children get longer tuples.
Here is an example:
      A                0
     /|\              /|\
    B C D            1 0.0.0 0.1.0
    | |/ \           | |    / \
    | E   |    =>    | 0.1.1   |
    |/ \  |          |/     \  |
    F   G |          2   0.1.2 |
    |   |/           |       |/
    H   I            3   0.1.3
    :   :            :       :
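One way the numbering rule could be realized is sketched below (this is not monotone's code; in particular, which child counts as the "first" is a free choice, and here it is fixed to match the example above):

```python
def child_candidate(parent_height, child_index):
    """Number a child would get from one parent under the rule."""
    if child_index == 0:                         # first child: a.b.(c+1)
        return parent_height[:-1] + (parent_height[-1] + 1,)
    return parent_height + (child_index - 1, 0)  # others: a.b.c.(i-1).0

def assign(parent_heights_with_index):
    """A node with several parents gets the maximum candidate."""
    return max(child_candidate(h, i) for h, i in parent_heights_with_index)

# Reproduce the example graph above.
A = (0,)
B = child_candidate(A, 0)           # (1,)
C = child_candidate(A, 1)           # (0, 0, 0)
D = child_candidate(A, 2)           # (0, 1, 0)
E = assign([(C, 0), (D, 0)])        # max((0,0,1), (0,1,1)) = (0,1,1)
F = assign([(B, 0), (E, 1)])        # max((2,), (0,1,1,0,0)) = (2,)
G = child_candidate(E, 0)           # (0, 1, 2)
H = child_candidate(F, 0)           # (3,)
I = assign([(G, 0), (D, 1)])        # max((0,1,3), (0,1,0,0,0)) = (0,1,3)

assert (B, C, D) == ((1,), (0, 0, 0), (0, 1, 0))
assert E == (0, 1, 1) and F == (2,) and G == (0, 1, 2)
assert H == (3,) and I == (0, 1, 3)
```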
This schema has the advantage of keeping linear chains of revisions together: if a revision A has only one child B, and B has only one parent A, no other revision will have a number that sorts between those of A and B. The disadvantage is that sorting is a bit more expensive than with simple heights, because tuples have to be compared now. The tuples might also become longer over time when there are abandoned branches that accidentally got the shorter tuples. This can't be avoided a priori. However, you can always renumber the whole graph, taking into account that very old branches will probably never be merged, and assign longer tuples to them. A good strategy is to identify a branch that will likely always be merged back (e.g. n.v.m) and try to always assign the shorter tuples to revisions carrying that branch cert. For the current nvm* graph, the longest tuples have 13 elements.
Here is the example with the problematic log output from above, now with complex heights:
      A             0
     / \           / \
    B   C         1   0.0.0
    |   |         |   |
    D   E   =>    2   0.0.1
    |   |         |   |
    F   G         3   0.0.2
     \ /           \ /
      H             4
The topological sort according to the nodes' heights now yields ACEGBDFH, which keeps both branches intact.
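The same sort as before, now over the complex heights from the diagram above (a sketch, with the dotted numbers written as Python tuples):

```python
# Complex heights from the diagram above.
heights = {"A": (0,), "B": (1,), "C": (0, 0, 0), "D": (2,),
           "E": (0, 0, 1), "F": (3,), "G": (0, 0, 2), "H": (4,)}
order = "".join(sorted(heights, key=heights.get))
print(order)  # "ACEGBDFH": each branch now stays together
assert order == "ACEGBDFH"
```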
A complex height is an array of integers. If you store the integers consecutively and in big-endian format, comparing the resulting byte sequences with memcmp() yields the correct result for the ordering described earlier. The advantage is that - once created - revision heights need not be interpreted, but can be passed around as blobs. Moreover, you can even ask the database to perform comparisons for you:
SELECT ... FROM revisions JOIN heights ON revisions.id=heights.revision WHERE ... ORDER BY heights.height
This yields a toposorted list of revisions even for complex heights, because the database (at least SQLite) sorts the blobs using memcmp().
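A small sketch of this serialization, using Python bytes objects (which compare byte-wise, as memcmp() and SQLite's BLOB ordering do) to check that blob comparison agrees with tuple comparison:

```python
import struct

def to_blob(height):
    """Each element as a 4-byte big-endian unsigned integer."""
    return b"".join(struct.pack(">I", n) for n in height)

# Ordered pairs from the tuple-ordering examples earlier.
pairs = [((1, 2), (1, 3)), ((1, 2), (1, 2, 3)), ((1, 3, 1), (1, 4))]
for lo, hi in pairs:
    assert lo < hi                      # tuple ordering
    assert to_blob(lo) < to_blob(hi)    # byte-wise blob ordering agrees
```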
Low height numbers are much more common than high height numbers, so using four bytes for each height number may be wasteful. Instead, we could encode each integer using a prefix code such that low integers have short encodings. One such encoding is that of SQLite variable length integers, which are encoded in a big endian style such that the high bit of each byte indicates whether it is the last byte in the integer.
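A sketch of this basic varint scheme as described here (real SQLite varints additionally cap the length at nine bytes, which is ignored in this illustration):

```python
def varint(n):
    """Big-endian 7-bit groups; high bit set on all but the last byte."""
    groups = [n & 0x7F]
    n >>= 7
    while n:
        groups.append((n & 0x7F) | 0x80)  # continuation bytes
        n >>= 7
    return bytes(reversed(groups))

assert varint(2**14 - 1).hex() == "ff7f"
assert varint(2**14).hex() == "818000"
# ...but byte-wise comparison now misorders them:
assert not (varint(2**14 - 1) < varint(2**14))
```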
One disadvantage of this encoding is that we can no longer sort by string comparison, because 2^14-1 encodes to FF 7F while 2^14 encodes to 81 80 00. By choosing a different encoding, we can use string comparison again. The simplest such encoding works as follows: a k-byte encoded integer consists of k-1 "1" bits, followed by a zero bit, followed by 7k "payload" bits encoding the integer itself in big-endian form. If each integer is encoded using the shortest possible encoding, then we can see that the ordering property is preserved by considering two different cases:
- if a is longer than b, then b will have a zero bit earlier than a and will sort earlier. Because the shortest possible encoding is always used, the longer word must encode the larger integer.
- if a is the same length as b, then the prefixes will be the same and comparison of the integers will decide their order in the ordinary way.
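A sketch of this order-preserving prefix code, checking both the worked byte values and the ordering property:

```python
def encode(n):
    """k-1 "1" bits, a "0" bit, then 7k big-endian payload bits."""
    k = 1
    while n >= 1 << (7 * k):       # smallest k with n < 2**(7k)
        k += 1
    prefix = (1 << (k - 1)) - 1    # k-1 one bits
    return ((prefix << (7 * k + 1)) | n).to_bytes(k, "big")

assert encode(2**14 - 1).hex() == "bfff"   # largest two-byte value
assert encode(2**14).hex() == "c04000"
# Byte-wise comparison now matches integer order:
vals = [0, 1, 127, 128, 2**14 - 1, 2**14, 2**20]
assert sorted(vals) == sorted(vals, key=encode)
```

Note that 2^14 encodes to C0 40 00 here, not C0 00 00: the three-byte codes C0 00 00 through C0 3F FF go unused, which is the waste addressed two paragraphs below.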
If this technique is used, it would be wasted effort to convert between this representation and a more standard integer representation. It would be less work to directly implement the key operations of "compare", "append zero", and "increment last element" on this data structure. These operations can be directly implemented in a simple and efficient way.
We can get a slightly more efficient encoding if we eliminate wasted encodings by considering longer strings to be "inherently" higher than shorter ones, so that the successor of the two-byte encoding BF FF is the three-byte encoding C0 00 00. This change would also simplify the direct implementation described in the previous paragraph.