# A tree structure for log-time quadrant counting

This article describes a general form of data structure which permits two-dimensional data to be stored in such a way that a log-time lookup can reliably return the total amount of data in an arbitrary quadrant (i.e. a region of the plane both leftward and downward of a user-specified point).

The program `agedu`, which uses a data structure of this type for its on-disk index files, is used as a case study.

## 1. The problem

The basic problem can be stated as follows: you have a collection of `N` points, each specified as an `(x,y)` coordinate pair, and you want to be able to efficiently answer questions of the general form ‘How many points have both `x < x0` and `y < y0`?’, for arbitrary `(x0,y0)`.

A slightly enhanced form of the problem, which is no more difficult to solve, allows each point to have a weight as well as coordinates; now the question to be answered is ‘What is the total weight of the points with both `x < x0` and `y < y0`?’. The previous version of the question, of course, is a special case of this in which all weights are set to 1.

The data structure presented in this article answers any question of this type in time `O(log N)`.

## 2. The one-dimensional case

To begin with, we address the one-dimensional analogue of the problem. If we have a set of points which each have an `x`-coordinate and a weight, how do we represent them so as to be able to find the total weight of all points with `x < x0`?

An obvious solution is to sort the points into order of their `x`-coordinate, and alongside each point to list the total weight of everything up to and including that point. Then, given any `x0`, a log-time binary search will find the last point in the list with `x < x0`, and we can immediately read off the cumulative weight figure.
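This static structure can be sketched in a few lines of Python (the names `build` and `weight_below` are illustrative, not from any particular implementation): sort the points by `x`, store running weight totals alongside them, and let the standard `bisect` module do the binary search.

```python
# Static 1-D structure: points sorted by x, with cumulative weights
# stored alongside, so a binary search answers "total weight of points
# with x < x0" in O(log N) time.
import bisect
from itertools import accumulate

def build(points):
    """points: list of (x, weight) pairs. Returns (sorted_xs, cumulative_weights)."""
    pts = sorted(points)
    xs = [x for x, _ in pts]
    cum = list(accumulate(w for _, w in pts))
    return xs, cum

def weight_below(xs, cum, x0):
    """Total weight of points with x < x0."""
    i = bisect.bisect_left(xs, x0)   # number of points with x < x0
    return cum[i - 1] if i > 0 else 0

xs, cum = build([(5, 2), (1, 3), (9, 1), (5, 4)])
weight_below(xs, cum, 6)   # 3 + 2 + 4 = 9
```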

Now let's be more ambitious. What if the set of points were constantly changing – new points arriving, old points going away – and we wanted a data structure which would cope efficiently with dynamic updates while maintaining the ability to answer these count queries?

An efficient solution begins by storing the points in a balanced tree, sorted by their `x`-coordinates. Any of the usual types – AVL tree, red-black tree, the various forms of B-tree – will suffice. We then annotate every node of the tree with an extra field giving the total weight of the points in and below that node.

These annotations can be maintained through updates to the tree, without sacrificing the `O(log N)` time bound on insertions or deletions: an insertion or deletion changes the total weight of only the `O(log N)` nodes on the path from the root to the affected element, and a rebalancing operation such as a rotation need only recompute the annotations of the handful of nodes it moves, each from its children. So we can start with an empty tree and insert a complete set of points, or we can insert and delete points in arbitrary order, and the tree annotations will remain valid at all times.

A balanced tree sorted by `x` can easily be searched to find the last point with `x < x0`, for any `x0`. If the tree has total-weight annotations as described above, we can arrange for this search to calculate the total weight of all points with `x < x0`: every time we examine a tree node and decide which subtree of it to descend to, we add to a running total the weight annotation of every subtree to the left of the chosen one, plus the weights of any elements in the node itself which lie to the left of it. When we reach the bottom of the tree and find our chosen subtree is `NULL`, the running total is the required answer.
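Here is a sketch of the annotated tree, using a plain (unbalanced) binary search tree for brevity; a real implementation would use an AVL, red-black or B-tree and update the same annotations during rebalancing. The names (`Node`, `insert`, `weight_below`) are illustrative.

```python
# Each node stores `total`, the weight of everything in its subtree.
class Node:
    def __init__(self, x, w):
        self.x, self.w = x, w
        self.left = self.right = None
        self.total = w          # weight of this node plus both subtrees

def insert(node, x, w):
    if node is None:
        return Node(x, w)
    node.total += w             # every node on the path gains the new weight
    if x < node.x:
        node.left = insert(node.left, x, w)
    else:
        node.right = insert(node.right, x, w)
    return node

def weight_below(node, x0):
    """Total weight of points with x < x0, accumulated while descending."""
    total = 0
    while node is not None:
        if node.x < x0:
            # this node and its whole left subtree lie strictly left of x0
            total += node.w + (node.left.total if node.left else 0)
            node = node.right
        else:
            node = node.left
    return total
```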

So this data structure allows us to maintain a changing set of points in such a way that we can do an efficient one-dimensional count query at any time.

## 3. An incremental approach

Now we'll use the above one-dimensional structure to answer a restricted form of the original 2-D problem. Suppose we have some large set of `(x0,y0)` pairs for which we want to answer counted-quadrant queries, and suppose (for the moment) that we have all of the queries presented in advance. Is there any way we can do that efficiently?

There is. We sort our points by their `y`-coordinate, and then go through them in that order adding them one by one to a balanced tree sorted by `x` as described above.

So for any query coordinates `(x0,y0)`, there must be some moment during that process at which we have added to our tree precisely those points with `y < y0`. At that moment, we could search the tree to find the total weight of everything it contained with `x < x0`, and that would be the answer to our two-dimensional query.

Hence, if we also sorted our queries into order by `y0`, we could progress through the list of queries in parallel with the list of points (much like merging two sorted lists), answering each query at the moment when the tree contained just the right set of points to make it easy.
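The merge of sorted points with sorted queries can be sketched as follows. For brevity, a Fenwick (binary indexed) tree over the sorted `x`-coordinates stands in for the annotated balanced tree of the previous section; it supports the same two operations (add a weight; total weight with `x < x0`) in `O(log N)` each. All names here are illustrative.

```python
# Offline batch answering: process points and queries in y order,
# merging the two sorted lists.
import bisect

def answer_offline(points, queries):
    """points: (x, y, w) triples; queries: (x0, y0) pairs.
    Returns the answers to the queries, in the order given."""
    xs = sorted(x for x, _, _ in points)     # for coordinate compression
    fenwick = [0] * (len(xs) + 1)

    def add(x, w):
        i = bisect.bisect_left(xs, x) + 1
        while i <= len(xs):
            fenwick[i] += w
            i += i & -i

    def weight_below(x0):
        i = bisect.bisect_left(xs, x0)       # count of xs strictly < x0
        total = 0
        while i > 0:
            total += fenwick[i]
            i -= i & -i
        return total

    pts = sorted(points, key=lambda p: p[1])            # points by y
    order = sorted(range(len(queries)), key=lambda i: queries[i][1])
    answers = [0] * len(queries)
    j = 0
    for qi in order:                                    # queries by y0
        x0, y0 = queries[qi]
        while j < len(pts) and pts[j][1] < y0:          # add points with y < y0
            add(pts[j][0], pts[j][2])
            j += 1
        answers[qi] = weight_below(x0)
    return answers
```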

## 4. Copy-on-write

In real life, of course, we typically don't receive all our queries in a big batch up front. We want to construct a data structure capable of answering any query efficiently, and then be able to deal with queries as they arise.

A data structure capable of this, it turns out, is only a small step away from the one described in the previous section. The small step is copy-on-write.

As described in the previous section, we go through our list of points in order of their `y`-coordinate, and we add each one in turn to a balanced tree sorted by `x` with total-weight annotations. The catch is that, this time, we never modify any node in the tree: whenever the process of inserting an element into a tree wants to modify a node, we instead make a copy of that node containing the modified data. The parent of that node must now be modified (even if it would previously not have needed modification) so that it points at the copied node instead of the original – so we do the same thing there, and copy that one too, and so on up to the root.

So after we have done a single insertion by this method, we end up with a new tree root which describes the new tree – but the old tree root, describing the tree before the insertion, is still valid, because every node reachable from the old root is unchanged.

Therefore, if we start with an empty tree and repeatedly do this, we will end up with a distinct tree root for every stage of the process – one for each prefix of the sorted list of points – and all of them will be valid at once. It's as if we had done the incremental tree construction in the previous section, but could rewind time to any point in the process.

So now all we have to do is make a list of those tree roots, with their associated `y`-coordinates. Any way of doing this will do – another balanced tree, or a simple sorted list to be binary-searched, or something more exotic, whatever is convenient.

Then we can answer an arbitrary quadrant query using only a pair of log-time searches: given a query coordinate pair `(x0,y0)`, we look through our list of tree roots to find the one describing precisely the set of points with `y < y0`, and then do a one-dimensional count query into that tree for the total weight of points with `x < x0`. Done!
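The whole structure can be sketched like this: a persistent (copy-on-write) version of the annotated tree, again unbalanced for brevity where a real implementation would rebalance. Each insertion copies only the nodes on its path and returns a new root, leaving every old root valid. All names are illustrative.

```python
import bisect

class Node:
    __slots__ = ('x', 'w', 'left', 'right', 'total')
    def __init__(self, x, w, left, right):
        self.x, self.w, self.left, self.right = x, w, left, right
        self.total = w + (left.total if left else 0) + (right.total if right else 0)

def insert(node, x, w):
    """Copy-on-write insert: returns a new root; `node` is never modified."""
    if node is None:
        return Node(x, w, None, None)
    if x < node.x:
        return Node(node.x, node.w, insert(node.left, x, w), node.right)
    else:
        return Node(node.x, node.w, node.left, insert(node.right, x, w))

def weight_below(node, x0):
    """Total weight of points in this tree with x < x0."""
    total = 0
    while node is not None:
        if node.x < x0:
            total += node.w + (node.left.total if node.left else 0)
            node = node.right
        else:
            node = node.left
    return total

def build(points):
    """points: (x, y, w) triples. Returns (sorted_ys, roots), where
    roots[i] is the tree containing the first i points in y order."""
    pts = sorted(points, key=lambda p: p[1])
    ys = [y for _, y, _ in pts]
    roots, t = [None], None
    for x, _, w in pts:
        t = insert(t, x, w)
        roots.append(t)
    return ys, roots

def quadrant(ys, roots, x0, y0):
    """Total weight of points with x < x0 and y < y0: two log-time searches."""
    i = bisect.bisect_left(ys, y0)   # number of points with y < y0
    return weight_below(roots[i], x0)
```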

The nice thing about all of this is that nearly all the nodes in each tree are shared with the next one. Consider: since the operation of adding an element to a balanced tree takes `O(log N)` time, it can modify at most `O(log N)` nodes. Each of these node-copying insertion processes must copy all of those nodes, but need not copy any others – so it creates at most `O(log N)` new nodes. Hence, the total storage used by the combined set of trees is `O(N log N)`, much smaller than the `O(N^2)` you'd expect if the trees were all separate or even mostly separate.

## 5. Limitations

The one-dimensional data structure described at the start of this article is dynamically updatable: if the set of points to be searched changes, the structure can be modified efficiently without losing its searching properties. The two-dimensional structure we ended up with is not: if a single point changes its coordinates or weight, or appears, or disappears, then the whole structure must be rebuilt.

Since the technique I used to add an extra dimension is critically dependent on the dynamic updatability of the one-dimensional base structure, but the resulting structure is not dynamically updatable in turn, it follows that this technique cannot be applied twice: no analogous transformation will construct a three-dimensional structure capable of counting the total weight of an octant `{x < x0, y < y0, z < z0}`. I know of no efficient way to do that.

The structure as described above uses `O(N log N)` storage. Many algorithms using `O(N log N)` time are considered efficient (e.g. sorting), but space is generally more expensive than time, and `O(N log N)` space is larger than you think!

## 6. An application: `agedu`

The application for which I developed this data structure is `agedu`, a program which scans a file system and indexes pathnames against last-access times (‘atimes’, in Unix terminology) so as to be able to point out directories which take up a lot of disk space but whose contents have not been accessed in years (making them strong candidates for tidying up or archiving to save space).

So the fundamental searching primitive we want for `agedu` is ‘What is the total size of the files contained within some directory path `Ptop` which have atime at most `T0`?’

We begin by sorting the files into order by their full pathname. This brings every subdirectory, at every level, together into a contiguous sequence of files. So now our query primitive becomes ‘What is the total size of files whose pathname falls between `P0` and `P1`, and whose atime is at most `T0`?’

We can reduce this to the same type of quadrant query as discussed above by splitting it into two subqueries: the total size of files with `P0 <= P < P1` and `T < T0` is simply the total size of files with `P < P1, T < T0` minus the total size of files with `P < P0, T < T0`. Each of those subqueries is of precisely the type we have just built a data structure to answer.
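The subtraction step is trivial to write down. In the sketch below, `quad` is a hypothetical stand-in for the quadrant-count structure described above: any function returning the total size of files with pathname before `P` and atime before `T`.

```python
def size_in_range(quad, p0, p1, t0):
    """Total size of files with p0 <= pathname < p1 and atime < t0,
    expressed as the difference of two quadrant queries."""
    return quad(p1, t0) - quad(p0, t0)
```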

We therefore want to sort our files by two ‘coordinates’: one is atime, and the other is pathname (in ASCII collation order). So which should be `x` and which `y`, in the above notation?

Well, either way round would work in principle, but two criteria inform the decision. Firstly, `agedu` typically wants to do many queries on the same pathname for different atimes, so as to build up a detailed profile of size against atime for a given subdirectory. So it makes sense to have the first-level lookup (on `y`, to find a tree root) be done on pathname, and the secondary lookup (on `x`, within that tree) be done on atime; then we can cache the tree root found in the first lookup, and use it many times without wasting time repeating the pathname search.

Another important point for `agedu` is that not all tree roots are actually used: the only pathnames ever submitted to a quadrant search are those at the start or the end of a particular subdirectory. This allows us to save a lot of disk space by limiting the copy-on-write behaviour: instead of never modifying an existing tree node, we now rule that we may modify a tree node if it has already been modified once since the last tree root we saved. In real-world cases, this cuts down the total space usage by about a factor of five, so it's well worthwhile – and we wouldn't be able to do it if we'd used atime as the `y`-coordinate instead of pathname.

Since the ‘`y`-coordinates’ in `agedu` are strings, the top-level lookup to find a tree root is most efficiently done using neither a balanced tree nor a sorted list, but a trie: tries allow lookup of a string in time proportional to the length of the string, whereas either of the other approaches would require `O(log N)` compare operations each of which could take time proportional to the length of the string.
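A minimal trie for this top-level lookup might look like the following: each node maps one character to a child, so looking up a pathname costs time proportional to the pathname's length rather than `O(log N)` full string comparisons. (`agedu`'s on-disk trie is more elaborate than this; the sketch and its names just illustrate the shape.)

```python
class Trie:
    def __init__(self):
        self.children = {}   # char -> Trie
        self.value = None    # e.g. a tree root stored at this exact key

    def insert(self, key, value):
        node = self
        for ch in key:
            node = node.children.setdefault(ch, Trie())
        node.value = value

    def lookup(self, key):
        """Descend one character at a time; None if key is absent."""
        node = self
        for ch in key:
            node = node.children.get(ch)
            if node is None:
                return None
        return node.value
```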

Finally, the two-dimensions limitation on the above data structure unfortunately imposes limitations on what `agedu` can do. One would like to run a single `agedu` as a system-wide job on a file server (perhaps nightly or weekly), and present the results to all users in such a way that each user's view of the data was filtered to only what their access permissions permitted them to see. Sadly, to do this would require a third dimension in the data structure (representing ownership or access control, in some fashion), and hence cannot be done efficiently. `agedu` is therefore instead most sensibly used on demand by an individual user, so that it generates a custom data set for that user every time.