Binary search trees are used quite often for storing and finding values. Like binary search, they essentially work by keeping the items sorted.
In this post I will describe a search tree that does not require that the items be sorted. Hence, the tree can support some interesting queries. The queries will always be correct, but they will only be fast in some cases.
Usually, to make searching fast, each branch in a search tree stores information that helps to decide whether to go left or right. But if we want to be able to construct a tree for any possible type of query, then that is not always possible. Instead, we can still aim to eliminate large parts of the search space, by storing bounds.
Suppose we have a tree that stores integers, and we want to find the first item in the tree that is greater than or equal to some query integer. In each branch of the tree, we could store the maximum of all values in that subtree. Call it the upper bound of the subtree. If this upper bound is less than the query, then we can eliminate the entire subtree from consideration.
Now let's generalize that. Taking the maximum value is an example of a semilattice operation. That is just a fancy way of saying that for a pair of values we can get some kind of bound. As a typeclass it looks like
class Semilattice a where
  meet :: a -> a -> a
  -- Laws: meet is associative, commutative and idempotent:
  --   meet a (meet b c) = meet (meet a b) c
  --   meet a b = meet b a
  --   meet a a = a
The queries we perform on the tree should of course work together with the bounds. That means that if the bound for a branch in the tree doesn't satisfy the query, then none of the values in that subtree do. In Haskell terms:
class Semilattice a => Satisfy q a | q -> a where
  satisfy :: q -> a -> Bool
  -- Law: satisfy q a || satisfy q b  ==>  satisfy q (meet a b)
Note that a semilattice always gives a partial order, and hence a satisfy function by
satisfy q a = meet q a == a
because
    satisfy q a || satisfy q b
<=> meet q a == a || meet q b == b
==> meet (meet q a) b == meet a b || meet a (meet q b) == meet a b
<=> meet q (meet a b) == meet a b || meet q (meet a b) == meet a b
<=> meet q (meet a b) == meet a b
<=> satisfy q (meet a b)
However, I keep the distinction between the query and value type for more flexibility and for more descriptive types.
Given the Satisfy and Semilattice typeclasses, the search tree data structure is straightforward. A search tree can be empty, a single value, or a branch. In each branch we store the bound of that subtree.
data SearchTree a
  = Empty
  | Leaf !a
  | Branch !a (SearchTree a) (SearchTree a)
  deriving (Show)

bound :: SearchTree a -> a
bound (Leaf a)       = a
bound (Branch a _ _) = a
bound Empty          = error "bound Empty"
If we have a SearchTree, then we can find the first element that satisfies a query, simply by searching both sides of each branch. The trick to making the search faster is to only continue as long as the bound satisfies the query:
-- Find the first element in the tree that satisfies the query
findFirst :: Satisfy q a => q -> SearchTree a -> Maybe a
findFirst q (Leaf a)       | satisfy q a = Just a
findFirst q (Branch a x y) | satisfy q a = findFirst q x `mplus` findFirst q y
findFirst _ _ = Nothing
Completely analogously, we can find the last satisfied item instead:
-- Find the last element in the tree that satisfies the query
findLast :: Satisfy q a => q -> SearchTree a -> Maybe a
findLast q (Leaf a)       | satisfy q a = Just a
findLast q (Branch a x y) | satisfy q a = findLast q y `mplus` findLast q x
findLast _ _ = Nothing
Or we can even generalize this search to any Monoid, where the above correspond to the First and Last monoids respectively. I will leave this as an exercise for the reader.
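One possible shape of that generalization, as a sketch (the name findWith and the definitions in the trailing comments are mine, not from the post):

import Data.Monoid (First(..), Last(..))

-- Combine the contributions of all satisfying elements into any Monoid,
-- in tree order
findWith :: (Satisfy q a, Monoid m) => (a -> m) -> q -> SearchTree a -> m
findWith f q (Leaf a)       | satisfy q a = f a
findWith f q (Branch b x y) | satisfy q b = findWith f q x <> findWith f q y
findWith _ _ _ = mempty

-- findFirst q = getFirst . findWith (First . Just) q
-- findLast  q = getLast  . findWith (Last  . Just) q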
Each tree is built up out of branches. We will always construct branches with a smart constructor that computes the bound as the meet of the bounds of its two arguments. That way, the stored bound is always correct.
mkBranch :: Semilattice a => SearchTree a -> SearchTree a -> SearchTree a
mkBranch Empty y = y
mkBranch x Empty = x
mkBranch x y     = Branch (bound x `meet` bound y) x y
A search will always take time at least linear in the depth of the tree. So, for fast searches we need a balanced tree, where each subtree has roughly the same size. Here is arguably the most tricky part of the code, which converts a list to a balanced search tree.
-- /O(n*log n)/
-- Convert a list to a balanced search tree
fromList :: Semilattice a => [a] -> SearchTree a
fromList []  = Empty
fromList [x] = Leaf x
fromList xs  = mkBranch (fromList ys) (fromList zs)
  where (ys,zs) = splitAt (length xs `div` 2) xs
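As a quick sanity check of the balance claim, a small helper like the following can measure the depth (this function is my addition, not part of the post's code):

-- Depth of a search tree. For fromList of n elements the depth grows
-- logarithmically in n, because each recursive call splits the list in half.
depth :: SearchTree a -> Int
depth Empty          = 0
depth (Leaf _)       = 1
depth (Branch _ x y) = 1 + max (depth x) (depth y)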
And that's it. I use this data structure for finding rectangles (more about that in a future post), and there I only needed to build the search structure once, and use it multiple times. So, in this post I am not going to talk about updates at all. If you wanted to do updates efficiently, then you would need to worry about updating bounds, rebalancing etc.
Here is an example of the search tree in action. The query will be to find a value (>= q) for a given q. The bounds will be maximum values.
newtype Max a = Max { getMax :: a } deriving (Show)

instance Ord a => Semilattice (Max a) where
  meet (Max a) (Max b) = Max (max a b)

newtype Ge a = Ge a deriving (Show)

instance Ord a => Satisfy (Ge a) (Max a) where
  satisfy (Ge q) = (>= q) . getMax
First, check the satisfy law:
    satisfy (Ge q) (Max a) || satisfy (Ge q) (Max b)
<=> a >= q || b >= q
<=> if a >= b then a >= q else b >= q
<=> (if a >= b then a else b) >= q
<=> max a b >= q
<=> satisfy (Ge q) (Max (max a b))
<=> satisfy (Ge q) (meet (Max a) (Max b))
So indeed, satisfy q a || satisfy q b ==> satisfy q (meet a b). And this bound is in fact tight, so also the other way around satisfy q (meet a b) ==> satisfy q a || satisfy q b. This will become important later.
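The same law can also be stated as a QuickCheck property, if you want machine-checked evidence in addition to the derivation above (a sketch, specialized to Int and assuming the instances above are in scope):

import Test.QuickCheck

-- The Satisfy law for Ge/Max: whenever either value satisfies the query,
-- so does their meet. Run with: quickCheck prop_satisfyLaw
prop_satisfyLaw :: Int -> Int -> Int -> Property
prop_satisfyLaw q a b =
  (satisfy (Ge q) (Max a) || satisfy (Ge q) (Max b))
    ==> satisfy (Ge q) (meet (Max a) (Max b))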
Now here are some example queries:
λ> findFirst (Ge 3) (fromList $ map Max [1,2,3,4,5])
Just (Max 3)
λ> findFirst (Ge 3) (fromList $ map Max [2,4,6])
Just (Max 4)
λ> findFirst (Ge 3) (fromList $ map Max [6,4,2])
Just (Max 6)
λ> findFirst (Ge 7) (fromList $ map Max [2,4,6])
Nothing
Semilattices and queries can easily be combined into tuples. For a tree of pairs, and queries of pairs, you could use:
instance (Semilattice a, Semilattice b) => Semilattice (a,b) where
  meet (a,b) (c,d) = (meet a c, meet b d)

instance (Satisfy a b, Satisfy c d) => Satisfy (a,c) (b,d) where
  satisfy (a,c) (b,d) = satisfy a b && satisfy c d
Now we can answer not only questions like "What is the first/last/smallest element that is greater than some given query?", but also "What is the first/last/smallest element greater than a given query that also satisfies some other property?".
It's nice that we now have a search tree that always gives correct answers. But is it also efficient?
As hinted in the introduction, that is not always the case. First of all, meet could give a really bad bound. For example, if meet a b = Bottom for all a /= b, and Bottom satisfies everything, then we really can do no better than a brute force search.
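To make that concrete, here is a sketch of such a worst-case instance (the names Flat, Bottom and Is are mine): any two distinct values meet to Bottom, and Bottom satisfies every query, so no subtree can ever be ruled out and the search degenerates to a brute-force scan of the leaves.

-- A 'flat' semilattice: distinct values have no informative common bound.
data Flat a = Exactly a | Bottom
  deriving (Eq, Show)

instance Eq a => Semilattice (Flat a) where
  meet x y
    | x == y    = x
    | otherwise = Bottom

-- Query for one exact value
data Is a = Is a

instance Eq a => Satisfy (Is a) (Flat a) where
  satisfy _      Bottom      = True
  satisfy (Is q) (Exactly a) = q == a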
On the other hand, suppose that meet gives 'perfect' information, like the Ge example above,
satisfy q (meet a b) ==> satisfy q a || satisfy q b
That is equivalent to saying that
not (satisfy q a) && not (satisfy q b) ==> not (satisfy q (meet a b))
Then for any Branch, we only have to search either the left or the right subtree, because if a subtree doesn't contain a satisfying value, we can see that from its bound. For a balanced tree, that means the search takes O(log n) time.
Another efficient case is when the items are sorted. By that I mean that, if an item satisfies the query, then all items after it also satisfy that query. We actually need something slightly more restrictive: namely that if a query is satisfied for the meet of some items, then all items after them also satisfy the query. In terms of code:
let st = fromList (xs1 ++ xs2 ++ xs3)
satisfy q (foldr1 meet xs2) ==> all (satisfy q) xs3
Now suppose that we are searching a tree of the form st = mkBranch a b with findFirst q. Then there are three cases:

1. satisfy q (bound st) does not hold.
2. satisfy q (bound st) holds, but satisfy q (bound a) does not.
3. satisfy q (bound a) holds.
In the first case the search fails, and we are done. In the second case, we only have to search b, which by induction can be done efficiently. The third case is not so clear. In fact, there are two sub-cases:

3a. findFirst q a returns a result.
3b. findFirst q a returns Nothing.
In case 3a we found something in the left branch. Since we are only interested in the first result, that means we are done. In case 3b, we get to use the fact that the items are sorted. Since we have satisfy q (bound a), that means that all items in b will satisfy the query. So when searching b, in all cases we take the left branch.
Overall, the search time will be at most twice the depth of the tree, which is O(log n).
The really cool thing is that we can combine the two conditions. If satisfy can be written as
satisfy q a == satisfy1 q a && satisfy2 q a
where satisfy1 has exact bounds, and the tree is sorted for satisfy2, then queries still take O(log n) time.
Finally, here is an example that makes use of efficient searching with the two conditions, using the Semilattice and Satisfy instances for pairs which I defined above.
treeOfPresidents :: SearchTree (Max Int, Max String)
treeOfPresidents = fromList
    [ (Max year, Max name) | (year,name) <- usPresidents ]
  where
  usPresidents =
    [(1789,"George Washington")
    ,(1797,"John Adams")
    ,(1801,"Thomas Jefferson")
    -- etc
The tree is ordered by year of election, and the Max semilattice gives tight bounds for names. So we can efficiently search for the first US president elected in or after 1850 whose name starts with a letter after "P":
λ> findFirst (Ge 1850,Ge "P") treeOfPresidents
Just (Max 1869,Max "Ulysses S. Grant")
And with the following query type we can search on just one of the elements of the tuple. Note that we need the type parameter in Any because of the functional dependency in the Satisfy class.
data Any s = Any

instance Semilattice s => Satisfy (Any s) s where
  satisfy _ _ = True
λ> findFirst (Ge 1911,Any) treeOfPresidents
Just (Max 1913,Max "Woodrow Wilson")
Comments
Perhaps worth mentioning that there are methods to apply some useful ordering even in multi-dimensional indexes in order to minimise the branching factor of the search. Space-filling curves in particular. There are generalisations of the Hilbert curve that are particularly effective at keeping spatially nearby items nearby in the linear sequence, leading to e.g. the Hilbert R-Tree.
R-Trees are basically multiway trees (B tree, B+ tree etc) but adapted to multi-dimensional indexing. That usually means no strict ordering of keys, but Hilbert R-Trees do have a strict order.
I don't really like fundeps, but you can get around the need for them in this case by using associated types. Satisfy then looks like:
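(a sketch, with an associated type I call Value)

class Satisfy q where
  type Value q                     -- the value type this query applies to
  satisfy :: q -> Value q -> Bool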
with, for example,
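(a sketch, reusing the Ge and Max types from the post)

instance Ord a => Satisfy (Ge a) where
  type Value (Ge a) = Max a
  satisfy (Ge q) = (>= q) . getMax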
Note that this removes the inheritance from SemiLattice (although do you need it in the first place?).
Steve: That's an interesting thought. You could probably store a two dimensional R-tree in a SearchTree if the elements are (min_x,max_x,min_y,max_y). Then the question is just what the optimal ordering would be. But can you actually get any worst-case guarantees for multidimensional indexes?
Simon: The fundep is not strictly necessary, but without it all the examples like

findFirst (Ge 3) (fromList $ map Max [1,2,3,4,5])
would require type annotations. I think your associated type solution is exactly equivalent. I believe you could add the SemiLattice superclass as well:
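Perhaps something along these lines (again a sketch, using the same Value associated type as above):

class Semilattice (Value q) => Satisfy q where
  type Value q
  satisfy :: q -> Value q -> Bool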
But I haven't checked whether ghc actually accepts this.
The reason that you 'need' the SemiLattice superclass is because of the law that

satisfy q a || satisfy q b  ==>  satisfy q (meet a b)
You can't state that without meet.
You *can* get some worst-case guarantees for multi-dimensional indexes, but I don't know that much about it. Your (min_x, max_x, min_y, max_y) *is* a kind of space-filling curve if you think in terms of appending all the bits to make a single large binary number, but it's a very bad one. By choosing a precedence-ordering of the 4 ordinates, you ensure that even small changes in the highest precedence chunk cause large jumps, which means searches tend to have a large branching factor to cope with that. The Hilbert curve approach is similar (possibly identical - I'm not clear on this) to interleaving all the bits and Gray-coding the result, so that small changes in any of those four values *usually* give small changes in the combined value.
The R* tree is IIRC provably optimal for certain requirements, which I don't remember. This doesn't impose a linear ordering at all - instead, it uses some heuristic to decide where in the tree each new item should be inserted. As usual, "optimal" is in some asymptotic and possibly amortized sense, and a simpler approach may be more efficient for smaller-scale problems.