06
May 08

Implementing HAT-Tries in Java

As I said in an earlier post detailing the CCHashTable, it formed a part of a larger data structure, the HAT-Trie. The HAT-Trie is a recent variant on the trie data structure, published by Askitis and Sinha at last year’s Australasian Computer Science Conference (the original paper is here). The problem they were addressing was that while the trie is very fast, it’s also very expensive in memory; and while some variants of it, like the radix trie (or Patricia trie) and the related burst-trie, are cheaper in memory terms, they aren’t as fast as they could be because they don’t take CPU caches into account in their design.

Consider for a moment the standard trie node, with its array of pointers to all possible following characters. Think of a string of these nodes, representing a character string of some type, and how the terminal node then stores a link to a piece of information which the overall character string indexes (there are obviously a lot of applications for such a data structure). The obvious problem is that the longer the string, the more nodes, and the nodes are large – even for standard ASCII, that’s 128 pointers per node, or 512 bytes per node with 32-bit pointers, which means that to store ten characters you’re looking at over 5KB. And few strings used in such applications are that short – certainly in the application I was looking at originally, we were dealing with strings ten times that size at least. The burst-trie and the radix trie (and its predecessor the compact trie) all attempt to solve this problem through similar methods involving compressing parts of the trie. The compact trie looks at all the branches of the trie that lead straight to a terminal node and represents each with a single node (in other words, it takes the unique endings of the strings stored in the trie and compresses those). The radix trie takes this a step further: noting that most strings being indexed share similar chunks throughout, not just at the ends, it compresses these chunks from multiple trie nodes into one node.
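To make that cost concrete, here’s a minimal sketch of a standard ASCII trie (the names and structure are my own illustration, not code from the paper): every character of every stored string gets a node, and every node carries a full 128-slot child array, most of it empty.

```java
// A minimal standard trie over 7-bit ASCII. Every node carries a
// 128-entry child array, which is where the memory cost comes from:
// with 32-bit references that's 512 bytes per node, one node per character.
public class AsciiTrie {
    private static final int FANOUT = 128;

    private final AsciiTrie[] children = new AsciiTrie[FANOUT];
    private Integer id; // non-null only on terminal nodes

    public void add(String key, int id) {
        AsciiTrie node = this;
        for (int i = 0; i < key.length(); i++) {
            int c = key.charAt(i);
            if (node.children[c] == null) {
                node.children[c] = new AsciiTrie();
            }
            node = node.children[c];
        }
        node.id = id;
    }

    public Integer find(String key) {
        AsciiTrie node = this;
        for (int i = 0; i < key.length(); i++) {
            node = node.children[key.charAt(i)];
            if (node == null) {
                return null;
            }
        }
        return node.id;
    }
}
```

Storing a single ten-character string allocates ten of these nodes, each dominated by its mostly-empty child array – exactly the waste the compact, radix and burst variants go after.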

The burst-trie takes a slightly different approach, in that it compresses chunks from the trie by grouping together those strings with the same chunks at the start of the string, so that the trie works as a partial index pointing to a secondary data structure that stores the rest of the string and its associated data. Heinz used linked lists when documenting this data structure, but linked lists take a performance hit when caching is taken into account, because their memory accesses can’t be predicted reliably by the cache, even though things like moving the last accessed element to the front of the list can help.
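For contrast, a container of the kind Heinz’s burst-trie used might look like this move-to-front linked list (my own sketch, not Heinz’s code) – note how every lookup chases node references scattered across the heap, which is exactly what the cache can’t prefetch:

```java
import java.util.Objects;

// A move-to-front linked list, the kind of container the original
// burst-trie work hung off each trie leaf. Each find() walks
// pointer-to-pointer through the heap, so the CPU cache can't
// predict the next access -- the weakness the HAT-Trie targets.
public class MtfList {
    private static final class Node {
        final String key;
        final int id;
        Node next;
        Node(String key, int id, Node next) { this.key = key; this.id = id; this.next = next; }
    }

    private Node head;

    public void add(String key, int id) {
        head = new Node(key, id, head); // newest entry at the front
    }

    // Linear search; on a hit, unlink the node and move it to the
    // front so frequently queried strings are found quickly.
    public Integer find(String key) {
        Node prev = null;
        for (Node n = head; n != null; prev = n, n = n.next) {
            if (Objects.equals(n.key, key)) {
                if (prev != null) {
                    prev.next = n.next;
                    n.next = head;
                    head = n;
                }
                return n.id;
            }
        }
        return null;
    }
}
```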

Enter the HAT-Trie. The idea is simple enough – a burst-trie, but using a cache-conscious data structure, in this case the earlier-mentioned CCHashTable. Implementation is somewhat complicated in that the actual algorithm starts with a single CCHashTable node, which, on reaching a set threshold for the number of strings contained (chosen to keep the number of hash table collisions to a minimum), bursts into a single root HAT-Trie node with a number of new CCHashTable nodes attached to it. That bit of jiggery-pokery should be handled by a third class, a trie manager of sorts, but that bit of refactoring has yet to be done. For now, there’s a bit of icky code in the CCHashTable class that knows about the HAT-Trie class. Ah, the crappy code we write under deadlines…
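That missing trie manager could look something like the following sketch – entirely my own guess at the refactor, with a plain HashMap standing in for the CCHashTable so the example is self-contained. It starts as a single flat container and, past a threshold, bursts into one child container per leading character:

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of the "trie manager" idea: start with one flat container,
// and once it crosses a size threshold, burst -- partition the strings
// by first character into one child container per leading character.
// HashMap stands in for the CCHashTable here; keys are assumed non-empty.
public class BurstingIndex {
    private final int burstThreshold;
    private Map<String, Integer> root = new HashMap<>();       // pre-burst
    private Map<Character, Map<String, Integer>> burst = null; // post-burst

    public BurstingIndex(int burstThreshold) {
        this.burstThreshold = burstThreshold;
    }

    public void add(String key, int id) {
        if (burst == null) {
            root.put(key, id);
            if (root.size() > burstThreshold) {
                doBurst();
            }
        } else {
            child(key.charAt(0)).put(key.substring(1), id);
        }
    }

    public Integer find(String key) {
        if (burst == null) {
            return root.get(key);
        }
        Map<String, Integer> c = burst.get(key.charAt(0));
        return c == null ? null : c.get(key.substring(1));
    }

    // Split the single container into per-first-character children,
    // storing each string's tail under its leading character.
    private void doBurst() {
        burst = new HashMap<>();
        for (Map.Entry<String, Integer> e : root.entrySet()) {
            child(e.getKey().charAt(0)).put(e.getKey().substring(1), e.getValue());
        }
        root = null;
    }

    private Map<String, Integer> child(char c) {
        return burst.computeIfAbsent(c, k -> new HashMap<>());
    }
}
```

The real thing bursts into a HAT-Trie node with CCHashTable children rather than nested HashMaps, but the control flow – fill, hit threshold, repartition by first character – is the shape of it.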

The actual HAT-Trie object’s data is simple enough:

public class HATTrieNode {
    private HATTrieNode parent;
    private Object children[];
    private int id;

The reason for the children being Object references is that they could be other HATTrieNodes or CCHashTable objects. And id is the piece of information being indexed by the input string. Storing a string is a relatively straightforward recursive function:

public boolean addString(char inputString[], int id) {
    if (inputString.length > 1) {
        char newString[] = new char[inputString.length-1];
        System.arraycopy(inputString, 1, newString, 0, inputString.length-1);
        Object child = this.children[(int)inputString[0]];
        if (child != null) {
            if (child instanceof HATTrieNode) {
                return ((HATTrieNode)child).addString(newString, id);
            } else if (child instanceof CCHashTable) {
                return ((CCHashTable)child).addString(newString, id);
            }
        }
        return false;

which handles the terminal nodes like so:

         
    } else if (inputString.length == 1) {
        char newString[] = new char[0];
        Object child = this.children[(int)inputString[0]];
        if (child != null) {
            if (child instanceof HATTrieNode) {
                return ((HATTrieNode)child).addString(newString, id);
            } else if (child instanceof CCHashTable) {
                return ((CCHashTable)child).addString(newString, id);
            }
        }

And there’s some additional sanity checking boilerplate as well. Checking for a node then just involved running through the trie, again recursively, with code very similar to the above. The bursting function is the icky part – it requires the CCHashTable to know how to burst itself and stitch the parts back together using a new HAT-Trie node, which means that the CCHashTable and HATTrieNode classes are very, very tightly coupled. Which is a big OO no-no, and is in dire need of refactoring. The basic idea is to create a new HAT-Trie node, tie it to the parent node of the CCHashTable being burst, then burst the CCHashTable and link all the new CCHashTables created to the HAT-Trie node, like so:

public HATTrieNode burst() {
    HATTrieNode originalParent = this.parent;
    HATTrieNode newNode = new HATTrieNode(originalParent);
    StringBuffer tmp = new StringBuffer(8192);
    int j = 0;
    int length = 0;
    int id = 0;
    for (int i = 0; i < CCHashTable.tableSize; i++) {
        j = 0;
        while (j < this.contents[i].length) {
            if (this.contents[i][j] == '\u0000') {
                break;
            }
            // first decode the length of the string (packed into two chars)
            length = (int) (this.contents[i][j++] | (this.contents[i][j++] << 16));
            for (; this.contents[i][j] != '\u0000'; j++) {
                tmp.append(this.contents[i][j]);
            }
            j++;
            id = (int) (this.contents[i][j++] | (this.contents[i][j++] << 16));
            if (newNode.addString(tmp.toString().toCharArray(), id)) {
                tmp.setLength(0); // empty the StringBuffer for re-use
            } else {
                System.out.println("Error while bursting");
                return null;
            }
        }
    }
    return newNode;
}

This bursting code isn’t good design or good practice, though. In fact, I’m really not happy with the design here at all. However, like the steam-power systems on a nuclear aircraft carrier, it’s a kludge that works.
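As an aside, the two-char decode in burst() is just reassembling a 32-bit int that was split into two 16-bit chars when the entry was written. The round trip looks like this (a standalone illustration of the encoding, not the actual CCHashTable write path):

```java
// How a 32-bit int fits into two 16-bit chars: low half in the first
// char, high half in the second -- matching the decode in burst(),
// which ORs contents[i][j] with contents[i][j+1] << 16.
public class CharPacking {
    public static char[] pack(int value) {
        return new char[] {
            (char) (value & 0xFFFF),          // low 16 bits
            (char) ((value >>> 16) & 0xFFFF)  // high 16 bits
        };
    }

    public static int unpack(char lo, char hi) {
        // chars are zero-extended to int, so hi << 16 restores the top half
        return lo | (hi << 16);
    }
}
```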


29
Mar 08

Cache conscious hash tables

So one of the things I was working on as part of DeviceAtlas (but which ultimately didn’t get used) was a cache-conscious hash table in Java. It’s not unique in design – in fact it comes right out of a research paper written for the SPIRE 2005 conference by Nikolas Askitis and Justin Zobel – but the implementation was interesting to me as I’d not done optimisation work in Java in a while, and some things had changed quite a bit since I last wrote Java code. And it was a bit of an ego boost that I got it to outperform java.util.HashMap:

SUN HashMap fill: 57797 us
SUN HashMap query: 165701 us, 0 errors
CCHashTable fill (fair): 23205 us
CCHashTable query (fair): 35513 us, 0 errors
CCHashTable fill: 41723 us
CCHashTable query: 43055 us, 0 errors

Of course, there are the minor criticisms that it’s nowhere near as general-purpose as the HashMap class and that HashMap is arguably exhibiting an intelligent design choice rather than cheating per se, but I like my ego so I’m going to ignore those arguments!
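For the curious, the numbers above came from something along these lines – my reconstruction of the shape of the harness, not the original code, and since CCHashTable isn’t reproduced here only the HashMap side runs:

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of the kind of microbenchmark behind the numbers above:
// fill a map with generated keys, then query them all back, timing
// each phase with System.nanoTime(). (A serious benchmark would add
// warm-up runs and repeated trials; this is just the bare shape.)
public class HashMapBench {
    public static void main(String[] args) {
        final int n = 100_000;
        String[] keys = new String[n];
        for (int i = 0; i < n; i++) {
            keys[i] = "key-" + i;
        }

        Map<String, Integer> map = new HashMap<>();

        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            map.put(keys[i], i);
        }
        long fillUs = (System.nanoTime() - t0) / 1000;

        int errors = 0;
        t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            Integer v = map.get(keys[i]);
            if (v == null || v != i) {
                errors++;
            }
        }
        long queryUs = (System.nanoTime() - t0) / 1000;

        System.out.println("SUN HashMap fill: " + fillUs + " us");
        System.out.println("SUN HashMap query: " + queryUs + " us, " + errors + " errors");
    }
}
```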



16
Jan 08

Sun buys MySQL

So Sun has splurged some $800 million in cash and taken up $200 million in options to purchase MySQL AB. It’s somewhat of an odd move, really. I mean, Sun’s got a decent reputation for open source stuff (not always Linux-friendly, but “open source” does not mean “Linux” after all). Java is now GPL’d, OpenSolaris as well, there’s code finding its way from Sun to Linux at the kernel level, and there are other examples. But MySQL?

About the only thing that comes to mind right now is that we might see a push from Sun in the coming years away from the standard LAMP stack and towards OTMS/STMJ stacks (Opensolaris/Tomcat/MySQL/Servlet or Solaris/Tomcat/MySQL/JSP or some suitable combination) so we’d have a Sun-friendly stack looking for a piece of LAMP’s market share.

It hardly seems logical. PHP versus Servlet/JSP isn’t a sensible comparison. If you look at development time, PHP is so far ahead of JSP/Servlets that it’s not even a competition any more; and likewise if you look at runtimes, PHP is so far behind that it’s just not fair. There’s just no room for a particular stack to get itself chosen over the other if it’s not suitable. But then, that’s what a huge marketing department is for, right?