Instead of fixing the hashmap bucket array at 127 entries, size it
dynamically, starting with a smaller array of 31. As soon as a fill
level of 75% is reached, quadruple the size, and so on.
This should significantly improve lookup time in large tables (from
O(n) back to amortized O(1)), and save memory on smaller tables (which
most are).
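
A minimal sketch of the grow step in C, assuming a chained hashmap;
the struct layout, field names, and hash_key helper here are
illustrative placeholders, not the project's actual types:

#include <stdlib.h>

/* Hypothetical chained-entry layout; real field names will differ. */
struct entry {
    const char *key;
    void *value;
    struct entry *next;   /* chaining within a bucket */
};

struct hashmap {
    struct entry **buckets;
    size_t nbuckets;      /* starts at 31 instead of a fixed 127 */
    size_t nentries;
};

static size_t hash_key(const char *key)
{
    /* djb2 string hash, only to keep the sketch self-contained */
    size_t h = 5381;
    while (*key)
        h = h * 33 + (unsigned char)*key++;
    return h;
}

/* Quadruple the bucket array once the fill level reaches 75%.
 * Called from the insert path after nentries is incremented. */
static void maybe_grow(struct hashmap *map)
{
    /* nentries / nbuckets < 0.75, kept in integer arithmetic */
    if (map->nentries * 4 < map->nbuckets * 3)
        return;

    size_t newsize = map->nbuckets * 4;
    struct entry **newbuckets = calloc(newsize, sizeof(*newbuckets));
    if (!newbuckets)
        return;   /* out of memory: the old table is still correct */

    /* Rehash every entry into the larger array. */
    for (size_t i = 0; i < map->nbuckets; i++) {
        struct entry *e = map->buckets[i];
        while (e) {
            struct entry *next = e->next;
            size_t b = hash_key(e->key) % newsize;
            e->next = newbuckets[b];
            newbuckets[b] = e;
            e = next;
        }
    }

    free(map->buckets);
    map->buckets = newbuckets;
    map->nbuckets = newsize;
}

Quadrupling (rather than the more common doubling) means fewer rehash
passes at the cost of a larger worst-case overshoot in memory; the
integer comparison above avoids floating point on the insert path.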