
Unique hashCodes Are Not Enough to Avoid Collisions

10.15.2013

There is a common misconception that if you have a unique hashCode() you won't have collisions.  While unique, or almost unique, hashCodes are good, this is not the end of the story.

The problem is that the number of buckets in a HashMap is not unlimited (nor even 2^32).  This means the hashCode() value has to be reduced to a smaller number of bits.

The way HashMap, and thus HashSet and LinkedHashMap, works is to mutate the bits in the following manner

    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);


and then apply a mask of the lowest bits to select a bucket.  The problem is that even with unique hashCode()s, as Integer has, there will be values with different hash codes that map to the same bucket. You can research how Integer.hashCode() works ;)
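For illustration, this is roughly what the bucket selection looks like (indexFor is the name of the helper in the Java 7 HashMap source; Integer.hashCode() simply returns the wrapped int value, so the spread across buckets depends entirely on the mutation above and the mask):

    // A sketch of how a bucket is chosen: the power-of-two table length
    // is turned into a mask that keeps only the lowest bits of the hash.
    static int indexFor(int h, int length) {
        return h & (length - 1); // e.g. length 32 keeps only the lowest 5 bits
    }

    // Integer.hashCode() just returns the wrapped value itself:
    // public int hashCode() { return value; }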

public static void main(String[] args) {
    // collect the values whose mutated hash falls into bucket 0 of a 32-bucket table
    Set<Integer> integers = new HashSet<>();
    for (int i = 0; i <= 400; i++)
        if ((hash(i) & 0x1f) == 0)
            integers.add(i);
    // add the same values again, in reverse order
    Set<Integer> integers2 = new HashSet<>();
    for (int i = 400; i >= 0; i--)
        if ((hash(i) & 0x1f) == 0)
            integers2.add(i);
    System.out.println(integers);
    System.out.println(integers2);
}

// the mutation HashMap applies to every hashCode() before selecting a bucket
static int hash(int h) {
    // This function ensures that hashCodes that differ only by
    // constant multiples at each bit position have a bounded
    // number of collisions (approximately 8 at default load factor).
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}
This prints:
[373, 343, 305, 275, 239, 205, 171, 137, 102, 68, 34, 0]
[0, 34, 68, 102, 137, 171, 205, 239, 275, 305, 343, 373]

The entries are in the reverse order they were added, as the HashMap is acting as a linked list, placing all the entries into the same bucket.

Solutions?

A simple solution is to have a bucket turn into a tree instead of a linked list.  In Java 8, it will do this for String keys, but AFAIK this could be done for all Comparable types. Another approach is to allow custom hashing strategies so the developer can avoid such problems, or to randomize the mutation on a per-collection basis, amortizing the cost across the application.
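To make the second idea concrete, here is a minimal sketch of what a pluggable, per-collection hashing strategy could look like. HashingStrategy and SaltedIntegerStrategy are hypothetical names, not part of the JDK; the point is that the collection, rather than the key class, decides how the hash is mixed, and a per-collection random salt makes the bucket assignment unpredictable.

    // Hypothetical interface: the collection asks the strategy for a hash
    // instead of relying solely on the key's own hashCode().
    interface HashingStrategy<K> {
        int hash(K key);
        boolean equals(K a, K b);
    }

    // A sketch of a randomized strategy for Integer keys.
    class SaltedIntegerStrategy implements HashingStrategy<Integer> {
        private final int salt = new java.util.Random().nextInt();

        @Override
        public int hash(Integer key) {
            int h = key.hashCode() ^ salt;   // per-collection randomization
            h ^= (h >>> 20) ^ (h >>> 12);    // same spreading as HashMap
            return h ^ (h >>> 7) ^ (h >>> 4);
        }

        @Override
        public boolean equals(Integer a, Integer b) {
            return a.equals(b);
        }
    }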

Other notes

I would favour supporting 64-bit hash codes, especially for complex objects.  This has a very low chance of collision in the hash code itself and supports very large data structures well, e.g. into the billions of entries.
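As a rough illustration of the idea (LongHashable and the Point class below are my own sketch, not a JDK or library API), a 64-bit hash could be exposed alongside the usual 32-bit one and folded down only when the backing table is small:

    // Hypothetical interface for keys that can provide a 64-bit hash.
    interface LongHashable {
        long longHashCode();
    }

    class Point implements LongHashable {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }

        @Override
        public long longHashCode() {
            // combine the fields with a 64-bit odd multiplier to spread the bits
            return x * 0x9E3779B97F4A7C15L + y;
        }

        @Override
        public int hashCode() {
            // fold the 64-bit hash down for collections that only use 32 bits
            long h = longHashCode();
            return (int) (h ^ (h >>> 32));
        }
    }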
Published at DZone with permission of Peter Lawrey, author and DZone MVB. (source)


Comments

Vitalii Tymchyshyn replied on Fri, 2013/10/18 - 7:49am

Well, I don't think a 64-bit hashcode would help here, as the hashmap would still have the same number of buckets.

Also, I don't want HashMap to start calling custom compareTo code all of a sudden. This can lead to many problems, starting with performance ones.

An interesting option would be to make a btree from the full hash codes to the values. This would help searches a lot, while not adding more calculations.

Another option is a second level of buckets with a different hash function.
