Radix trees are one of my favorite data structures, and widely underused. For many "dictionary" type lookups, they can be faster and more efficient than hash tables. While hash tables are commonly described as being O(1) for lookup, this ignores the need to first hash the input, which is typically an O(K) operation, where K is the length of the input string. Radix trees do lookups in O(K) without hashing first, and they have much better cache locality. They also preserve key ordering, allowing you to do ordered scans, get min/max values, scan by shared prefix, and more.
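Here is a minimal sketch of those operations using the hashicorp/go-immutable-radix library mentioned below. The `Insert`, `Get`, and `Root().LongestPrefix` calls match its documented API; the `Minimum`, `Maximum`, and `WalkPrefix` calls on the root node reflect my reading of that API and may need adjusting against the current version.

```go
package main

import (
	"fmt"

	iradix "github.com/hashicorp/go-immutable-radix"
)

func main() {
	// Build a small tree. Insert returns a new tree because the
	// structure is immutable; we just keep the latest version.
	r := iradix.New()
	r, _, _ = r.Insert([]byte("foo"), 1)
	r, _, _ = r.Insert([]byte("foobar"), 2)
	r, _, _ = r.Insert([]byte("zip"), 3)

	// O(K) lookup: the tree is walked byte by byte, no hashing step.
	if v, ok := r.Get([]byte("foo")); ok {
		fmt.Println("foo =", v)
	}

	// Ordering is preserved, so min/max keys are cheap to find.
	minKey, _, _ := r.Root().Minimum()
	maxKey, _, _ := r.Root().Maximum()
	fmt.Printf("min=%s max=%s\n", minKey, maxKey)

	// Visit every key sharing the "foo" prefix, in sorted order.
	r.Root().WalkPrefix([]byte("foo"), func(k []byte, v interface{}) bool {
		fmt.Printf("%s => %v\n", k, v)
		return false // false means keep walking
	})
}
```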
If you combine them with an immutable approach (such as https://github.com/hashicorp/go-immutable-radix), you can support more advanced concurrency, such as lock-free reads. This is important for highly concurrent systems that need scalable reads against shared memory.
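A rough sketch of that lock-free read pattern, again assuming the go-immutable-radix API above. Publishing the tree through an `atomic.Value` is my own illustration of one way to share it, not something the library dictates: each write produces a new tree and stores it, while readers load whatever version is current without taking a lock.

```go
package main

import (
	"fmt"
	"sync/atomic"

	iradix "github.com/hashicorp/go-immutable-radix"
)

func main() {
	// Publish the current tree version through an atomic.Value so
	// readers never block on a lock.
	var current atomic.Value
	current.Store(iradix.New())

	// Writer: build a new version and publish it. Readers that
	// already loaded the old version keep a consistent snapshot.
	t := current.Load().(*iradix.Tree)
	t, _, _ = t.Insert([]byte("foo"), 1)
	t, _, _ = t.Insert([]byte("foobar"), 2)
	current.Store(t)

	// Reader (safe from any goroutine): lock-free lookup against the
	// most recently published version.
	snap := current.Load().(*iradix.Tree)
	if v, ok := snap.Get([]byte("foo")); ok {
		fmt.Println("foo =", v)
	}
}
```

Writers still have to coordinate among themselves (a single writer goroutine or a mutex around the update), but readers never block, which is what makes reads scale across cores.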