Yes, thanks, I had this in mind, but it was initially just a test: I understand the performance trade-off and look forward to switching to a new algorithm with fewer values. Besides, it is not recommended to have so many lookup values for unique mapping; segmented mapping should be used instead.
This cannot be the algorithm going forward; I was just wondering why such an issue came up with the current number of values.
The only thing to add is that I have multiple copies of the same DB that are masked using this algorithm.