No surprise there - linear search is O(n), binary search is O(log n) and a hash table is O(1). The differences you see at a given number of records come from the different constant factors: hashing the key has a certain cost, which is why TDict is slower up to a certain point. How large that cost is depends on the hash algorithm being used, on the indirections caused by extra function calls such as IEqualityComparer.GetHashCode, and on the memory layout of the hash table. You can even see that it is not strictly O(1): it gets a little slower the more records you have, which I would guess is due to the memory layout within the hash table - at a certain point memory access time kicks in because the data no longer fits in the fast cache (the access pattern of the benchmarking code matters there as well).
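Roughly the kind of thing such a benchmark boils down to - a minimal console sketch comparing a plain linear scan against TDictionary.TryGetValue. The record count, key range and loop counts are made up for illustration; the real benchmark's data and access pattern will shift the crossover point:

```pascal
program LookupTimingSketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils,
  System.Diagnostics,
  System.Generics.Collections;

const
  RecordCount = 100000;  // made-up sizes, adjust to match the real data
  LookupCount = 1000;

var
  Keys: TArray<Integer>;
  Dict: TDictionary<Integer, Integer>;
  SW: TStopwatch;
  I, J, Needle, Value, Hits: Integer;
begin
  SetLength(Keys, RecordCount);
  Dict := TDictionary<Integer, Integer>.Create(RecordCount);
  try
    for I := 0 to RecordCount - 1 do
    begin
      Keys[I] := I;
      Dict.Add(I, I); // filling the dictionary is where the build cost shows up
    end;

    // Linear search: O(n) per lookup, but no hashing overhead at all.
    Hits := 0;
    SW := TStopwatch.StartNew;
    for J := 1 to LookupCount do
    begin
      Needle := Random(RecordCount);
      for I := 0 to RecordCount - 1 do
        if Keys[I] = Needle then
        begin
          Inc(Hits);
          Break;
        end;
    end;
    Writeln(Format('linear:     %d ms (%d hits)', [SW.ElapsedMilliseconds, Hits]));

    // Hash lookup: O(1) amortized, plus the GetHashCode call and bucket
    // indirection mentioned above - that is the constant factor.
    Hits := 0;
    SW := TStopwatch.StartNew;
    for J := 1 to LookupCount do
      if Dict.TryGetValue(Random(RecordCount), Value) then
        Inc(Hits);
    Writeln(Format('dictionary: %d ms (%d hits)', [SW.ElapsedMilliseconds, Hits]));
  finally
    Dict.Free;
  end;
end.
```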
Without knowing the exact use case it's hard to give any suggestions - is the data filled once and then only looked up, is it changed afterwards (records added or removed), does the data within the records change? What's the typical lookup pattern - are records looked up randomly or in certain patterns (such as ID or Name sequences in a certain order)?
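For example, if it turns out to be "fill once, then only look up", a sorted array with TArray.BinarySearch is one alternative worth measuring against TDictionary - just a sketch with made-up integer keys, not a recommendation for your actual data:

```pascal
program SortedLookupSketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils,
  System.Generics.Collections;

var
  Keys: TArray<Integer>;
  I, FoundIndex: Integer;
begin
  // Hypothetical one-time fill; in the real case this would be the
  // records loaded at startup.
  SetLength(Keys, 1000);
  for I := 0 to High(Keys) do
    Keys[I] := I * 3;

  // Sort once after filling ...
  TArray.Sort<Integer>(Keys);

  // ... then every lookup is O(log n) over contiguous, cache-friendly
  // memory, with no hashing and no per-bucket indirection.
  if TArray.BinarySearch<Integer>(Keys, 42, FoundIndex) then
    Writeln(Format('found at index %d', [FoundIndex]))
  else
    Writeln('not found');
end.
```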
If you aim for maximum performance, a custom data structure and algorithm will always win over a standard one by a few percent - the question is whether you want to do all that work, including risking bugs, for those negligible few percent, or use standard data structures and algorithms that are proven to work and easy to apply.