Unfortunately, none of our attempts to sort k's by such measures worked well.
I believe you have not tried an efficient implementation yet.
Theory (i.e. computational complexity, of which counting iterations is a simplified version) and reality (measured speed) are quite different for "small" factoring algorithms.
After all, this is the only reason the Lehman and Hart algorithms have a right to exist; otherwise NFS would wipe out all other algorithms at every number scale.
If you try to optimize Hart/Lehman for small n, say n <= 60 bits, you will realize that instructions that are very fast at that scale can have a big, even decisive, impact on overall performance. Complexity theory is often not fine-grained enough to capture such effects. Storing the square roots of k (or of k*m) is one such point: without that table you will not get a really fast Hart/Lehman implementation. Using congruences is a theoretical improvement, but it is also necessary in practice.
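To make the point concrete, here is a minimal sketch of Hart's one-line factorization that precomputes sqrt(k) once and reuses it, so the inner loop only multiplies by the stored value instead of taking a fresh square root of k*n each iteration. This is an illustrative sketch, not a tuned implementation: it omits the multiplier and the congruence-based adjustments of a that a fast version would use, and the iteration bound is an arbitrary choice.

```python
import math

def hart_olf(n, max_k=100000):
    """Hart's one-line factoring with a precomputed sqrt(k) table.

    Returns a nontrivial factor of n, or None if none is found
    within max_k iterations. Assumes n is an odd composite that
    is not a perfect square.
    """
    # Precompute sqrt(k) once; in a tuned C/Java version this table
    # is what keeps the inner loop cheap.
    sqrt_k = [math.sqrt(k) for k in range(max_k + 1)]
    sqrt_n = math.sqrt(n)
    for k in range(1, max_k + 1):
        # a = ceil(sqrt(k*n)), computed from the stored sqrt(k)
        a = int(sqrt_k[k] * sqrt_n)
        # correct for floating-point rounding in both directions
        while a * a < k * n:
            a += 1
        while a > 1 and (a - 1) * (a - 1) >= k * n:
            a -= 1
        # test whether a^2 - k*n is a perfect square b^2
        t = a * a - k * n
        b = math.isqrt(t)
        if b * b == t:
            g = math.gcd(a - b, n)
            if 1 < g < n:
                return g
    return None
```

For example, hart_olf(11111) finds a factor at k = 7, where a = 279 and 279^2 - 7*11111 = 64 = 8^2, giving gcd(279 - 8, 11111) = 271.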
You can disprove my assumption that you did not try an efficient implementation yet by posting a fast piece of software ;)
