It's legal ageism. Asking questions whose answers will be fresh in the minds of recent CS graduates is a way to legally screen out everyone else, no matter how irrelevant the Big-O runtime of merge sort is to the job.
Personally, I always have to wonder how often someone programs a computer if they don't know the asymptotic complexity of a merge sort. It seems like something someone would either pick up at some point in their career, or be able to figure out on their own at the interview.
(Come to think of it, I never took an algorithms class in school, and I know the answer to this question.)
What's your definition of "programming a computer"? Personally, I always have to wonder how often someone programs a computer if they don't know how a four-state branch predictor helps their CPU.
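For readers who haven't met it, the "four-state branch predictor" mentioned here is usually the classic two-bit saturating counter: it takes two consecutive mispredictions to flip the prediction, so a single anomaly (like a loop's final exit) doesn't derail an otherwise stable pattern. A minimal sketch of the idea (names and the driving loop are my own, for illustration):

```java
// Sketch of a four-state ("two-bit saturating counter") branch predictor.
// States 0,1 predict NOT TAKEN; states 2,3 predict TAKEN.
public class TwoBitPredictor {
    private int state = 2; // start in "weakly taken"

    public boolean predict() { return state >= 2; }

    public void update(boolean taken) {
        // Saturate at the ends instead of wrapping around.
        if (taken)  state = Math.min(3, state + 1);
        else        state = Math.max(0, state - 1);
    }

    public static void main(String[] args) {
        TwoBitPredictor p = new TwoBitPredictor();
        int correct = 0;
        // A loop branch taken 9 times, then not taken once at loop exit:
        for (int i = 0; i < 10; i++) {
            boolean taken = i < 9;
            if (p.predict() == taken) correct++;
            p.update(taken);
        }
        System.out.println(correct + "/10 correct"); // 9/10 on this pattern
    }
}
```

The point of the two bits is exactly that 9/10: a one-bit predictor would mispredict twice per loop (once at exit, once re-entering), while the saturating counter only pays once.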
The key insight behind the analysis of merge sort is that solving problems with computers almost always goes better if you divide them in half a few times. This comes up again and again: binary trees, binary search, and all the n log n sorts. Branch prediction is a little different; it's an optimization engineered into modern processors. Splitting problems in two, by contrast, is a fundamental property of the Universe, so it's less excusable not to realize how often you do it.
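The halving insight can be made concrete with a short merge sort sketch (my own illustrative code, not anything from the thread): each recursive level halves the input, so there are about log2(n) levels, and each level does O(n) merging work, which is where n log n comes from.

```java
import java.util.Arrays;

// Minimal merge sort: ~log2(n) levels of halving, O(n) merge work per level,
// hence O(n log n) overall.
public class MergeSortSketch {
    static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a;                 // base case: already sorted
        int mid = a.length / 2;                      // divide in half
        int[] left  = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        // Merge the two sorted halves: linear work at this level.
        int[] out = new int[a.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)
            out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length)  out[k++] = left[i++];
        while (j < right.length) out[k++] = right[j++];
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{5, 2, 8, 1, 9, 3})));
        // prints [1, 2, 3, 5, 8, 9]
    }
}
```

The same halving structure is what makes binary search O(log n): you can often read off the complexity just by counting how many times n gets cut in two.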
I concede that your example is more applicable. Perhaps something about the L1/L2/L3 caches and memory would have been a more suitable example on my part.
I'm still not fully on board with your general claim, though. Someone building a brilliantly simple, computationally undemanding Android calendar/planning/todo application might not care about dividing in half when all they have to do is call Collections.sort(). Someone building a CRUD web app might find their most significant performance bottleneck in HTTP throughput or latency. You'll be wondering how often they program a computer.