So you buy the exact same generation of Intel and AMD chips for your developers as for your servers and your customers? And encode that requirement into your development process going forward?
No? That would be ridiculous. You’re inventing dumb scenarios to make your argument work.
It’s more like: some organizations buy many of the same model of server, make one or two of them their build machines, and use the rest for production. It’d be totally fine to use -march=native there.
You just wouldn’t use those binaries anywhere else. Devs would simply do their own build locally (why does everyone act like this is impossible?) and use that. And obviously you don’t ship these binaries to customers… but why are we suddenly talking about client software here? There’s a whole universe of software that exists as a service rather than a distributed binary, and that’s clearly what we’re talking about. Such software is typically distributed as source, if it’s distributed at all.
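To make it concrete, here’s a rough sketch of the kind of build setup I mean (the Makefile, the INTERNAL_BUILD switch, and the x86-64-v2 baseline are all illustrative assumptions on my part, not anything from the thread): internal/server builds get -march=native because the build host matches the production fleet, and anything that might leave the building gets a portable baseline instead.

    # Rough sketch, not anyone's real Makefile.
    # Internal/server builds: the build machine is the same hardware as
    # production, so -march=native is safe. Everything else falls back
    # to a portable baseline.
    PORTABLE_ARCH ?= x86-64-v2        # assumed baseline; needs GCC 11+ / Clang 12+, pick your own
    ifeq ($(INTERNAL_BUILD),1)
    ARCHFLAGS := -march=native
    else
    ARCHFLAGS := -march=$(PORTABLE_ARCH)
    endif
    CFLAGS += -O2 $(ARCHFLAGS)

    server: server.c
    	$(CC) $(CFLAGS) -o $@ $<

Devs building for their own machine would just run make INTERNAL_BUILD=1 locally, and any binary you actually redistribute gets built without it.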
There are a thousand different use cases for compiling software: running locally, shipping binaries to users, HPC clusters, SaaS running on your own hardware… hell, maybe you’re running an HFT system and need every microsecond of latency you can get. Do you really think there’s never a situation where -march=native is appropriate? That’s the claim being debunked here, the idea that "-march=native is always, always a mistake". It’s ridiculous.