In high-scale stateless app services this approach (often called request hedging) is typically used to lower tail latency. Two identical service instances are sent the same request, and whichever one returns faster "wins", which protects you from a bad instance, or even one that just happens to be heavily loaded.
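A minimal sketch of the idea, assuming two interchangeable stateless replicas and a hypothetical `query_replica` call standing in for the real RPC:

```python
import concurrent.futures
import random
import time

def query_replica(name: str) -> str:
    """Stand-in for the real RPC; a loaded replica would add latency here."""
    time.sleep(random.uniform(0.01, 0.05))
    return f"response from {name}"

def hedged_request(replicas: list[str]) -> str:
    """Send the same request to every replica and take the first answer."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(replicas))
    futures = [pool.submit(query_replica, r) for r in replicas]
    done, _ = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED)
    # Abandon the slower copies: don't wait for them, cancel any not yet started.
    pool.shutdown(wait=False, cancel_futures=True)
    return next(iter(done)).result()

print(hedged_request(["replica-a", "replica-b"]))
```

Real systems usually add a twist: send the second ("hedge") copy only after the first has been outstanding longer than, say, the p95 latency, so you pay the duplicate cost only for the slow tail.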
I'm not sure I follow. In this case we're talking about multiple backend matching engines, correct? By definition they must be kept in sync, or at least have omniscient knowledge of the state of every other backend's order book.
I wasn't in C++-style-guide land, but my recollection is that the distilled experience would be backed up by extensive mailing-list discussion. In cases of contention, the discussion might extend into case studies or other quantitative techniques atop google3. It's difficult for me to convey the outsized impact a super-resourced monorepo has on this kind of thing. Also, as GP mentioned, it was sometimes possible to automate changes to comply with updated guidelines.
My understanding is that the hardware is always installed, but the dealer will not fill the liquid reservoirs unless the customer specifically requests (and pays for) it.