It's really far beyond what we need. We never go over 5% CPU unless we break an index or do something dumb. But it's great that we can run just a few instances and in-memory cache anything coming back from slow services. We've got a lot of dog-slow apps buried in the backend, and running minimal instances keeps the hit rate on our in-app caches high enough that we can avoid the complexity and latency of a shared cache.
Postgres is extremely fast when decently tuned, probably not much slower than Redis (both are I/O bound). The same can be said of Hibernate as an ORM. And Vert.x is just stupidly fast in general; you just need to be careful not to block the event loop (or use fibers via Quasar). There are also a lot of tweaks for Jackson serialization speed and Hibernate + Postgres performance that are off by default.
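As an illustration of the "off by default" tweaks, here's a hedged sketch of commonly cited settings (the property names are real Hibernate and PostgreSQL JDBC options; the values are examples, not our actual config):

```properties
# Hibernate statement batching is off by default; enabling it
# collapses many small INSERT/UPDATE round trips into batches
hibernate.jdbc.batch_size=50
hibernate.order_inserts=true
hibernate.order_updates=true

# On the PostgreSQL JDBC connection URL, add the driver parameter
#   reWriteBatchedInserts=true
# so batched INSERTs are rewritten into multi-row statements server-side
```

On the Jackson side, reusing a single `ObjectMapper` instead of constructing one per request tends to matter more than any individual flag, and modules like Afterburner (or Blackbird on newer JVMs) are commonly used to speed up serialization.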
Do we need all this speed? Absolutely not. But caching is one of the most evil things in computer science, and having so much excess capacity allowed us to simplify a lot of things. For instance, we cache virtually nothing coming from the DB, only slow backends and other third-party services. Hibernate also caches some reads for us, and since we don't have many instances the hit rate is fine on that as well.
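The in-app caching of slow backends described above can be sketched with nothing but the JDK; this is a hypothetical minimal version (class and method names are made up for illustration), not our actual code:

```java
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal in-process TTL cache: with only a few app instances, each
// instance's local cache gets a high hit rate, so no shared cache tier.
class TtlCache<K, V> {
    private record Entry<V>(V value, long expiresAtNanos) {}

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlNanos;

    TtlCache(Duration ttl) {
        this.ttlNanos = ttl.toNanos();
    }

    /** Return the cached value, or invoke the loader (e.g. a slow backend call) on a miss. */
    V get(K key, Function<K, V> loader) {
        Entry<V> e = entries.get(key);
        if (e != null && System.nanoTime() < e.expiresAtNanos) {
            return e.value();                      // fresh hit: skip the slow call
        }
        V value = loader.apply(key);               // miss or expired: hit the backend
        entries.put(key, new Entry<>(value, System.nanoTime() + ttlNanos));
        return value;
    }
}
```

Responses from the DB would bypass this entirely (Postgres is fast enough); only slow backends and third-party services go through the loader.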
We probably won't need to worry about performance for the life of our app. If the stack weren't so fast, we'd need a lot of front-end servers plus a (slow) shared cache. Kind of a snowball effect of complexity when you try to retrofit something to be fast down the road.