
Well, there is actually a latency gain to be had from parallelizing individual requests, rather than just running several requests in parallel.

Think about this example: you have 10 printers, each capable of printing 10 pages per minute (one page every 6 seconds), and 10 jobs are submitted, each with 10 pages. If you run the jobs in parallel, one per printer, all of them finish after 60 seconds. If you instead parallelize each job, printing page 1 on printer 1, page 2 on printer 2, and so on, then the first job finishes in 6 seconds, the second in 12, and the last in 60. The average latency is then (6 s + 12 s + ... + 60 s) / 10 = 33 s.

Your throughput will be the same, except for a bit of parallelization overhead.



Usually CPU latency for a web request is very tiny compared to network/DB/IO latency, though.


Sure, but I/O can be parallelized too. The argument is not specific to CPU at all.



