For Keyless SSL, it is necessary to make RSA operations asynchronous, since the operations are requested over a TCP network (which may introduce large delays).
OTOH Neverbleed delegates the operations to a separate process on the same server using Unix sockets, so there is no fear of such delays. And the server spawns a dedicated thread for each client thread. In other words, the delay is practically _no worse_ than it is without Neverbleed.
As for _how much_ worse it is: the calculations related to TLS handshakes may block the server for a few milliseconds. That may sound bad, but generally speaking it is negligible compared to the latency of a public network.
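To make the model concrete, here is a minimal sketch of the privilege-separation idea (my own illustration, not Neverbleed's actual protocol; `key.pem` and the fixed digest are placeholders): the private key is loaded only in a forked child, and the parent sends a digest over a socketpair and reads back the signature.

```c
/* Sketch of privilege-separated RSA signing, not Neverbleed's actual code.
 * The key lives only in the child; the parent only ever sees digests and
 * signatures. Uses the OpenSSL 1.x-style RSA API. */
#include <openssl/objects.h>
#include <openssl/pem.h>
#include <openssl/rsa.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0)
        return 1;

    if (fork() == 0) {
        /* child: load the private key; it never leaves this process */
        close(fds[0]);
        FILE *fp = fopen("key.pem", "r"); /* placeholder path */
        RSA *rsa = PEM_read_RSAPrivateKey(fp, NULL, NULL, NULL);
        fclose(fp);
        unsigned char digest[32], sig[512];
        unsigned int siglen;
        while (read(fds[1], digest, sizeof digest) == sizeof digest) {
            RSA_sign(NID_sha256, digest, sizeof digest, sig, &siglen, rsa);
            write(fds[1], sig, siglen);
        }
        _exit(0);
    }

    /* parent: requests a signature without ever touching the key */
    close(fds[1]);
    unsigned char digest[32] = {0}; /* in reality, the handshake transcript hash */
    write(fds[0], digest, sizeof digest);
    unsigned char sig[512];
    ssize_t n = read(fds[0], sig, sizeof sig);
    printf("received a %zd-byte signature from the key process\n", n);
    return 0;
}
```

A real implementation would add error handling and a framing protocol; this only shows the data flow.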
The point is that, for a server handling high concurrency, this requires TLS handshakes to be done in a multi-threaded system.
Many servers are multi-threaded, but many are not. Using the proposed technique in a Node.js process, or in nginx, is going to severely limit the number of new connections per second.
You seem to be confusing TLS handshakes with RSA operations.
In OpenSSL (which is used by many servers, including node.js and nginx), the RSA operation is always synchronous. Therefore, using Neverbleed does not impose new limits on concurrency.
It is true that an RSA operation over IPC is slower than doing it in-process. But the cost of the RSA computation itself is orders of magnitude greater than the IPC overhead, so the slowdown is negligible in practice.
You can find the numbers in the FAQ section of the linked website.
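If you want a feel for the ratio yourself, the following sketch (my own illustration, not the FAQ's methodology) measures the Unix-socketpair round-trip cost, which you can compare against the roughly one millisecond a 2048-bit RSA sign takes on typical hardware:

```c
/* Measure the average round-trip time of a one-byte message over a Unix
 * socketpair. On most machines this is tens of microseconds, i.e. a small
 * fraction of the cost of a single RSA private-key operation. */
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0)
        return 1;

    if (fork() == 0) { /* echo child */
        char c;
        while (read(fds[1], &c, 1) == 1)
            write(fds[1], &c, 1);
        _exit(0);
    }

    struct timespec t0, t1;
    const int iters = 10000;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++) {
        char c = 'x';
        write(fds[0], &c, 1);
        read(fds[0], &c, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / 1e3 / iters;
    printf("average round trip: %.1f us\n", us);
    return 0;
}
```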
Libh2o (the protocol implementation) supports both libuv and evloop (our tailor-made event loop).
The default event loop of libh2o is libuv, since libuv is popular and has bindings for other protocols (which you would need if you wanted to implement an application using libh2o).
OTOH the standalone server uses evloop for performance.
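For context, driving a libuv event loop (which an application embedding libh2o would share with the library) looks like this; the snippet below is plain libuv with no libh2o calls:

```c
/* A minimal libuv event loop: register a one-shot timer and run the loop
 * until no live handles remain. An embedder would register the library's
 * handles on the same loop. */
#include <stdio.h>
#include <uv.h>

static void on_timer(uv_timer_t *handle)
{
    printf("tick\n");
    uv_timer_stop(handle); /* no live handles left, so uv_run() returns */
}

int main(void)
{
    uv_loop_t *loop = uv_default_loop();
    uv_timer_t timer;
    uv_timer_init(loop, &timer);
    uv_timer_start(&timer, on_timer, 100 /* ms */, 0 /* no repeat */);
    return uv_run(loop, UV_RUN_DEFAULT);
}
```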
I doubt that anybody is using the mruby handler in production, considering that the functions necessary to modify requests in the handler were committed just this week :-)
In the case of the benchmark, I ran the server as a VM instance on the client machine so that the results would be reproducible (with network latency added using the `tc qdisc` command).
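For reference, adding artificial latency with `tc` looks like this (the interface name and delay value here are illustrative, not the ones used in the test):

```
# add 20 ms of delay to outgoing packets on eth0
tc qdisc add dev eth0 root netem delay 20ms
# remove it afterwards
tc qdisc del dev eth0 root
```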
PS. PageSpeed is a really nice service, and I agree that it is a useful tool for evaluating real-world speed.
As described in note 5, "the network chart of Chrome includes a 0.2 second block before initiating the TCP connection, which has been subtracted from the numbers written in this blog text". That explains the differences between the numbers on the charts you pointed out.
Might it be worth re-running the tests without the block, so the graphs line up and we don't need to subtract the 200 ms? As you well know, the implications of latency or packet loss for real-world rendering are hard to extrapolate.
Unfortunately the block was always present on the test machine. The test was conducted using a VM running on the same host as the web browsers, so there was no noise or packet loss; the results were reproducible.
While the blog post is interesting, I am skeptical of the author's claim that the recovered private key could be used to decrypt user data transmitted over the wire, since a private key cannot be used for encrypting data sent to somebody else.
All a private key can do by itself is decrypt data sent by others, or digitally sign some data.
I would suspect that the bundled private key was used for digitally signing data to show that it was actually generated by the software. The approach is not perfect (since the private key can be extracted, as the author did), but in general it would work effectively for keeping out third-party software.
If the developer's intention was to encrypt the data transferred over the public network, then he/she should have used TLS with server-side authentication, optionally with clear-text credentials transmitted over the encrypted channel to authenticate the software (e.g. basic authentication over HTTPS).
If it is proven that private information can be decrypted from data transmitted over the public network using the recovered private key, then this would be an interesting case of misusing public-key cryptography.
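To illustrate the asymmetry in question with the OpenSSL 1.x-style RSA API (a toy example, unrelated to the software being discussed): encryption uses the recipient's public key, so possession of a leaked private key lets you decrypt or sign, not encrypt to someone else.

```c
/* Toy demonstration of public-key asymmetry: the public half encrypts,
 * only the private half decrypts (and signs). */
#include <openssl/bn.h>
#include <openssl/rsa.h>
#include <stdio.h>

int main(void)
{
    /* generate a throwaway 2048-bit key pair */
    RSA *rsa = RSA_new();
    BIGNUM *e = BN_new();
    BN_set_word(e, RSA_F4);
    RSA_generate_key_ex(rsa, 2048, e, NULL);

    unsigned char msg[] = "secret", enc[256], dec[256];
    /* anyone holding the public key can do this step */
    int elen = RSA_public_encrypt(sizeof msg, msg, enc, rsa, RSA_PKCS1_OAEP_PADDING);
    /* only the private-key holder can do this one */
    int dlen = RSA_private_decrypt(elen, enc, dec, rsa, RSA_PKCS1_OAEP_PADDING);
    printf("decrypted %d bytes: %s\n", dlen, dec);

    BN_free(e);
    RSA_free(rsa);
    return 0;
}
```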
It is true that the software is used for MITM. It is true that _Superfish_ is in the middle, decrypting the communication.
OTOH the author claimed that _others_ might also be able to MITM the communication by using the recovered key. My comment is that such a situation would be unlikely under the premise that the public-key encryption technology was used correctly (from a technical standpoint, not an ethical one).
EDIT: Even if it were the case that the recovered private key was used by the locally running MITM server for communicating with the web browsers, it wouldn't mean that others could use the key to decrypt data transmitted over the wire, since all the communication encrypted with the key would terminate within the local machine.
EDIT2: Ah sorry, now I understand. The root certificate installed by the adware was using the recovered private key. That would mean that others can MITM the communication via DNS spoofing etc., together with a server certificate signed with the recovered key.
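In other words, anyone with the recovered key could mint a certificate that the victim's browser trusts; hypothetically (the file names and CN below are placeholders), something along these lines:

```
# superfish.key / superfish.crt stand in for the recovered CA key and the
# root certificate installed on the victim's machine
openssl req -new -newkey rsa:2048 -nodes -keyout site.key -out site.csr \
    -subj "/CN=example.com"
openssl x509 -req -in site.csr -CA superfish.crt -CAkey superfish.key \
    -CAcreateserial -out site.crt -days 365
```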
HTTP and HTTPS (both version 1 and 2) are supported for downstream connections (i.e. connections between H2O and web browsers).
Only plain-text HTTP/1 is supported for upstream connections (connections between H2O and web application servers).
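For example, a configuration along the following lines (hostname, port, and file paths are placeholders) terminates HTTPS towards browsers and forwards plain HTTP/1 upstream via H2O's reverse-proxy directive:

```
listen:
  port: 443
  ssl:
    certificate-file: /path/to/server.crt
    key-file: /path/to/server.key
hosts:
  "example.com":
    paths:
      "/":
        proxy.reverse.url: "http://127.0.0.1:8080/"
```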
Don't be afraid to not add features. I'd love to see something like this stay a very high-speed server, especially with the move towards single-page static apps that connect to RESTful services.
Can I then use H2O so that a web browser connects to H2O via HTTPS while H2O routes the same request upstream via HTTP to a web application server? (That would of course suffice for me.)
Is it really the same (really asking)? Sometimes there are subtle protocol changes when something is standardized -- and as far as I understand, HTTP/2 took a while ...
The initial HTTP 2.0 draft was just a copy of SPDY (in Nov 2012). There have been changes to 2.0 since, so they aren't exactly the same, but HTTP 2.0 is meant to completely replace SPDY.
Any of the servers/clients that currently support SPDY will eventually make the minor changes and call it HTTP 2.0.