I prefer proxying SSL (and automatically generating LetsEncrypt certificates) using containers so that my web servers don't have to worry about that aspect of configuration.
One of the biggest complaints about the curl | bash paradigm is the security implications. URLs can point to different content at different times. Project maintainers can change (and have changed) the content at these URLs for malicious or other reasons. A lot of people will not examine the source of what they're piping into the shell.
There's also the fact that it's easy to make a browser display something other than what actually gets pasted into the terminal[1]. That said, if you say "just run apt-get install foo", people will just copy that instead of the curl command.
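A safer pattern than piping straight into the shell is to download the script, verify it against a published checksum, and only then run it. A minimal sketch (the script contents and checksum here are stand-ins; real projects publish the expected hash alongside the installer):

```shell
#!/bin/sh
# Stand-in for a downloaded installer; in practice you'd use
# `curl -fsSL https://example.com/install.sh -o install.sh` instead.
cat > install.sh <<'EOF'
echo "installing..."
EOF

# The checksum the project publishes (computed here for the sketch).
expected=$(sha256sum install.sh | awk '{print $1}')

# Before running, confirm the file you downloaded matches the published hash.
actual=$(sha256sum install.sh | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    sh install.sh
else
    echo "checksum mismatch, refusing to run" >&2
    exit 1
fi
```

This closes the "URL content changed under you" window: even if the server swaps the script, the hash comparison fails and nothing executes.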
I had various issues with that combo, and also didn't like the repetition of VIRTUAL_HOST and LETSENCRYPT_HOST, so I combined the docker-gen and letsencrypt portion[1]. Still BYO nginx.
I also wrote a tool for deploying and managing static servers and application servers during development[2][3], but it's not ready for a Show HN yet. But the idea is ...
... and you'll have a docker-compose project accessible at "foo--master.example.com" and a static server at "foo--master--static.example.com", all HTTPS ready.
I just got exactly the same setup working this past week, after deciding to move from ad-hoc setup to a clean Dockerized setup with minimal custom config. It's great!
The thing I like about this setup is that it seamlessly supports multiple vhosts, including SNI for SSL. I can just create a new container that serves a new domain and set a few environment variables (VIRTUAL_HOST=mydomain.com and LETSENCRYPT_HOST=mydomain.com), and within a few seconds there's a new cert and all requests on that domain are proxied appropriately.
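For anyone who hasn't wired this up before, the usual shape is a docker-compose file with the nginx-proxy container, the ACME companion, and your app. A rough sketch (image names are the current `nginxproxy/` ones; `my-web-app` and the email are placeholders, and exact volume wiring may differ between companion versions):

```yaml
version: "3"
services:
  proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports: ["80:80", "443:443"]
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html

  acme:
    image: nginxproxy/acme-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - acme:/etc/acme.sh
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
      - DEFAULT_EMAIL=admin@mydomain.com   # placeholder

  app:
    image: my-web-app                      # placeholder for your app image
    environment:
      - VIRTUAL_HOST=mydomain.com
      - LETSENCRYPT_HOST=mydomain.com

volumes:
  certs:
  vhost:
  html:
  acme:
```

Adding another vhost is then just another `app`-style service with its own VIRTUAL_HOST/LETSENCRYPT_HOST values; the proxy and companion pick it up from the Docker socket automatically.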
My setup script is here [1] for anyone who wants to do the same.