Hacker News | hbz's comments

It seemed to freeze up after I loaded a sample and clicked a few things inside it.


I just got it running in the last couple of weeks - still doing some QA to find issues, but in general they have been easy to resolve.


Runs quite nicely in Safari. I had no trouble loading the samples. Impressive technical feat.


Designing software related to healthcare / electronic medical records requires more thoughtfulness than other kinds of development.


And probably less than some others.


Self hosted ELK stack, not HA at the moment. Will move the ES nodes to AWS's managed service once I'm ready to make it more resilient.


I like all the guesses on which company “Scaling R Us” represents. My guess is RightScale.


A counter to the author's "webserver in 1 line of code" - https://gist.github.com/denji/12b3a568f092ab951456#simple-go...

I prefer proxying SSL (and automatic generation of Let's Encrypt certificates) using containers so that my web servers don't have to worry about that aspect of configuration.


One of the biggest complaints about the curl | bash paradigm is the security implications. URLs can point to different content at different times. Project maintainers can (and have) changed the content at these URLs for malicious or other reasons. A lot of people will not examine the source of what they're piping into the shell.

To me, https://github.com/jbenet/hashpipe addresses a lot of these issues by pinning the content to a hash.

You can't force somebody to read and understand the install script, but at least those who do can know it's the one they verified in advance.
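The same pinning idea can be approximated with plain coreutils. A minimal sketch (the URL comment, filename, and the way the pinned hash is obtained here are placeholders for illustration; hashpipe itself handles this more cleanly):

```shell
# Refuse to run a fetched install script unless it matches a hash
# you recorded in advance.
verify_and_run() {
    script="$1"
    pinned="$2"
    actual=$(sha256sum "$script" | awk '{print $1}')
    if [ "$actual" = "$pinned" ]; then
        sh "$script"
    else
        echo "hash mismatch for $script: refusing to run" >&2
        return 1
    fi
}

# In practice you'd first fetch it, e.g.:
#   curl -fsSL https://example.com/install.sh -o install.sh
# Simulated here with a local file so the sketch is self-contained:
printf 'echo "installed"\n' > install.sh
verify_and_run install.sh "$(sha256sum install.sh | awk '{print $1}')"
```

The pinned hash has to come from somewhere trustworthy (docs, a signed release page) for this to add anything over plain curl | bash.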



There's also the fact that it's easy to display something in the browser other than what is pasted into the terminal[1]. That said, if you say "just run apt-get install foo" people will just copy that instead of the curl command.

1: http://thejh.net/misc/website-terminal-copy-paste


Kubernetes is installed this way, so why not?


Because it's lazy and insecure?


Does anybody know what program was used to generate the GIFs?


LICEcap and visualize


Thanks!


An interactive guide to fork bombing



I use 2 containers to accomplish this:

https://github.com/jwilder/nginx-proxy

https://github.com/JrCs/docker-letsencrypt-nginx-proxy-compa...

Since both are long running processes, it makes sense to keep them in separate containers.
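A minimal docker-compose sketch of that two-container layout (image names come from the linked repos; the domain, email, and volume names are placeholders, and both projects' READMEs should be checked for current options):

```yaml
version: "2"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports: ["80:80", "443:443"]
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    volumes_from: [nginx-proxy]

  # Any proxied app just sets the env vars; the two containers above
  # pick it up automatically.
  web:
    image: nginx:alpine
    environment:
      VIRTUAL_HOST: example.com
      LETSENCRYPT_HOST: example.com
      LETSENCRYPT_EMAIL: admin@example.com

volumes:
  certs:
  vhost:
  html:
```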


I had various issues with that combo, and also didn't like the repetition of VIRTUAL_HOST and LETSENCRYPT_HOST, so I combined the docker-gen and letsencrypt portion[1]. Still BYO nginx.

I also wrote a tool for deploying and managing static servers and application servers during development[2][3], but it's not ready for a Show HN yet. The idea is ...

    b3cmd --project foo static-scaffold
    b3cmd --project foo static-put public/ /
... and you'll have a docker-compose project accessible at "foo--master.example.com" and a static server at "foo--master--static.example.com", all HTTPS ready.

[1]: https://github.com/mikew/docker-gen-letsencrypt

[2]: https://github.com/mikew/b3cmd

[3]: https://github.com/mikew/b3cmd-server


I just got exactly the same setup working this past week, after deciding to move from ad-hoc setup to a clean Dockerized setup with minimal custom config. It's great!

The thing I like about this setup is that it seamlessly supports multiple vhosts, including SNI for the SSL. I can just create a new container that serves a new domain and set a few environment variables (VIRTUAL_HOST=mydomain.com and LETSENCRYPT_HOST=mydomain.com) and within a few seconds there's a new cert and all requests on that domain are proxied appropriately.

My setup script is here [1] for anyone who wants to do the same.

[1] https://github.com/cfallin/dot/blob/master/doc/setup-grey.c1...

