Hacker News

The alternatives to Kubernetes are even more complex. Kubernetes takes a few weeks to learn. To learn alternatives, it takes years, and applications built on alternatives will be tied to one cloud.

See prior discussion here: https://news.ycombinator.com/item?id=23463467

You'd have to learn AWS Auto Scaling groups (proprietary to AWS), Elastic Load Balancing (proprietary to AWS) or HAProxy, blue-green deployment or phased rollout, Consul, systemd, Pingdom, CloudWatch, etc. etc.



Kubernetes uses all those underlying AWS technologies anyway (or at least an equivalently complex thing). You still have to be prepared to diagnose issues with them to effectively administrate Kubernetes.


At least with building to k8s you can shift to another cloud provider if those problems end up too difficult to diagnose or fix. Moving providers with a k8s system can be a weeks-long project rather than a years-long one, which can easily make the difference between surviving and closing the doors. It's not a panacea, but at least it doesn't make your system dependent on a single provider.


If you can literally pick up and shift to another cloud provider just by moving Kubernetes somewhere else, you are spending mountains of engineering time reinventing a bunch of different wheels.

Are you saying you don't use any of your cloud vendor's supporting services, like CloudWatch, EFS, S3, DynamoDB, Lambda, SQS, SNS?

If you're running on plain EC2 and have any kind of sane build process, moving your compute stuff is the easy part. It's all of the surrounding crap that is a giant pain (the aforementioned services + whatever security policies you have around those).


I use MongoDB instead of DynamoDB, and Kafka instead of SQS. I use object storage (Google Cloud Storage, the S3 equivalent, since I am on their cloud) through Kubernetes abstractions. In some rare cases I use the cloud vendor's supporting services, but I build a microservice on top of them. My application runs on Google Cloud, and yet I use Amazon SES (Simple Email Service); I do that by running a small microservice on AWS.
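A sketch of that wrapping pattern: hide the vendor call behind a tiny interface so the rest of the system never touches a vendor SDK directly. The names here (Email, make_sender, the fake backend) are illustrative, not from any real library:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical message type -- illustrative, not from any real SDK.
@dataclass
class Email:
    to: str
    subject: str
    body: str

# A backend is just a callable; the vendor-specific one (e.g. the
# actual SES call) lives in its own small service or module.
Backend = Callable[[Email], bool]

def make_sender(backend: Backend) -> Callable[[Email], bool]:
    """Return a send function bound to one backend; callers never
    see which vendor is underneath."""
    def send(msg: Email) -> bool:
        return backend(msg)
    return send

# In tests or local dev, swap in a fake backend:
sent = []
def fake_backend(msg: Email) -> bool:
    sent.append(msg)
    return True

send = make_sender(fake_backend)
send(Email(to="a@example.com", subject="hi", body="hello"))
print(len(sent))  # 1
```

The vendor-specific backend (the real SES call, say) lives in the small AWS-hosted microservice; everything else depends only on the interface.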


Sure, you can use those things. But now you also have to maintain them. It costs time, and time is money. If you don't have the expertise to administrate those things effectively, it may not be a worthwhile investment.

Everyone's situation is different, of course, but there is a reason that cloud providers have these supporting services and there is a reason people use them.


> But now you also have to maintain them.

In my experience it is less work than keeping up with cloud provider's changes [1]. You can stay with a version of Kafka for 10 years if it meets your requirements. When you use a cloud provider's equivalent service you have to keep up with their changes, price increases and obsolescence. You are at their mercy. I am not saying it is always better to set up your own equivalent using OSS, but I am saying that makes sense for a lot of things. For example Kafka works well for me, and I wouldn't use Amazon SQS instead, but I do use Amazon SES for emailing.

[1] https://steve-yegge.medium.com/dear-google-cloud-your-deprec...


While in general I agree with your overall argument, when it comes to:

> cloud provider's equivalent service you have to keep up with their changes, price increases and obsolescence

AWS S3 and SQS have both gone down significantly in price over the last 10 years and code written 10 years ago still works today with zero changes. I know because I have some code running on a Raspberry Pi today that uses an S3 bucket I created in 2009 and haven't changed since*.

(of course I wasn't using an rPi back then, but I moved the code from one machine to the next over the years)


But "keeping up with changes" applies just as much to Kubernetes, and I would argue it's even more dangerous because an upgrade potentially impacts every service in your cluster.

I build AMIs for most things on EC2. That interface never breaks. There is exactly one service on which provisioning is dependent: S3. All of the code (generally via Docker images), required packages, etc. are baked in, and configuration is passed in via user data.
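A minimal sketch of the user-data approach, assuming the configuration arrives as JSON (the schema and image name below are made up for illustration):

```python
import json
import shlex

def docker_command(user_data: str) -> list:
    """Build a `docker run` invocation from instance user data.
    The JSON schema here (image/env keys) is illustrative."""
    cfg = json.loads(user_data)
    cmd = ["docker", "run", "-d", "--restart", "always"]
    for key, value in cfg.get("env", {}).items():
        cmd += ["-e", f"{key}={value}"]
    cmd.append(cfg["image"])
    return cmd

# Example user data as it might arrive from the metadata service:
raw = '{"image": "myapp:1.4.2", "env": {"PORT": "8080"}}'
print(shlex.join(docker_command(raw)))
# docker run -d --restart always -e PORT=8080 myapp:1.4.2
```

The AMI bakes in Docker and the image; only this small amount of per-deployment configuration varies at launch time.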

EC2 is what I like to call a "foundational" service. If you're using EC2 and it breaks, you wouldn't have been saved by using EKS or Lambda instead, because those use EC2 somewhere underneath.

Re: services like SQS, we could choose to roll our own but it's not really been an issue for us so far. The only thing we've been "forced" to move on is Lambda, which we use where appropriate. In those cases, the benefits outweigh the drawbacks.


It’s time and knowledge.

It can be simple but first you have to learn it.

Given that life is finite and you want to accomplish some objective with your company (and it's not training DevOps professionals), it's quite valuable to be able to outsource a big part of the problems that need to be solved to get there.

Given this perspective, it's much better to use managed services. It lets you focus on the code (and maintenance) specific to your problem.


And don't you have specific YAML for "AWS LB configuration option" and the like? The concepts in different cloud providers are different. I can't imagine it's possible to be portable without some jQuery-type layer expressing concepts you can use that are built out of the native ones. But I'd bet the different browsers were more similar in 2005 than the different cloud providers are in 2021.
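For example, a LoadBalancer Service typically carries provider-specific annotations while the rest of the manifest stays generic. A sketch in Python dicts; the annotation keys below are examples from memory, so check your provider's documentation before relying on them:

```python
# The same Kubernetes Service, with the provider-specific parts
# pulled out. Annotation keys are examples -- verify against your
# provider's docs.
base_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web", "annotations": {}},
    "spec": {"type": "LoadBalancer",
             "ports": [{"port": 80, "targetPort": 8080}]},
}

provider_annotations = {
    "aws": {"service.beta.kubernetes.io/aws-load-balancer-type": "nlb"},
    "gcp": {"cloud.google.com/neg": '{"ingress": true}'},
}

def render(provider: str) -> dict:
    """Merge the generic manifest with one provider's annotations."""
    svc = {**base_service}
    svc["metadata"] = {**base_service["metadata"],
                       "annotations": provider_annotations[provider]}
    return svc

print(sorted(render("aws")["metadata"]["annotations"]))
```

The spec stays identical across providers; only this small annotation layer is the "jQuery-type" shim.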


Sure, there is configuration that goes into using your cloud provider's "infrastructure primitives". My point is that Kubernetes is often using those anyway, and if you don't understand them you're unprepared to respond when your cloud provider has an issue.

In terms of the effort to deploy something new, for my organization it's low. We have a Terraform module that creates the infrastructure, glues the pieces together, tags resources, and makes sure everything is configured uniformly. You specify some basic parameters for your deployment and you're off to the races.

We don't need to add yet more complexity with Kubernetes-specific cost-tracking software; AWS does it for us automatically. We don't have to care about how pods are sized or whether those pods fit on nodes. Autoscaling gives us consistently sized EC2 instances that, in my experience, have never run into noisy-neighbor issues. Most importantly of all, I don't have the upgrade anxiety that comes from having a ton of services stacked on one Kubernetes cluster, all of which may suffer if an upgrade does not go well.


> At least with building to k8s you can shift to another cloud provider if those problems end up too difficult to diagnose or fix.

You're saying that the solution to k8s being complicated and hard to debug is to move to another cloud and hope that fixes it?


> You're saying that the solution to k8s being complicated and hard to debug is to move to another cloud and hope that fixes it?

Not in the slightest. I'm saying that building a platform against k8s lets you migrate between cloud providers when the cloud provider's system is causing you problems. Those problems are probably an impedance mismatch between your platform's design and implementation and that particular provider.

This isn't helpful knowledge when you've only got four months of runway and fixing the platform or migrating off AWS would take six months or a year. It's not like switching a k8s-based system is trivial, but it's easier than extracting a bunch of AWS-specific products from your platform.


It takes almost as much time and effort to move K8s from one cloud to another as it does to reimplement the system on the new cloud, and your system engineers still have to learn an entirely new IaaS/PaaS. You don't really save anything. The only thing K8s does for you is allow the developers' operation of the system to stay the same after it's migrated.


> The only thing K8s does for you is allow the developers' operation of the system to be the same after it's migrated.

I mean, yeah, that’s exactly what’s required to happen, and it’s a good thing because only your system engineers need to do most of the legwork. If you have a team of system engineers, you probably have a much bigger cohort of application engineers.


Indeed. When we did a cloud migration, we first moved all our apps to a (hosted) k8s cluster, and then to a cloud k8s cluster. This made the migration so much easier.


Only the k8s admins need to know that though, not the users of it.


"Only the k8s admins" implies you have a team to manage it.

A lot of things go from not viable to viable if you have the luxury of allocating an entire team to it.


Fair point. But this is where the likes of EKS and GKE come in. It takes away a lot of the pain that comes from managing K8s.


That hasn't been my experience. I use Kubernetes on Google cloud (because they have the best implementation of K8s), and I have never had to learn any Google-proprietary things.


In my experience, Kubernetes on AWS is always broken somewhere as well.

Oh it's Wednesday, ALB controller has shat itself again!


Cloud agnosticism is, in my experience, a red herring. It does not matter, and the effort required to move from one cloud to another is still non-trivial.

I like using the primitives the cloud provides, while also having a path to run my software on bare metal if needed. This means: VMs, decoupling the logging and monitoring from the cloud services (use a good library that can send to CloudWatch, for example; prefer open-source solutions when possible), doing proper capacity planning (with the option to automatically scale up if the flood ever comes), etc.
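A sketch of that decoupling using only the standard library: application code logs to a named logger, and the sink (stderr locally, a CloudWatch-backed handler provided by a library such as watchtower, or anything else on bare metal) is chosen once at startup:

```python
import io
import logging

# Application code logs to a named logger and never knows the sink.
log = logging.getLogger("myapp")

def configure(handler: logging.Handler) -> None:
    """Swap sinks (stderr locally, a CloudWatch handler in AWS,
    something else on bare metal) without touching app code."""
    handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
    log.handlers = [handler]
    log.setLevel(logging.INFO)

# Locally: an in-memory stream stands in for any backend.
buf = io.StringIO()
configure(logging.StreamHandler(buf))
log.info("payment processed")
print(buf.getvalue().strip())  # INFO payment processed
```

Swapping clouds, or leaving the cloud entirely, then means changing one `configure()` call rather than touching every log statement.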


> The alternatives to Kubernetes are even more complex. Kubernetes takes a few weeks to learn.

Learning Heroku and starting to use it takes maybe an hour. It's more expensive and you won't have as much control as with Kubernetes, but we used it in production for years for a fairly big microservice-based project without problems.



