Hacker News — moondev's comments

MicroCeph is pretty nice and straightforward for throwaway S3 endpoints.

https://canonical-microceph.readthedocs-hosted.com/stable/tu...


Has anyone who has set up MicroCeph determined the overhead of the required multiple OSDs? The docs make it sound scary, but it's not clear whether that's because people run it on a Pi with an SD card for block storage or because someone once ran 18TB of OSDs in production that then fell over.


I do continue to be impressed, even overawed, by how effectively the Ceph docs scare you about just how many system resources you need to run a mid-tier, not-that-fast storage cluster. Bother.

Impressive as hell software, and I am so glad to have it. But man! The insistence on mountains of RAM per TB and on massive I/O is intimidating.


But why would you ever want to run Ceph? It's just such a huge monster.

It's also not that useful even if you have enough machines to run it properly.

NVMe and ZFS are fast enough for virtually anything now. With snapshots and snapshot send/receive you get decent backups for half the hardware cost of Ceph.
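As a minimal sketch of that snapshot-plus-send workflow (the dataset `tank/data`, the previous snapshot `backup-prev`, and the host `backup-host` are all hypothetical; the block is guarded so it is a no-op on machines without ZFS installed):

```shell
# Hypothetical dataset and remote host, for illustration only.
SNAP="tank/data@backup-$(date +%Y-%m-%d)"
if command -v zfs >/dev/null 2>&1; then
  # Take a point-in-time snapshot, then stream only the delta since the
  # previous snapshot to a receiving pool on another machine.
  zfs snapshot "$SNAP"
  zfs send -i tank/data@backup-prev "$SNAP" | ssh backup-host zfs receive backup/data
fi
```

Incremental sends keep the transfer proportional to what changed since the last snapshot, which is what makes this viable as a cheap backup scheme.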


How is changing the architecture of a platform that only you make hardware for "doing the impossible"?

They could change the architecture again tonight, and start releasing new machines with it. The users will adopt because there is literally no other choice.

Every machine they release will be the fastest and most capable on the platform, because there is no other option.


The hard part is doing so without completely ruining the existing app ecosystem. Rosetta 2 is genuinely impressive.


Exactly this! Rosetta plus the whole app developer community, who really quickly released builds for M chips (voluntarily or forced, but it did happen).

I had the initial M1 Air, and it was remarkable how usable it was. You'd expect all sorts of friction and issues, but mostly things just worked (very fast). Even with some Rosetta overhead it was still fast compared to Intel Macs.


Rosetta 1 delivered 50-80% of native performance during the PPC->Intel transition. It turns out you can deliver not particularly impressive performance and still not ruin your app ecosystem, because developers have to either update to target your new platform or leave your platform entirely.

You can also voluntarily cut off huge chunks of your own app ecosystem intentionally, by giving up 32bit support and requiring everything to be 64bit capable.

...because users have no other choice when a single vendor controls both the hardware and the software. They can either use the apps still available to them, or they can leave. And the cost of leaving is a lot higher for users.


Vs. FEX and Prism?


Yes. Apple put custom hardware support in the M series chips based on the needs of Rosetta 2. The x86_64 performance on Rosetta 2 was often higher at launch than the prior generation of Intel chips running those same binaries natively.

Microsoft and Qualcomm already knew that the performance of x86 app emulation on Windows was killing the ARM machine lineup, so Qualcomm was already working on extensions to its chips, and Microsoft on having Windows support them, but ARM64EC and Prism didn't launch until two years after the M1 shipped.


FEX uses TSO on M series chips.


An artificial limit on the number of VMs you are allowed to launch doesn't make it solid.


macOS* VMs. And if you don’t care about that, is it no longer solid?


Being unexpectedly unemployed also starts a virtual timer of sorts, not on your terms. Regardless of how you feel about the event, the longer it persists, the more universally it is seen as a negative signal by those who would hire you for your next role. It gets exponentially worse as time goes on, making it even harder to find a job, precisely because of the growing time you haven't had one.


It's fun when leaving a job to deal with a health issue starts that timer before you can even get to the prepping and interviewing needed to land the next one.


I'm currently in that spiral. It is not pleasant knowing every month makes it harder to get back in


Imagine buying a mac studio with 500+ GB of memory and being limited to 2 vms.


Yeah, that is what I was going to do until I discovered the two-VM limit. I was building a macOS GitHub Actions farm, or rather, looking into it. I had written most of the code, but my momentum screeched to a halt when I discovered the two-VM limit for macOS VMs.


You are not Apple's target market, and never will be.

They don't care what you want to do with the hardware you own.


No kidding.


You realise you can run VMs for any other OS, right? It's a limit on running macOS, not a limit on running VMs.


Yes we all realize that.

It's macOS VMs that we want to run.


Maybe I should have used the same dismissive tone.

Imagine thinking everyone who buys a Mac and runs VMs wants to run heaps of macOS VMs.


Why else would you buy a Mac to run VMs?

arm64 hardware is cheap, x64 hardware is cheap, and both can run as many Linux or Windows VMs as you have RAM for.


For me? Infrastructure simulation.

Why buy an extra machine to test multi-machine infrastructure configurations when my workstation can run the VMs locally?

For others? I don't know; that's why I think it's ridiculous to assume everyone else's use case is the same as your own.


They discontinued the 512GB Studio, and the Pro is gone, so no fear there now.


They still EXIST though. And I saw one the other day on the Refurbished store. They’re definitely still around.

Even a 256GB model would run a load of 16GB VMs


The VM limit only applies to the number of macOS VMs launched from macOS itself.

My 2018 Mac mini officially supports installing VMware ESXi directly on the hardware, which can virtualize any number of macOS machines.

Funnily enough, I can even launch more than two macOS VMs on my Framework Chromebook with QEMU + KVM from the integrated Linux terminal.


macOS is proprietary software. You need a license for every copy you run, whether it's in a VM or not. The VM limit is written into the macOS EULA.

> to install, use and run up to two (2) additional copies or instances of the Apple Software, or any prior macOS or OS X operating system software or subsequent release of the Apple Software, within virtual operating system environments on each Apple-branded computer you own or control that is already running the Apple Software, for purposes of: (a) software development; (b) testing during software development; (c) using macOS Server; or (d) personal, non-commercial use.


This implies anyone doing this using VMware violates the EULA?


Yes. Apple's not going to come after you for running too many VMs on your personal machine, but if you're running a commercial enterprise involving macOS VMs they do care.


VMware vSphere is not a product intended for consumers; it's intended to be run by enterprises at scale. ESXi is running the VMs, not macOS.

https://i0.wp.com/williamlam.com/wp-content/uploads/2020/04/...


Yes. And the license only allows you to run macOS guests on macOS hosts, so using ESXi means you don't have a license for whatever macOS guests you run.


You are confusing macOS guests on KVM (Linux) with macOS guests on ESXi, which is a real enterprise product and officially enables you to run as many macOS VMs as your hardware supports.


Things like this remind me how much I love open source software. Choice is amazing. Shout out to all the contributors!


> I'm feeling like it is hard to find a simple GUI to just review a system and manage a bunch of containers and VMs.

Incus does all three through the same web UI:

* OCI-compatible "app" containers, with support for registries like docker.io and ghcr.io

* LXC "system" containers

* virtual machines with QEMU + KVM
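As a sketch, the same three workload types can also be driven from the Incus CLI (the instance names and images here are illustrative; the block is guarded so it is a no-op on machines without `incus`):

```shell
# Illustrative names; assumes an Incus host with network access.
if command -v incus >/dev/null 2>&1; then INCUS_AVAILABLE=yes; else INCUS_AVAILABLE=no; fi
if [ "$INCUS_AVAILABLE" = yes ]; then
  incus remote add oci-docker https://docker.io --protocol=oci  # OCI registry remote
  incus launch oci-docker:nginx web1                            # OCI "app" container
  incus launch images:debian/12 sys1                            # LXC "system" container
  incus launch images:debian/12 vm1 --vm                        # QEMU/KVM virtual machine
fi
```

The web UI drives the same API, so instances launched this way show up there too.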


This is a built-in UI? How do I access it?

Edit: so, this is the incus-ui-canonical package? It feels a bit ironic that Canonical ships this, because I thought the whole point of Incus was to avoid Canonical and the direction they were taking LXD.

Thank you for this, I'll check it out.


Yes, that is the package. It's just like the Canonical UI for LXD, but it also supports the Incus enhancements like OCI containers.

Very handy for generating YAML config for machines and viewing their console/terminal.


That's why I love Incus. It offers all three so you don't have to choose. OCI app containers, LXC containers and KVM.


Just as kind runs containerd inside Docker, you can also run dockerd inside containerd-backed pods.

Start a privileged pod with the dind image, copy or mount your compose.yaml inside, and you should be able to docker compose up and down, all without mounting a socket (which won't exist anyway on containerd CRI nodes).

To go even further, KubeVirt runs on kind: launch a VM with your compose file passed in via cloud-init.
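A minimal sketch of such a pod (the pod name and the `my-compose` ConfigMap are hypothetical; `privileged: true` is required for dind to run its own daemon):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dind
spec:
  containers:
  - name: dind
    image: docker:dind            # runs its own dockerd inside the pod
    securityContext:
      privileged: true            # dind needs privileged mode
    volumeMounts:
    - name: compose
      mountPath: /workspace
  volumes:
  - name: compose
    configMap:
      name: my-compose            # hypothetical ConfigMap holding compose.yaml
```

Once the pod is ready, something like `kubectl exec dind -- docker compose -f /workspace/compose.yaml up -d` brings the stack up, with no host Docker socket involved.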


This is OpenClaw's docker-compose.yml: https://github.com/openclaw/openclaw/blob/main/docker-compos... . Arguably the hottest thing in the world right now. Mac Minis are out of stock because of this.

At no point have I invented a new/better method. Perhaps your way is better.

I just recognise that Docker Compose is loved by most open source developers, and invariably any project you touch will have a Docker Compose setup by default. And it isn't going away, no matter how hard anyone tries to kill it. Some things are just too well designed. Docker Compose is one of those things.

I'm just making it possible to run those on kubernetes seamlessly.

