
This is a great blog post; I'm really glad you shared it.

Yeah, there is some degree of awkwardness created by the interaction, but I think it's less about needing specific libraries to map well and more about getting a good understanding of what the interop rules are and what the underlying generated output actually looks like.

C# interoperability loosens guarantees (particularly immutability) that F# code normally relies on. There are surprising limits that come up in generics because of how they map to C#.


Fable is great but it has a surprising number of these hidden behaviour changes that are really hard to detect when writing code against it.

My understanding is the uutils development process involved extensive testing against the behaviour of the original utilities, including preserving bugs.

But we still have CVEs for trivial things? I mean, just a medium-sized test suite for "rm" alone should probably contain several thousand test cases. And you'd think that deleting "." and "./" respectively would be among them? Hindsight is always 20/20, and for anything involving text input you can never be entirely covered, but still....

If something as basic as "rm ./" is broken, the word "extensive" does not apply to whatever testing there was.
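
For what it's worth, a regression test for exactly this class of bug is tiny. Here's a rough C sketch, assuming the POSIX/GNU-documented behavior that rm must refuse operands whose final component is "." or ".." (including "./") and exit nonzero; it's meant to be run inside a scratch directory:

  #include <stdio.h>
  #include <sys/wait.h>
  #include <unistd.h>

  /* Sketch of a safety regression test: rm must refuse to remove "." and
   * "./" (and "..") and exit nonzero. Run inside a scratch directory. */
  static int rm_refuses(const char *operand)
  {
      pid_t pid = fork();
      if (pid == 0) {
          freopen("/dev/null", "w", stderr);   /* hide the diagnostic */
          execlp("rm", "rm", "-rf", operand, (char *)NULL);
          _exit(127);                          /* exec itself failed */
      }
      int status;
      waitpid(pid, &status, 0);
      return WIFEXITED(status) && WEXITSTATUS(status) != 0;
  }

  int main(void)
  {
      const char *cases[] = { ".", "./", ".." };
      for (size_t i = 0; i < sizeof cases / sizeof *cases; i++) {
          if (!rm_refuses(cases[i])) {
              fprintf(stderr, "FAIL: rm -rf %s did not refuse\n", cases[i]);
              return 1;
          }
      }
      puts("all refusal cases ok");
      return 0;
  }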

I think there is also the added challenge that ARM macs are a moving target, and Apple has less than no desire to provide any kind of stability or support for Asahi Linux. Unlike the PC space where laptop manufacturers have to maintain broad compatibility over time, Apple will make future changes that are really awkward for Asahi and will not care one bit because they can do the compat work on their own software.

>I think there is also the added challenge that ARM macs are a moving target

Yes, but also no? Because I think a reasonable argument can be made that ARM Macs are like game consoles with a more rapid generation cycle: yes, there are changes between each generation, but then you've got millions of near-identical units which are good for a very long time. Apple definitely is not changing everything between gens; the work done for M1 has been plenty useful since. And support stretches a while: the final M3-generation chip only came out about a year ago (the M3 Ultra for the Mac Studio was March 2025).

So sure, there's ongoing effort needed for newer systems, and that may require more ongoing RE than is typical; I don't want to brush aside the effort there at all. But at the same time, there doesn't seem to be the same long tail of hardware variations, with dozens to hundreds of players doing their own little tweaks, either. Aside from memory and storage, every single Mac with a given SoC is the same, so each time one gets covered they all get covered and become a stable experience. It's definitely a different thing than developing for PCs, and I definitely wish there were serious legal backing for no rug pulls being allowed, ever. Hardware owners should always have access to the root of trust if they want it. But that aside, I don't think their efforts are wrong or somehow wasted just because each new generation might need some new work. That doesn't appear from the outside to be intractable, and the fact is the pace of hardware change for computers has slowed and continues to slow. A system from many years ago can still be very good for most tasks... so long as the OS can still be updated and work. Apple themselves seem to be the limiting factor there, whereas Linux shines in long-term support.


The huge advantage of x86_64 isn't that it's a stable platform, but that the big hardware vendors maintain their own Linux driver. Nobody needs to reverse engineer how AMD GPUs work, you can just use AMD's driver. Nobody needs to reverse engineer how Intel power states work, just use Intel's pstate driver. Nobody needs to reverse engineer how Broadcom's WiFi driver works, Broadcom maintains their own Linux driver and contributes to the upstream brcm80211 driver. And commodity hardware that's not directly supported by the vendor typically has detailed data sheets, meaning that someone has to contribute a driver but no reverse engineering is needed.

It's more than that: x86 vendors know how to maintain hardware backwards compatibility. They don't throw out the entire USB subsystem every time a new PHY or whatever shows up, because there is a standardized mailbox interface sitting in front of the actual hardware. Same with the core platform, which works out of the box using 25+ year old firmware standards that are flexible enough to support simple sensors and behaviors across multiple OSes, like lid-close notification on a laptop. Even something as simple as the firmware interface for handing off a framebuffer to the OS isn't universally supported on ARM platforms, because a significant fraction don't support UEFI. Apple was an early UEFI adopter, but whatever their internal politics, they tossed even that on the latest Macs.

From an end user perspective, I think the best thing the Asahi team could have done was solely focus on getting the M1 Air/Pro working 100% before moving onto other devices.

But that would probably result in burnout for the crazily talented dev team :P


Asahi focusing on M1 would also encourage secondary market sales of M1 laptops, which are already a primary competitor (see Apple marketing) to current Apple laptops. If Apple wanted to encourage Asahi Linux users to move from M1 or Qualcomm to M5/M6 Apple devices, they could improve device firmware compatibility with Linux, or contribute directly to mainline Linux.

Haha, I can't imagine Apple contributing open source driver code to mainline Linux.

My assumption is that if they ever decided to provide support for Linux, it would be a private Mac-Linux fork.

It's hard to imagine they would go the shim + blob route like Nvidia, as that would still require upstreaming stuff.

Honestly, they should just document their hardware so we can write our own drivers without herculean reverse engineering efforts.


Considering that M1 and M2 are almost the same architecturally, isn't that exactly what they are doing? M3 support is being driven by two new contributors who decided they wanted it.

I'm not really sure what it would mean for the M1 Air/Pro to work better at this point, to be honest, other than, I guess, power consumption during sleep, but that's supposedly a super tricky problem that can't be "solved", only incrementally improved through immense effort. But the main problems I have on my M1 Pro now are just the normal Linux laptop problems: bad trackpad palm rejection, input latency, inconsistent scroll speed between apps, high-latency tap-to-click, somewhat janky fractional scaling (at least in GNOME). These aren't really problems for Asahi to fix, I feel.

On one hand, yes, they're a moving target; on the other, they're a lot more uniform than x86 machines.

x86 can also be a moving target now; with Windows's driver autodiscovery mechanisms, manufacturers that don't care about Linux can still make people's lives hell.


> Unlike the PC space where laptop manufacturers have to maintain broad compatibility over time

LOL

If anything, Apple is infamous for keeping hardware blocks around for as long as they can. IIRC the serial port driver for everything Apple ARM dates back to the very first generations of iPods.

Of course Apple will remain a moving target, but they are orders of magnitude more stable than everyone else in the non-x86 universe.


How does Ubuntu Linux on recent Qualcomm (ex-Apple Nuvia) Arm laptops compare to Asahi Linux on Apple Silicon?

Most people don't realize that the Asahi team ship features only once they work without quirks. For the set of supported hardware features, Asahi is much closer to a macOS experience than to an average x86 Linux laptop experience.

Meanwhile, Linux on my Lenovo X13s "works" but has tons of quirks: Boot fails 2 out of 3 times, the device hard-resets sometimes when waking up with a display connected, and the speakers are unusable due to lack of active overheat protection (and somehow this affects even external speakers). It technically works, but it's incredibly frustrating to use in practice.

If you plan to use Linux and don't need an ARM laptop, there's little reason to prefer a Qualcomm device over an x86 one currently. On the other hand, M1/M2 easily outperform a broad class of x86 laptops, and they have a Linux experience that's for many use cases close to on par with official vendor support.


Pretty rude to call this ex-Apple Nuvia. I don't think any of those lawsuits by Apple or ARM have been won. Qualcomm declares this to be a new chip, but yes, it has talent from those places. Still, let's not try to tip the scales of perception quite so indelicately?

I am curious what the boot situation is. It seems like Qualcomm actually has pretty good support for their cores, but since these PC systems sort of lack a BIOS, each one needs a hand-built devicetree, which makes supporting them kind of a nightmare. Even a Raspberry Pi has a much more advanced and accommodating boot environment than these frustrating Qualcomm laptops. Alas. I don't know, but I expect Asahi has to do similar hand-tailoring. I am curious to know what the boot chain looks like! How much does the system willingly help vs. how much has to be bespoke, hand-coded system config? (Wish it wasn't like this; it's so bad.)


Circular talent economy, https://www.tomshardware.com/pc-components/cpus/legendary-qu...

  Just several months after leaving Qualcomm, distinguished CPU and system architects Gerard Williams, John Bruno, and Ram Srinivasan, who are celebrated for their high-performance processors developed at Apple, Nuvia, and, more recently, Qualcomm, established a new CPU startup — Nuvacore — that promises no less than to 'rewrite the rules of silicon.'

Seems like a good thing, no? People getting paid well to skip around and improve products across the board. A virtuous cycle, as opposed to the cynical cycle of ruining one project and parachuting to the next.

Also keeps lawyers busy.

Without stirring the pot too much, I'm a bit out of the loop on what the above poster implied and what you took offense at. Could you share a little more about this and why you feel what they said was rude?

There's nothing rude about it; the Nuvia CPU core is pretty much the entire selling point of the Snapdragon X Elite product family. Everything else on those chips is underwhelming. But the provenance of the CPU core is really irrelevant to the question of Linux support, which is gated by driver support for the rest of the SoC, which didn't come from Nuvia. So focusing on the Nuvia aspect is a bit of a red herring.

Qualcomm may be a tech company, but they behave more like a pack of lawyers (probably the best in the tech business at extracting money by "double dipping"). They will never support Linux in any usable way, not without a huge ongoing fee/payola imposed on their so-called partners.

That certainly is what their past looked like!

And in many ways that probably is true. But it's not uniform. There are a lot of places where Qualcomm is clearly working very hard to get upstream, to get mainline support. https://www.phoronix.com/search/Qualcomm

I was super impressed with their work offloading sound to a USB sound card, to let the CPU sleep more. Really wild subsystem to build. And they did it! Kept at it! Really cool stuff to have in the kernel.

They've hired some good people for GPU support, which is rad. I feel like Qualcomm is so, so close to having a great system people can genuinely love, but there are always some missing pieces, and the end result is always far quirkier and more difficult than a PC would be. Some of the other comments in this thread give me some hope that there is at least a more normal boot chain here, and that the troubles lie elsewhere. But it's hard. And Qualcomm only has so much power over what their OEM partners actually build.

Qualcomm is the only name in WiFi right now for OpenWrt-like systems. MediaTek looks good and is present too, but supposedly their drivers are just a total garbage fire: buggy and crash-prone beyond words.

I think it's important we reassess our old biases. And give some credit where due. Qualcomm has an absolutely forsaken reputation & their lawyerliness is a thing of legend, forbidding as heck. But there are also a lot of signs that at least some of the company is tired of making chips that are utterly unsupportable, and has some real drive towards good open source support. Thank you, warriors of light there.

Really hoping we see some Linux-running Snapdragon X2 Elite Extreme units in the next 12 months. Looks like an amazing system! Good job engineering the new cores, y'all! Amazing performance.


> bit of a red herring

It offers an A/B test of "similar" SoC performance and battery life (which users now expect from laptops), without a vertically integrated operating system that was also created by the company who designed the SoC.


Apple and ARM have sued Qualcomm over the Nuvia talent.

Qualcomm at its heart is a patent troll company. They and Microsoft actually deserve each other. Long-term their partnership will end in tears.

Because Microsoft is at this time indifferent to the Surface computers; they are all in on Copilot. It is basically Copilot or bust for Microsoft.


> these PC systems sort of lack a BIOS, each one needs a hand-built devicetree, which makes supporting them kind of a nightmare.

Modern PC ARM systems like the Snapdragon X Elite use UEFI and ACPI. This is actually what makes them difficult, because they're trying to operate in a "new world" while most ARM SoC IP and peripheral drivers work in the "old world."

The issue with ARM has never _really_ been early boot; yes, it's arcane and a pain in the butt on some platforms, but it really only needs to be done once - once your DRAM is trained and running (this is usually the hardest part) and you can load and jump into a kernel, you're set. Hypervisor / security processor driven systems like Qualcomm (and for that matter, Intel and AMD) actually make this even easier at the expense of openness, because the vendor blob usually brought everything up for you already.

The issue has always been hardware discovery and mutable device configuration. When ARM devices were first supported by Linux, they were mostly embedded devices with one configuration, ever. So, they used devicetree, which is a fixed structure for each board, defined before boot and provided by the bootloader.

Because of this, most SOC / platform / IP soft-core drivers were built to work with fixed, proprietary configurations and usually only tested against a single platform to start.

On the other hand, x86 devices have been forced to work as highly mutable, arbitrary combinations of hardware (Plug n Play) with dynamic reconfiguration using ACPI since the start, so the drivers for x86 peripherals have always had to cope with a completely unpredictable environment.

What this means is that there's a ton of effort required to transition ARM _peripheral drivers_ from the "devicetree" world where drivers took fixed arbitrary, proprietary key=value parameters provided by a magic blob at boot to the ACPI world, where everything is dynamic, scripted, and abstract.
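
To make the "old world" concrete, here's roughly what that looks like from the driver side: a minimal platform-driver sketch that reads one fixed, vendor-specific devicetree property. The of_property_read_u32() call and the platform_driver machinery are real kernel APIs; the "vendor,foo-ip" device and property names are made up for illustration.

  #include <linux/module.h>
  #include <linux/of.h>
  #include <linux/platform_device.h>

  /* "Old world" sketch: the driver expects a fixed, vendor-specific
   * property baked into the board's devicetree by whoever wrote the .dts.
   * No probing, no discovery; the names below are hypothetical. */
  static int foo_probe(struct platform_device *pdev)
  {
      u32 fifo_depth;

      /* The value comes straight from the blob the bootloader handed over. */
      if (of_property_read_u32(pdev->dev.of_node,
                               "vendor,fifo-depth", &fifo_depth))
          return -EINVAL;

      dev_info(&pdev->dev, "fifo depth %u\n", fifo_depth);
      return 0;
  }

  static const struct of_device_id foo_of_match[] = {
      { .compatible = "vendor,foo-ip" },
      { /* sentinel */ }
  };
  MODULE_DEVICE_TABLE(of, foo_of_match);

  static struct platform_driver foo_driver = {
      .probe  = foo_probe,
      .driver = {
          .name = "foo",
          .of_match_table = foo_of_match,
      },
  };
  module_platform_driver(foo_driver);
  MODULE_LICENSE("GPL");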

I'd actually argue that Pi have the most hacked tooling on top of the "old devicetree way," which means they're the most set on it. Pi peripherals are usually configured at pre-boot time using devicetree overlays and their drivers usually don't support any kind of probing/autodiscovery. As far as I know there's no real plan to change this (and maybe there doesn't have to be; it seems to work for them).

Anyway, this is all to say: I don't think the issue with either system is the "boot situation," it's the "peripheral configuration situation." In this sense, Asahi are actually in a fine situation to use devicetrees, which they do, because basically all of the SOC peripherals are proprietary and there are a fixed number of Apple devices to target and the only external interfaces are existing hot plug standards (USB/Thunderbolt/HDMI/DP). Qualcomm are smart to have started to try to use ACPI, because their SoCs could be hosted on boards with standard peripherals configured in thousands of different ways, like all PCs. But, they're playing on hard mode because most of the existing ARM peripheral drivers weren't made to support this model.


While it's true that early Linux ARM devices were embedded and generally only supported a single configuration, they didn't actually use devicetree.

Originally, embedded Linux ARM devices used a board file with a platform bus and hard-coded device metadata. The bootloader had to pass a machine id which told the kernel which hardware you were running on and which board file to use.
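
For anyone who never saw one, a board file looked roughly like the sketch below. The MACHINE_START/MACHINE_END machinery is the real arch/arm mechanism; the board name, addresses, and IRQ are invented for illustration.

  /* Sketch of a pre-devicetree ARM board file. The bootloader passes a
   * numeric machine id; the kernel matches it against MACHINE_START
   * entries and calls that board's init hooks. */
  #include <linux/init.h>
  #include <linux/kernel.h>
  #include <linux/ioport.h>
  #include <linux/platform_device.h>
  #include <asm/mach/arch.h>

  /* Hard-coded device metadata, compiled into the kernel itself. */
  static struct resource foo_uart_resources[] = {
      { .start = 0xa9c00000, .end = 0xa9c00fff, .flags = IORESOURCE_MEM },
      { .start = 11,         .end = 11,         .flags = IORESOURCE_IRQ },
  };

  static struct platform_device foo_uart = {
      .name          = "foo-uart",
      .id            = 0,
      .resource      = foo_uart_resources,
      .num_resources = ARRAY_SIZE(foo_uart_resources),
  };

  static void __init foo_board_init(void)
  {
      platform_device_register(&foo_uart);
  }

  MACHINE_START(FOO_BOARD, "Hypothetical Foo Board")
      .init_machine = foo_board_init,
  MACHINE_END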

You can see remnants of this in the kernel still, though they're quickly being removed. I'm actually working on a hybrid kernel with the goal of bringing modern Linux support (on an LTS branch) to old MSM7x300 devices, like the Evo 4G Shift I intend to use as a tmux console/cyberdeck.

On another note, ACPI/UEFI doesn't always give you a clean abstract surface to work with either. ACPI is notorious for building OS checks into its compiled bytecode, to the point that Linux often lies to it about what OS is running.


I remember that era (and it's still present on some other architectures) - devicetrees were at least a huge improvement over compile-time board config!

> ACPI/UEFI doesn't always give you a clean abstract surface to work with either.

That's putting it lightly. I think the best abstraction would probably land somewhere inside the big gap in the board config headers -> devicetree --------------> ACPI complexity continuum, but I'm not sure it's possible to do that at this point in the game as both sides are so entrenched.

> ACPI is notorious for building OS checks into its compiled bytecode to the point that Linux often lies

The problem with ACPI in this dimension is that there's a bidirectional errata game: the bytecode tries to work around the OS and the OS tries to work around the bytecode.

Unfortunately, there was never a real version standard for the Linux firmware interface early on (the _OSI("Linux") debacle), so the only testable versioned ACPI interface is Windows. This means that Linux is basically forced to become a Windows ACPI emulator. I think there are political reasons for this (obviously the 90s and 2000s were a bad time for Linux/Microsoft coexistence) but also just some decisions that look like big engineering mistakes in hindsight - the historic allergy of Linux maintainers to any kind of specified or versioned interface aimed at anything but user land definitely strikes again here.
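
The mechanism at the center of this is tiny: the firmware's bytecode calls _OSI("some string") and branches on the yes/no answer. Here's a standalone C sketch of the matching logic; the Windows strings are real _OSI feature strings, but the code is illustrative, not the kernel's actual implementation.

  #include <stdbool.h>
  #include <stdio.h>
  #include <string.h>

  /* The firmware effectively asks: If (_OSI("Windows 2015")) {...}.
   * Since _OSI("Linux") was abandoned, Linux answers yes to the Windows
   * strings instead -- a Windows ACPI emulator, in effect. */
  static const char *supported_osi[] = {
      "Windows 2000",
      "Windows 2012",   /* Windows 8 */
      "Windows 2015",   /* Windows 10 */
      /* "Linux" deliberately absent: answering yes broke firmware. */
  };

  static bool osi_query(const char *iface)
  {
      for (size_t i = 0; i < sizeof supported_osi / sizeof *supported_osi; i++)
          if (strcmp(iface, supported_osi[i]) == 0)
              return true;
      return false;
  }

  int main(void)
  {
      printf("_OSI(\"Windows 2015\") -> %d\n", osi_query("Windows 2015"));
      printf("_OSI(\"Linux\")        -> %d\n", osi_query("Linux"));
      return 0;
  }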

I think that the versioning/errata issue and the native code trapdoor are the two biggest issues with ACPI (and admittedly both are large enough to drive a bus through); otherwise it's a kind of nasty thing but it fills in nicely for a lot of much nastier ideas and covers a really broad problem space reasonably well.


Model output reflects your input, and the effect is self-reinforcing over the course of a whole conversation. The color you add around a problem influences the model's behavior.

A "dumber"/vague framing will get a less insightful solution, or possibly no solution at all.

I don't even necessarily think this is a critical flaw; in general, it's just the model tuning its responses to your style of prompt. People use LLMs for all kinds of different tasks, and the "modes of thought" for responding to an Erdős problem versus software engineering versus a more human/soft-skills topic are all very different. I think the "prompt sensitivity" issue just comes bundled with this general behavior.


Keeping a pristine context is so important that I use two separate conversations whenever I'm doing something meaningful. One is the main task executor; the other is for me to bounce random problems, thoughts, and ideas off of, all while keeping a pristine context in the executor instance.

It's sort of an agentic loop where I am one of the agents


Does non-artificial intelligence have clean instruction/data separation?

I found the Copilot harness generally more buggy/dysfunctional. After seeing a "long" agent response get dropped (it still counts against usage, of course) one too many times, I gave up on the product.

It doesn't matter how competent the actual model is, or how long it's able to operate independently, if the harness can't handle it and drops responses. It made me wonder: are they even using their own harness?

At least Anthropic is obviously dogfooding on Claude Code which keeps it mostly functional.


I only ever used Copilot through OpenCode and for a while it was a crazy good deal. Quite possibly two orders of magnitude cheaper than API credits.

It was great while it lasted.


Is Composer 2 a bad model because Cursor are bad at training models, or because they are compute constrained? This deal will provide the answer to that question.

I think it also represents a bet that, in some sense, Cursor's model capabilities are resource-limited rather than talent-limited. If that's true, $60B will end up being a bargain. If not, well, it's an expensive lesson, but that's the nature of things.
