
I did go that far down, and there is a purpose. Reducing the scope of attack to "you must own a fab" is pretty great, honestly. Sure, it won't stop a perfectly placed nation-state from mounting a bespoke attack just for you by twiddling silicon doping on a wafer... But that's quite a bit harder and more expensive than "install an SMM rootkit".

And, if you do care about trustable hardware... There are bootstrapping and verification paths available there as well, depending on your threat model.

Or you can of course give up and declare all of computing fundamentally untrustable but still useful for some purposes. Like I said in the post, I'm glad for the existence of both the purists and the pragmatists in this space.



I think it is something of an exaggeration to say that the scope of attack has been reduced to "you must own a fab". At best, it is the scope of the bootstrap problem that has been reduced, but there is still the problem of securing and verifying all the source code for the software you are going to need to do something useful (including, but by no means limited to, the toolchain and the operating system which hosts it.)

Solving the hardest problem (or what appears to be the hardest) does not mean that everything else is tractable. In this case, the sheer size of the problem means that it is beyond the scope of one person [1], so the problem becomes one of who you trust, not what you trust.

I think we all knew it was going to come down to this; how does bootstrap.org deal with it?

[1] I'm putting aside the problem of verifying the design of what the fab makes, of the fab itself, and the trustworthiness of the people building and operating it, which is, as you suggest, 'just' another heap of turtles.


I still don't understand in what scenario someone could trust their vendor's hardware but not firmware. Somehow the firmware is malicious but the hardware is trusted? Why/how? Either you're getting the product directly from a vendor you trust, or you're not. If you are, then the firmware and the hardware are one thing together. If you're not, you need your own fab. And mind you, whoever is supposedly intercepting your shipments (or whatever) doesn't need a fab to pull off any attack, so I'm not sure what the scope reduction is here...


https://www.extremetech.com/computing/173721-the-nsa-regular...

https://puri.sm/products/librem-key/

Interception and firmware replacement is a thing. It happens. One could thus trust the hardware but not the firmware.


Yes I remember it from the Snowden days, that's why I mentioned it myself. But I don't get the threat model. So supposedly the NSA planted something in your device's firmware. How exactly would it help you if you could "see" the manufacturer's firmware (say it was open-source)? You still wouldn't know what's running on the chip. Even if you flash it, the chip could just be lying in some part of the process. Conversely, the entire firmware could be encrypted and you could still verify it (without knowing what it's doing) if the chip had an un-tamperable-with "dump out a hash of my firmware" instruction to let you match against the manufacturer's provided hash. Or an instruction to verify that its hash is what you expect in a manner that can't be tampered with. Either way, I don't see how your knowledge of the firmware that's supposed to be on it is necessary or sufficient.
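The hash-checking idea above can be sketched in a few lines. This is a hypothetical illustration (the function name and stand-in firmware bytes are made up), and it deliberately shows the limitation discussed: the comparison only helps if the dump itself is honest.

```python
import hashlib

# Hypothetical sketch: verify a dumped firmware image against a
# vendor-published SHA-256 digest. The weak link is exactly the one
# raised above: a lying chip could hand back the "clean" image while
# actually running something else, so the check is only as trustworthy
# as the dump mechanism.
def firmware_matches(dump: bytes, expected_sha256_hex: str) -> bool:
    return hashlib.sha256(dump).hexdigest() == expected_sha256_hex

clean = b"\x90" * 16  # stand-in for the vendor's firmware image
vendor_digest = hashlib.sha256(clean).hexdigest()

print(firmware_matches(clean, vendor_digest))            # True
print(firmware_matches(clean + b"\x00", vendor_digest))  # False
```

Note that this works whether or not the firmware is open: you are matching bytes against a published digest, not reading the source.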


At the minimum, I'd want to be aware that the firmware is not what the manufacturer had intended to provide me. Perhaps it's not the NSA after me, but some other actor or competitor or ransomware agency.


Yeah and to do that you need some mechanism to check what's on the device. It wouldn't help you to have 'open' firmware since it still wouldn't tell you what's on the device.


I hadn’t presumed that the firmware needed to be Open, though. Just a mechanism to verify. Being open and having the ability to compile from source and installing it myself would be even better.


Keep in mind that in the 'Trusting Trust' example, the compiler has to be smart enough to realize that you are building another compiler (and only then insert the backdoor). I can imagine that a back-doored GCC would recognize when you are building another GCC, but it would be hard for an old version of GCC to recognize, e.g., a modern LLVM or even GHC, I'd say.
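The detection step can be caricatured in a few lines. This is a toy sketch, not Thompson's actual code: the trigger string, function names, and "compilation" are all invented for illustration. The point is that the trojan has to pattern-match the source it is given, and a pattern written for one compiler's source won't fire on a differently written one.

```python
# Toy illustration of the 'Trusting Trust' detection step: a trojaned
# "compiler" pattern-matches its input and only tampers when it believes
# it is compiling a compiler. The fingerprint below is naive on purpose.
TRIGGER = "def compile("  # hypothetical fingerprint of "my compiler's source"

def trojaned_compile(source: str) -> str:
    output = source  # stand-in for real code generation
    if TRIGGER in source:
        # In the real attack this would also re-insert the trigger logic,
        # so the backdoor survives recompilation of the compiler itself.
        output += "\n# backdoor inserted"
    return output

gcc_like = "def compile(src):\n    return src"
llvm_like = "class Compiler:\n    def run(self, src): return src"

print("backdoor" in trojaned_compile(gcc_like))   # True
print("backdoor" in trojaned_compile(llvm_like))  # False: the pattern misses
```

A fingerprint tuned to one codebase silently fails on an unfamiliar one, which is why bootstrapping through a compiler the trojan has never seen is a plausible escape hatch.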

Similarly, the lower down the stack you go, the harder it is to build in that kind of smarts.

That might be one reason to stop at this point? (Not sure.)


I've found software (i.e. firmware) tends to be more sloppily constructed, probably because it can be fixed later in the field.



