Hacker News | new | past | comments | ask | show | jobs | submit | DblPlusUngood's comments

Yes.


A better example: a page fault for a non-present page.


At university we designed an architecture[1] where you had to test for page not present yourself. It was all about seeing if we could make a simpler architecture where all interrupts could be handled synchronously, so you'd never have to save and restore the pipeline. Also division by zero didn't trap - you had to check before dividing. IIRC the conclusion was it was possible but somewhat tedious to write a compiler for[2], plus you had to have a trusted compiler which is a difficult sell.

[1] But sadly didn't implement it in silicon! FPGAs were much more primitive back then.

[2] TCG in modern qemu has similar concerns in that they also need to worry about when code crosses page boundaries, and they also have a kind of "trusted" compiler (in as much as everything must go through TCG).


> where you had to test for page not present yourself.

I would think that implies you need calls “lock this page for me” and “unlock this page for me”, as using a page after getting a “yes” on “is this page present?” is asking for race condition problems.

That would make this conceptually similar to the Mac OS memory manager, with the difference that the latter didn't use pages but arbitrarily sized memory allocations (https://en.wikipedia.org/wiki/Classic_Mac_OS_memory_manageme...)


Interesting. So what happens if the program does not test for the page? Doesn't the processor have to handle that as an exception of sorts?


This is why you need a trusted compiler. Basically it's "insecure by design" since the whole point of this optimization is to avoid any asynchronous exception so there's no need to implement that in the pipeline. The machine code must be forced somehow to implement these checks.

There have been architectures which have required a trusted compiler (eg. the Burroughs mainframes) or a trusted verifier (the JVM, NaCl). But it certainly brings along a set of problems.

It's unclear from here whether this is even an optimization. It looked a lot more compelling back in the mid 90s.


Could be handled by having the CPU switch to a different process while the kernel faults the data in.


The CPU doesn't know what processes are, that's handled by the OS. So there still needs to be a fault.


You’re thinking about computer architecture as designed today. There’s no reason there couldn’t be a common data structure defined that the CPU can use to select a backup process, much as it uses page table data structures in main memory to resolve TLB misses.


x86 had the concept of hardware-assisted context switching. Yet another unused feature dropped in 64-bit mode.


It was slow, so operating system devs didn't use it, and so it was removed. Probably because the hardware properly saved all registers, while software can save only the few it needs (and sometimes miss something).

In effect: we don't know how secure it was...

But if it was good and Intel removed it, then why does Intel keep so much useless crap in? The good parts get removed, the bad parts stay, supposedly for backward compatibility... Can someone finally tell me: backward compatibility with WHAT? DOS 4.0? Drivers for pre-Windows-only modems using plain ISA or PCI slots??

Or maybe, just like with the EVE Online code (a few years ago?), no one knows anymore how some parts work...


Just to make it explicit for the people having trouble, the mechanism for switching processes in a pre-emptive multitasking system is interrupts.


In some cases, yes (if there are other runnable threads on this CPU's queue).


Don't forget gltron!


You are absolutely right. They are used everywhere in the Linux, OpenBSD, and FreeBSD kernels, and likely many other kernels.

Replacing large linked lists with arrays is rarely an actual win. With an array, insertion and deletion become far more expensive, virtual memory is more likely to become fragmented or grow monotonically, and the cache misses avoided are almost certainly irrelevant to total performance.


What coherence is lacking? OpenBSD supports msync(2), which is the only POSIX mechanism I know of for ensuring coherency between read(2) and shared file mappings. Otherwise relying on unspecified behavior sounds dangerous.


Oh, come on. Every other system in common use is fully coherent. POSIX allowing OpenBSD's behavior doesn't make that behavior a good idea or a quality implementation.


OpenBSD's choice is arguably reasonable, given their prioritization of security, since it reduces opportunities for user programs to corrupt kernel memory.

What is the problem with OpenBSD's plan for coherency? Why is the burden of explicitly calling msync(2) too much?


> reduces opportunities for user programs to corrupt kernel memory

I don't see how it could. Kernel data structures don't go on pagecache pages.

> OpenBSD's choice is arguably reasonable

At a human level, the OpenBSD people have spent way too much time coming up with rationalizations for their obsolete VM design to back down now. Whether OpenBSD's VM subsystem is good or not, their pride will force them to keep claiming that it's good, practically forever.


> I don't see how it could. Kernel data structures don't go on pagecache pages.

Kernel data structures could end up on a pagecache page: all it takes is a reference counting bug and the page could be reallocated in the kernel heap, which is directly mapped by user space. Keeping user-mapped pages and pagecache pages distinct makes this less likely.

I am otherwise not convinced that there is an actual problem with OpenBSD's coherency plan.


OpenSSH does have confirmation: use the '-c' switch to ssh-add.

https://man.openbsd.org/ssh-add


Or "AddKeysToAgent confirm" in ~/.ssh/config


Waaaaaaat?! That could definitely be better known.


TIL :|


Hang in there.


This could be a sane default.


Hm, anything similar for gpg agent (both for gpg, and as a stand-in for ssh-agent)?

Edit: looks like I need to edit my sshcontrol file

https://www.gnupg.org/documentation/manuals/gnupg/Agent-Conf...


Is recovery of a shared memory queue after one of the workers crashes even possible, in general? (what if the worker crashed before releasing a lock?)


I’m not sure how this is usually done but I’d avoid locks at nearly any cost and try to use a lock-free spmc queue such as https://github.com/tudinfse/FFQ


Sounds like a fun project!

Isn't seL4's multicore support either unverified or limited (i.e. shared memory is forbidden)? Is your platform single-threaded then?


Yes. In theory, maybe it's useful to have more flexibility to trade off durability for performance (a smaller quorum size hopefully reduces deciding latency [unless your small quorum contains the straggler!]) for specific kinds of data in the same replica group.

In practice though, it seems easier just to run separate instances of Paxos.


Instead of an ad hominem, please clearly explain the main way that the example Go code is less safe than the C example code.


I didn't mean for the C example to be applied in C compilers; I meant having the first example in Golang itself. Enums are not just constants, they are type-safe constants: you can't just use any other constant in some switch-case and get away with it. That's the raison d'être of enums in the first place.

I didn't know that enums are so controversial, unnecessary, and equivalent to just constants, but maybe a look at how it's done in any other language can give you a clear difference between enums and constants.

https://doc.rust-lang.org/1.30.0/book/2018-edition/ch06-01-d...

https://doc.rust-lang.org/rust-by-example/custom_types/enum....


You first asked for C-style enums, which are not type-safe constants (they are nothing more than integer constants), then asked for type-safe constants, and now you say they are not enough either and you want full-blown Rust-style enums, which are not enumeration types but true sum types.

Thing is, as soon as you provide users with one more feature, some of these users want even more features: "yeah, enums are great, but I would like to have type safety with them; oh, type safety is important, but why not have sum types, after all? Oh, now that we have sum types, why not add pattern-matching?"

All of these features are great, but they make the language harder to master and tooling harder to write. This is a tradeoff. There are many languages that implement all the features their users want (I can think of C++, Rust, probably C# too), so why not let other language designers try another way?

I have to admit I wouldn't dislike an enum construct in Go, just syntactic sugar that would be equivalent to the type foo int + const block, but I certainly won't push for it.

