Hacker News | bracewel's comments

This is mainly actually for testing constant-time code, rather than doing proper memory tracking (see https://www.imperialviolet.org/2010/04/01/ctgrind.html for a slightly out-of-date description of this technique).


Oh, interesting. Thanks for sharing!

I guess there's also callgrind that may be useful for Gophers.


Useful for people -- believe it or not not all of us internalize the language we use as part of our identity (and species).


Author of the linked CL here: we added this mostly so that we could abuse the memory initialization tracking to test the constant-time-ness of crypto code (similar to what BoringSSL does, proposed by agl around fifteen years ago: https://www.imperialviolet.org/2010/04/01/ctgrind.html), which is an annoyingly hard property to test.

We're hoping there are also a bunch of other interesting side effects of enabling Valgrind for Go, in particular seeing how we can use it to track how the runtime handles memory (hopefully correctly!).

edit: also strong disclaimer that this support is still somewhat experimental. I am not 100% confident we are properly instrumenting everything, and it's likely there are still some errant warnings that don't fully make sense.


w.r.t. your edit: Is there anything the community at large can do to aid your efforts?


This is super cool. Hopefully it will flush out other issues in Go too.

But I wonder why it's not trivial to throw a bunch of different inputs at your cipher functions and check that the execution times are all within an epsilon tolerance?

I mean, you want to show your crypto functions run in constant time, so why not just directly measure the time under lots of inputs (accounting for background garbage collection and OS noise) and see how constant they are?
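The naive harness is easy to sketch (names here are hypothetical, not from the Go CL); the replies explain why the numbers it produces aren't trustworthy:

```go
package main

import (
	"fmt"
	"time"
)

// timeSpread runs op once per input and reports the min and max
// wall-clock durations observed. A naive constant-time check would
// assert that max-min stays within some epsilon -- which scheduler
// noise and GC pauses make unreliable in practice.
func timeSpread(op func([]byte), inputs [][]byte) (min, max time.Duration) {
	for i, in := range inputs {
		start := time.Now()
		op(in)
		elapsed := time.Since(start)
		if i == 0 || elapsed < min {
			min = elapsed
		}
		if elapsed > max {
			max = elapsed
		}
	}
	return min, max
}

func main() {
	op := func(b []byte) {
		for range b { // stand-in for a cipher operation
		}
	}
	inputs := [][]byte{make([]byte, 64), make([]byte, 64)}
	min, max := timeSpread(op, inputs)
	fmt.Println("spread:", max-min)
}
```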

Also some CPUs have a counter for conditional branches (which the rr debugger leverages), and you could sample that before and after and make sure the number of conditional branches does not change between decrypts -- as that AGL post mentions, identical branching is important for constant time.

Finally, it would also seem trivial to track the first 10 decrypts, take their maximum time, add a few extra nanoseconds of tolerance, and pad every following decrypt with a few nanoseconds (executing noops) to force constant time when it is varying.

And you could add an assert that anything over that established upper bound crashes the program since it is violating the constant time property. I suppose the real difficulty is if the OS deschedules your execution and throws off your timing check...


For the best crypto, you don’t want “within an epsilon”, you want “exactly the same number of CPU cycles”, because any difference can be measured with enough work.


Feeding random inputs to a crypto function is not guaranteed to exercise all the weird paths that an attacker providing intentionally malicious input could access. For example, a loop comparing against secret data in 32 bit chunks will take constant time 99.99999999% of the time, but is still a security hole because an attacker learns a lot from the one case where it returns faster. Crypto vulnerabilities often take the form of very specifically crafted inputs that exploit some mathematical property that's very unlikely from random data.
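To make that failure mode concrete, here's a sketch of a chunked early-exit comparison next to the standard library's constant-time one (the leaky function is illustrative, not real Go code from anywhere):

```go
package main

import (
	"crypto/subtle"
	"encoding/binary"
	"fmt"
)

// leakyEqual compares in 32-bit chunks and returns early on the
// first mismatch, so its running time reveals how long a prefix
// of guess matches the secret -- random inputs will almost never
// hit the slow paths, but an attacker can construct prefixes that do.
func leakyEqual(secret, guess []byte) bool {
	if len(secret) != len(guess) || len(secret)%4 != 0 {
		return false
	}
	for i := 0; i < len(secret); i += 4 {
		if binary.LittleEndian.Uint32(secret[i:]) != binary.LittleEndian.Uint32(guess[i:]) {
			return false // early exit: timing depends on the input
		}
	}
	return true
}

// constantTimeEqual touches every byte regardless of where (or
// whether) the inputs differ.
func constantTimeEqual(secret, guess []byte) bool {
	return subtle.ConstantTimeCompare(secret, guess) == 1
}

func main() {
	secret := []byte("0123456789abcdef")
	fmt.Println(leakyEqual(secret, secret), constantTimeEqual(secret, secret))
}
```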


> But I wonder why its not trivial to throw a bunch of different inputs at your cyphering functions and measure that the execution times are all within an epsilon tolerance?

My guess is because the GC introduces pauses and therefore nondeterminism in measuring the time anything takes.


I believe that Go can have GC disabled so that issue could be moot.


Because "constant time" here means algorithms that are O(1), rather than O(n). This isn't about wall-clock execution time, it's about avoiding the number of operations performed being based on attributes of the input.


I think it's more complicated than that. Certain things like branches, comparisons, and some other operations may not be constant time, especially if you consider interactions with prior code. It's clearly possible, just really difficult to get right.
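Go's crypto/subtle package shows the flavor of avoiding secret-dependent branches: selection is done with arithmetic masks rather than an if, so the selector never influences control flow. A small sketch (the wrapper name is made up):

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// pick returns a when v == 1 and b when v == 0, without a
// secret-dependent branch: crypto/subtle computes the result
// with bitmasks instead of a conditional jump.
func pick(v, a, b int) int {
	return subtle.ConstantTimeSelect(v, a, b)
}

func main() {
	fmt.Println(pick(1, 10, 20), pick(0, 10, 20))
}
```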


There is a fifth, incredibly common, (arguably) non-malicious possibility.

You don't control the entirety of your web stack, and your hosting provider, or DNS provider, or someone else, has decided to be 'helpful' (either blindly, or due to some misconfiguration somewhere along the line) and issue a certificate on your behalf, as they are able to intercept CA validation traffic at the DNS, TLS, or HTTP layer.


Or indeed you're in a position of overlapping authority.

Universities will often have a cash-strapped organisation-wide IT department (e-mail for English majors), another layer of IT in certain academic departments (computer labs for CS students), and another layer of IT after that (the centre for machine learning paid for that cluster, of course they have full authority over it) - and often it's that third-level body that's getting all the grant funding and publishing all the papers.

The people who control www.example.edu might basically be the marketing department for their glossy student recruitment brochure. Who's to say they have authority over certificate issuance for datasets.ml.cs.example.edu ?


This is also arguably malicious; it is not for them to get certificates for your domain and snoop your traffic. I'd hope this is not 'incredibly common' - in my opinion it is completely unacceptable.

The certificate certifies that the holder is the owner of the domain; if some third party gets a certificate, that's fraudulent. In at least some jurisdictions making a fraudulent claim like that is illegal.


ah yes, I remember personally lying to congress and pushing false narratives in the media. oh wait, no, that wasn't me was it...


Note that CABF bylaws require a simple majority of browsers to vote positively for a ballot for it to pass, regardless of how CAs vote.


> CA/SSL specification does not change weekly...

The draft ACME specification does though.


DNSSEC is enforced at the resolvers.


Thanks. Since my zones are secured with DNSSEC that makes me feel a bit safer.


Yup.


Google did not write the HTTP/2 spec.

While Roberto Peon is one of the authors (and a core developer of the preceding SPDY protocol), saying that HTTP/2 is some Google invention is a bit of a punch in the face to the IETF and the two other authors who do not work at Google.


It allowed users to bypass Twitter blocks by tweeting at the bot while tagging users that block them, which seems pretty bad and was abused very quickly. Also it's a violation of the Twitter TOS.

