some_furry's comments | Hacker News

No, but they decide the moderation policy that incentivizes the content produced (by nature of selecting which users feel comfortable using their product and which do not).

For example, I do not feel comfortable using the same platform as people who post child sexual abuse material. X's Grok is infamous for generating such content on demand. I opt to use platforms that do not have this as a first-class feature. X has selected against my participation and for the participation of people who hold a contrary opinion to mine. Even if Grok stops producing CSAM, that selection bias will persist.


Can you explain a bit more what you mean by "secure" in the context of "actual revocations"? The oxymoronic nature isn't self-evident enough for me to catch your intended meaning before my first cup of coffee.

How can you falsely revoke a certificate? If an attacker can revoke a certificate, either by falsifying the signature or possessing the necessary key material, it is by definition not a trustworthy certificate anymore, and the revocation is therefore correct.

In the public CA PKI, it is the CA which has the power to revoke their issued certificates. In other systems, it can be the private key for the certificate itself. In either case, the certificate is not to be trusted anymore.
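
To make that concrete, here's a minimal sketch (Python, assuming the "cryptography" package and hypothetical file paths) of why only the CA's signature makes a revocation meaningful:

    from cryptography import x509

    # A relying party only honors a CRL whose signature verifies against
    # the issuing CA's public key; a "revocation" signed by anyone else
    # is simply discarded.
    with open("ca.pem", "rb") as f:        # hypothetical CA certificate
        ca = x509.load_pem_x509_certificate(f.read())
    with open("current.crl", "rb") as f:   # hypothetical CRL file
        crl = x509.load_pem_x509_crl(f.read())

    if crl.is_signature_valid(ca.public_key()):
        revoked = {entry.serial_number for entry in crl}
        print(f"{len(revoked)} certificates revoked by this CA")
    else:
        print("signature check failed: not a CRL this CA issued")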

Revocation is the least of your worries should your signature algorithm be broken in the future.


> How can you falsely revoke a certificate?

If you don't have the private key on hand to issue a revocation, your next best bet is to find a parser bug that convinces some subset of user agents that the valid certificate you don't hold the private key for is actually invalid. (Hence, a false revocation.)

And then you get those users into the habit of accepting invalid/revoked certificates if they want to access the site. After weeks of wearing down their patience, you offer an invalid cert for a MitM.

That's how I was thinking of it, anyway.


If you receive a forged CRL, in the worst case it will revoke certificates that you can't trust anyway. Even if it says "certificate X is still good", that's equivalent to receiving no CRL.

Which governments are you thinking of?

Another thing that I think Europeans often fail to take into consideration is scale.

USA: 9,147,590 km^2

Switzerland: 41,295 km^2

That's a factor of 221.5 to 1.


Yes, but if you compare urban areas (where ~80% of people live on both continents) in the US and Europe, it's not massively different (Europe is maybe 2-4x denser, depending on the country/city).

Obviously you're not going to lay fibre to the last 1% of population in the US (for the most part).


As one of that last 1% of the population, and someone who does have fiber, I can tell you your take on US population stats is wildly off.

What?

Quantum computers don't break SHA256, nor would this attack be "reasonably attributable" to a SHA256 break.

In fact, if you have funds in a wallet that has never spent before (only received), it's still reasonably difficult for a CRQC to steal your funds. The trick is, the moment you spend from it, your public key becomes known (and therefore breakable).

(Yes, I'm aware of the literature on quantum search vs hash functions, but it's not a complete break like RSA or ECC.)
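
To illustrate the "never spent" point with a minimal sketch (Python standard library only; the key bytes are hypothetical): a legacy P2PKH address commits to a hash of the public key, so until you spend, the key Shor's algorithm needs isn't on-chain at all.

    import hashlib

    pubkey = bytes.fromhex("02" + "11" * 32)  # hypothetical compressed pubkey

    # P2PKH addresses encode HASH160(pubkey) = RIPEMD160(SHA256(pubkey)).
    # An attacker sees only this hash until the first spend reveals the key.
    sha = hashlib.sha256(pubkey).digest()
    h160 = hashlib.new("ripemd160", sha)  # needs OpenSSL ripemd160 support
    print(h160.hexdigest())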


The Other Side = the "afterlife" apparently


No.

Getting a crypto module validated under FIPS 140-3 simply lets you sell to the US Government (something something FedRAMP). It doesn't give you better assurance of the actual security of your designs or implementations; it just verifies that you're using algorithms the US government has blessed for use in validated modules, in a way an independent lab has signed off with an "LGTM".

You generally want to layer your compliance (FIPS, etc.) with actual assurance practices.


And the people who repeat such statements uncritically to their reports will also get mildly annoyed when they have no Earthly clue what that actually means.


> "Bloats record sizes"

> - ECC sigs can be sent in a single packet.

It's 2026. If you're deploying a cryptosystem and not considering post-quantum in your analysis, you'd best have a damn good reason.

ECC sigs might be small, but the world will be moving to ML-DSA-44 in the near future. That needs to be in your calculus.
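
For a rough sense of the delta (sizes are the published figures from RFC 8032 and FIPS 204; 1232 bytes is the commonly recommended DNS-over-UDP payload ceiling):

    ED25519_SIG = 64         # signature bytes, RFC 8032
    ML_DSA_44_SIG = 2420     # signature bytes, FIPS 204
    UDP_SAFE_PAYLOAD = 1232  # common EDNS0 buffer recommendation

    for name, size in (("Ed25519", ED25519_SIG), ("ML-DSA-44", ML_DSA_44_SIG)):
        print(f"{name}: {size} B, fits one UDP response: {size <= UDP_SAFE_PAYLOAD}")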


True, but DNSSEC doesn't need to worry about forward secrecy and it doesn't need quantum protection until someone can start breaking keys in under a year. Hopefully we will find more efficient PQC by then.


People tried to move DNSSEC from RSA to ECC more than a decade ago. How'd that migration go? If you like, I can give you APNIC's answer.


RSA is still fine given that you can't break it in a year and we aren't worried about forward secrecy.

Also, I worked for a DNS company. People stopped caring about ultra-low latency first connect times back in the 90s.

You are clearly very proud of your work devaluing DNSSEC. But pointing to lack of adoption doesn't make your arguments valid.


> People stopped caring about ultra-low latency first connect times back in the 90s.

They did? That's certainly going to be news to the people at Google, Mozilla, Cloudflare, etc. who put enormous amounts of effort into building 0-RTT into TLS 1.3 and QUIC.


I did a large data analysis of DNS caching times across the web. Hyperscalers are the only ones who care, and they fix it with insanely long DNS cache times.


I'm not trying to just nitpick you here, but the message I was responding to said "People stopped caring about ultra-low latency first connect times back in the 90s."

It seems to me that you're saying here that (1) the hyperscalers do care but (2) it's under control. I'm not necessarily arguing with (2), but as far as the hyperscalers go: they drive a lot of traffic on their own, and in many cases they care so their users don't have to.


Sorry, the point I was trying to make is that this isn't a problem operationally.

Hyperscalers go to crazy lengths because they can measure monetary losses from milliseconds of lost view time, and it's much easier when you already have distributed cloud infrastructure anyway. But it's not really solving a problem for their customers. At least when I worked in DNS land ... latency micro-benchmarking was something of a joke. Like, sure, you can shave off a few tens of milliseconds, but it's super expensive. If you want to reduce latency, just up your TTL times and/or enable pre-fetching.
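
A quick sketch of checking where you stand, assuming the dnspython package and a placeholder domain:

    import dns.resolver

    # The answer's TTL bounds how long resolvers cache it; raising it is
    # the cheap way to cut repeat-lookup latency.
    answer = dns.resolver.resolve("example.com", "A")
    print(f"TTL: {answer.rrset.ttl} seconds")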

As a blocker for DNSSEC ... people made arguments about HTTPS overhead back in the day too. DoH also introduces latency, yet people aren't worried about that being a deal killer.


> As a blocker for DNSSEC ... people made arguments about HTTPS overhead back in the day too.

They did, and then we spent an enormous amount of time to shave off a few round trip times in TLS 1.3 and QUIC. So I'm not sure this is as strong an argument as you seem to think it is.

> DoH also introduces latency, yet people aren't worried about that being a deal killer.

Actually, it really depends. It can actually be faster. Here are Mozilla's numbers from when we first rolled out DoH. https://blog.mozilla.org/futurereleases/2019/04/02/dns-over-...

And here are some measurements from Hounsel et al. https://arxiv.org/abs/1907.08089


> They did, and then we spent an enormous amount of time to shave off a few round trip times in TLS 1.3 and QUIC.

But if it's worth doing for HTTP, why not for DNS?

> Actually, it really depends. It can actually be faster. Here are Mozilla's numbers from when we first rolled out DoH.

Oh fun!


> But if it's worth doing for HTTP, why not for DNS?

I'm sorry I don't understand your question.


The engineering effort! ECC solves the theoretical concerns around latency anyway, yet we have people arguing that it shouldn't be done. But if it was worth making HTTPS faster in order to secure HTTP, why not DNS?


Ah, I see what you're asking.

You're not going to find this answer satisfying, I suspect, but there are two main reasons browsers and big sites (that's what we're talking about) didn't bother to try to make DNSSEC faster:

1. They didn't think that DNSSEC did much in terms of security. I recognize you don't agree with this, but I'm just telling you what the thinking was.

2. Because there is substantial deployment of middleboxes which break DNSSEC, DNSSEC hard-fail by default is infeasible.

As a consequence, the easiest thing to do was just ignore DNSSEC.

You'll notice that they did think that encrypting DNS requests was important, as was protecting them from the local network, and so they put effort into DoH, which also had the benefit of being something you could do quickly and unilaterally.


I'm not unaware of this, and I agree that the WebPKI has greatly reduced global risk. New DNS tech takes a lot longer to implement, but that doesn't mean we should kill DNSSEC support like the trolls insist!

Why would Let's Encrypt not also be interested in safeguarding DNS, SSH, BGP, and all the others? Those middleboxes will have to get replaced someday, and we could push for regulation requiring that their replacements support DNSSEC. These long-term societal investments are worth making, and it would enable decentralized DNS.

I'm also concerned that none of this will happen if haters won't stop screaming, "DNSSEC doesn't do anything but ackchyually harms security!".

(@tptacek: please stay out of this comment thread)


I’ve asked elsewhere what threat models DNSSEC is solving for me.

Where are all the attacks happening targeting sites that don’t use DNSSEC?


HTTPS solved a bunch of real world threat models that were causing massive security issues. So we collectively put a bunch of engineering time into making it performant so that we could deploy it everywhere with minimal impact on UX and performance.


DNSSEC also solves a bunch of real world threat models that do cause massive security issues. I think we should put that effort into DNS as well.


Somehow they cause these massive security issues without impacting the 95%+ of sites that haven't used the protocol since it became viable to adopt a decade and a half ago.

It's just a very difficult statistic to get around! Whenever you make a claim like this, you're going to have to address the fact that basically ~every high-security organization on the Internet has chosen not to adopt the protocol, and there are basically zero stories about how this has bitten any of them.


Does it?

I run a bunch of websites personally. I have ACME-issued TLS certificates from Let's Encrypt. I monitor the Certificate Transparency logs, and have CAA records set.
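
For instance, a quick CAA sanity check (sketched with dnspython; the domain is a placeholder):

    import dns.resolver

    # Confirm the zone's CAA policy restricts which CAs may issue for it.
    for rr in dns.resolver.resolve("example.com", "CAA"):
        print(rr.flags, rr.tag.decode(), rr.value.decode())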

What's the threat model that should worry me, where DNSSEC is the right improvement?


I don't know about "valid". "Correct", maybe? "Prescient"?


I wonder if this is the start of a trend or just a one-off?


Probably a one-off? Instagram's e2ee was opt-in from the start, and meanwhile Facebook Messenger is now "e2ee for everyone". None of this is affecting the main e2ee messaging apps people use: WhatsApp, Signal, and iMessage.


TikTok recently said it wouldn't encrypt its messages either, citing user security as the reason.

