Hacker News | tptacek's comments

I don't think you'll be able to provide evidence for the uneven distribution of IQ across nationalities.

The bug that caused Heartbleed was extremely obvious: read a u16 out of a packet, copy that many bytes of the source packet into the reply packet. If someone put that code in front of you in isolation you would spot it instantly (if you know C). The problem --- this is hugely the case with most memory safety bugs --- is that it's buried under a mountain of OpenSSL TLS protocol handling details. You have to keep resident in your brain what all the inputs to the function are, and follow them through the code.

That's not really what happened on this thread. Someone said something sensible and banal about vulnerability research, then someone else said do-you-even-lift-bro, and got shown up.

That's true in this particular case, but I was talking more about the general case.

And? They didn't put the bombs on your premises. Before "the service", you had bombs you didn't know about; after, you get to know about them.

But the service also tells criminals and adversaries about the bomb locations.

And? So do a variety of other services. Was it your impression that the criminals and adversaries were behind the 8 ball on this?

AI is reviving debates about vulnerability research that we thought we killed off in the 1990s.


This happens over and over in these discussions. It doesn't matter who you're citing or who's talking. People are terrified and are reacting to news reflexively.

Hi! Loved your recent post about the new era of computer security, thanks.

Thank you! Glad you liked it.

Personally, I’m tired of exaggerated claims and hype peddlers.

Edit: Frankly, accusing perceived opponents of being too afraid to see the truth is poor argumentative practice, and practically never true.


What's your point?

Almost all vulnerabilities are either direct applications of known patterns, incremental extensions of them, or chains of multiple such steps.

"No one bothered to look" is how most vulnerabilities work. Systems development produces code artifacts with compounding complexity; it is extraordinarily difficult to keep up with it manually, as you know. A solution to that problem is big news.

Static analyzers will find all possible copies of unbounded data into smaller buffers (especially when the size of the target buffer is easily deduced). They will then report them whether or not every path to that code clamps the input, which is why this approach doesn't work well in the Linux kernel in 2026.


With a capable static analyzer that is not true. In many common cases they can deduce the possible ranges of values based on branching checks along the data flow path, and if that range falls within the buffer then it does not report it.

Be specific. Which analyzer are you talking about and which specific targets are you saying they were successful at?

Intrinsa's PREfix static source code analyzer would model the execution of the C/C++ code to determine values which would cause a fault.

IIRC they were using a C/C++ compiler front end from EDG to parse C/C++ code to a form they used for the simulation/analysis.

see https://web.eecs.umich.edu/~weimerw/2006-655/reading/bush-pr... for more info.

Microsoft bought Intrinsa several years ago.


I'm sure this is very interesting work, but can you tell me what targets they've been successful surfacing exploitable vulnerabilities on, and what the experience of generating that success looked like? I'm aware of the large literature on static analysis; I've spent most of my career in vulnerability research.

PREfix wasn't designed specifically for finding exploitable bugs - it was aimed somewhere in between Purify (runtime bug detection) and being a better lint.

One of the articles/papers I recall said that the big problem for PREfix when simulating the behaviour of code was the explosion in complexity when a given function had multiple paths through it (e.g. multiple if/switch statements). PREfix had strategies to reduce the time spent in these highly complex functions.

Here's a 2004 link that discusses the limitations of PREfix's simulated analysis - https://www.microsoft.com/en-us/research/wp-content/uploads/...

The above article also talks about Microsoft's newer (for 2004) static analysis tools.

There's a Netscape engineer endorsement in a CNet article from when they first released PREfix. See https://www.cnet.com/tech/tech-industry/component-bugs-stamp...


But what was the likelihood of this bug being exploited by malicious actors?

I don't understand the question.

Yes, you can. I strongly encourage people skeptical about this, and who know at a high-level how this kind of exploitation works, to just try it. Have Claude or Codex (they have different strengths at this kind of work) set up a testing harness with Firecracker or QEMU, and then work through having it build an exploit.

It hasn't been true forever, but it has been true over the last 18 months or so.
