Hacker News | dooglius's comments

Agreed; e.g. if you prove something about the real numbers, the question of how R is constructed within your axiomatic system doesn't matter

The picture isn't quite so clean in the constructive context, which is what many of these proof systems are rooted in, e.g., https://mathoverflow.net/questions/236483/difference-between...

There are questions where the abstraction of real numbers becomes leaky, and some axioms (or their absence) poke through.

https://en.wikipedia.org/wiki/Axiom_of_choice#Real_numbers


The fact that terms like Aho-Corasick, PLDI, Go, etc. are properly capitalized, even when they begin sentences, while sentences are otherwise uncapitalized, makes me think it's an explicit LLM instruction ("don't capitalize the start of sentences") rather than a personal writing style.

ChatGPT also loves Aho-Corasick and seems to overuse it as an optimization fallback idea. ChatGPT has suggested the algorithm to me, but the resulting code ended up significantly slower.

ChatGPT was heavily RL'd on competitive programming in 2025, and Aho-Corasick is a traditional algorithm in the competitive programming space.
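For context, a minimal sketch of the algorithm under discussion (my own toy implementation, not anything ChatGPT emits): it builds a trie over the patterns, computes failure links via BFS, and then scans the text once, reporting every pattern occurrence.

```python
from collections import deque

def build_automaton(patterns):
    # goto[s]: char -> next state; out[s]: patterns ending at state s; fail[s]: failure link
    goto, out, fail = [{}], [set()], [0]
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); out.append(set()); fail.append(0)
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    # BFS from the root's children to fill in failure links
    queue = deque(goto[0].values())
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]  # inherit matches from the suffix state
    return goto, out, fail

def search(text, patterns):
    """Return (start_index, pattern) for every occurrence of any pattern in text."""
    goto, out, fail = build_automaton(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits
```

For example, `sorted(search("ushers", ["he", "she", "his", "hers"]))` finds "she" at 1 and "he" and "hers" at 2. The scan is linear in the text length plus the number of matches, but the constant factors (dict lookups, failure-link chasing) are nontrivial, which may be why dropping it into code that only matches a handful of short patterns can make things slower, as noted above.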

What's with this silly "all lower case" style lately?

Jack Dorsey's layoff message last month did the same thing.

Is it some kind of "Prove you're not an AI by purposely writing like an idiot" or something?


it's been common in the unix/c/lisp worlds since before you were born

when 6-bit character codes were caps-only, people got used to monocase; then 7-bit ascii's lowercase blew in like a fresh wind.


What a condescending comment. It's too bad I can't vote you down.

not anti-capitalist, just a subtle preference away from capitalism

No, this is just what that writing style looks like. Names and acronyms are usually capitalized normally.

I keep being surprised by the magnitude of the disconnect between this place and the other circles of hell. I'd have thought the Venn diagram would have a lot more overlap.


Oh, the Venn diagram might be big; the HN population just has a lot of variance, I think, and is less of a community per se. I don't doubt what you're saying, though in the grand scheme of things, I think the "too lazy to hit shift" population dwarfs any of these groups.

Yeah, I can agree with the variance. Except that the "too lazy to hit shift" community is not something I would ever confuse with people writing long form articles about their regex engine research that they'll be presenting at PLDI.

The confusion might be understandable for people who have never encountered this style before, but that's still a very uncharitable take about an otherwise pretty interesting article.


It looks like that's about syntactic ambiguity, whereas the parent is talking about semantic ambiguity


Is Z3 competitive in SAT competitions? My impression was that it is popular due to the theories, the Python API, and the level of support from MSR.


Funnily, this was precisely the question I had after posting this (and the topic of an LLM disagreement discussed in another thread). Turns out not, but sibling comment is another confounding factor.


Do you have reason to believe that you have a reliable way in these cases of determining whether the comment is generated?


Having been reading generated comments almost daily for over three years now, I have a pretty good sense of it. There's a bunch of signals: how new the account is; how the comments look visually (the capitalization and layout of the paragraphs, particularly when all of one user's comments are displayed in a list). Em-dashes and short, emphatic sentences make it more obvious, of course.

There are cases that are more borderline; usually when someone has used a translation service or has used an LLM to polish up a comment they wrote themselves. For these ones there's less certainty, and whilst we discourage them, we're not as rigid in our aversion to them or as eager to ban accounts that do it.

But ones that are entirely generated are still pretty easy to spot, even just from visual appearance.


The vast majority of human evolution happened in non-humans


Sure - though the tuned behaviour around turning the innate immune system up and down is probably dominated by the more recent part of that long history.


Don't take this the wrong way, but it seems like you did not actually learn to cope


Cope, yes. Thrive, no. Surviving forty-seven years alone at least counts as coping.


Let me put it this way: you do not seem to have learned to cope very well. Actually focusing on learning to cope was a big improvement for me.


Can you elaborate on your hypothesis? Would them being "still there" imply the possibility of treatment to enable their effectiveness?


What exactly is hyper-skeptical about them?


How does WSL1 do it then?

Anyway, the section you are quoting makes no claim as to the permitted granularity.

