If you want to learn category theory in a more orthodox way, a lot of people recommend Tom Leinster’s Basic Category Theory, which is free[1]. I’m going to be working through it soon; the bits I’ve skimmed look really good, if more “mathsy” than things like TFA. It also does a better job (imo) of justifying the existence of category theory as a field of study.
Disclaimer for the book, and for category theory in general: most books are written for people who already know mathematics at the undergraduate level. If you're not familiar with algebraic structures, linear algebra, or topology, be prepared to learn them along the way from other resources.
Category theory is also not that impressive unless you already understand some of the structures it is trying to unify. In this regard, the book presents, for example, the initial-object property as something that looks trivial at first glance; its interest only becomes apparent once you notice that it does not hold for arbitrary structures.
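To make the initial-object remark concrete, here is a minimal sketch (my own illustration, not from the book) in Lean 4: in the category of types, `Empty` is initial because there is exactly one function from it to any type.

```lean
-- Sketch: in the category of types, `Empty` is initial.
-- Existence: there is a map from `Empty` to any type `A`.
def fromEmpty (A : Type) : Empty → A :=
  fun e => e.elim

-- Uniqueness: any two such maps are equal, since there are
-- no elements of `Empty` on which they could disagree.
theorem fromEmpty_unique (A : Type) (f g : Empty → A) : f = g :=
  funext fun e => e.elim
```

The analogous statement fails for, say, groups with arbitrary maps between them: it is having a *unique* structure-preserving map that makes initiality a non-trivial property.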
I recently watched a video about death cap mushrooms (supposedly the deadliest), and apparently about 80% of people survive with prompt medical treatment, not that they would want to repeat the experiment. Apparently the mushrooms even taste good.
Anyway, "edible" normally means "safe to eat," not just "possible to eat" (as you are no doubt aware). IIRC, Elmer's glue is considered safe to eat, though not necessarily appetising.
When that incident first happened and was on the news it was so weird.
Did she really expect to get away with that? It seemed so obvious and her attempts to not be culpable were terrible.
Reading that, there's a strong implication she tried to poison her husband once already, and that information was not allowed into this case!
Also, apparently she inherited $2 million?! Actually, it's a little weird that she gets a page-long "Early life and background"-style section; lots of public figures have shorter ones. That's somewhat uncomfortable.
I was taught “Edible (fit to be eaten as food) vs Eatable (capable of being chewed up and swallowed)” but modern usage seems to treat them as synonyms (the former just being more pleasant to eat than the latter).
No more pedantic than the comment I was replying to. My advice would be not to use "eatable" at all because others will just think you're saying edible incorrectly.
> Even if you buy the idea that Kalshi is a prediction market whose mechanism is gambling but whose product is accurate predictions, you don't have to buy the idea that insider trading is a good thing.
Yes, and furthermore even if you’re one of those people who think insider trading in prediction markets is a good thing [1], that doesn’t somehow make it not illegal. The DoJ seems to be pursuing the theory that it constitutes wire fraud, which, since “everything is wire fraud”, seems plausible.[2] The CFTC has also claimed jurisdiction, which isn’t surprising since it claims jurisdiction over pretty much everything; if that holds, some of the commodities-trading regulations could be used as well, although US insider-trading rules around commodities are generally less stringent than those for, say, equities. In Europe I’m pretty confident the EU market-abuse regulations would cover insider trading in prediction markets and make it market abuse, since it would constitute trading on material non-public price-sensitive information. (European insider-trading rules are stricter than the US's in general.)
[1] the standard argument in favour of this is not one I agree with, but people say that the benefit is that the inside information is revealed by people acting on it in the market and that this therefore benefits the non-insiders. How much you buy into this idea depends on how much you feel that non-insiders benefit from paying insiders for this more accurate price.
Agreed. Additionally, it’s really disheartening that people do this with Erdős problems specifically. They are not major research questions in mathematics; they were intended as little conjectures that people could use as a way into serious number theory, with a small cash reward and a bit of minor fame for the person who did the work to solve one. They are not things whose solutions provide an amazing amount of insight or move the frontier of mathematics forward particularly.
So what is happening now is that people are nuking and paving the whole space with AI to prove their model can do maths, and we are all poorer for having this nice thing ruined in this way.
Number theorist Jared Lichtman says this AI proof is from "The Book", the highest compliment one can give. He also says:
> I care deeply about this problem, and I've been thinking about it for the past 7 years. I'd frequently talk to Maynard about it in our meetings, and consulted over the years with several experts (Granville, Pomerance, Sound, Fox...) and others at Oxford and Stanford. This problem was not a question of low-visibility per-se. Rather, it seems like a proof which becomes strikingly compact post-hoc, but the construction is quite special among many similar variations.
> The conjecture is 60 years old and many experts had consulted on the problem, making partial progress. I mentioned this to @thomasfbloom, and he replied: "perhaps the first Book Proof from AI?"
Terence Tao says:
> In any case, I would indeed say that this is a situation in which the AI-generated paper inadvertently highlighted a tighter connection between two areas of mathematics (in this case, the anatomy of integers and the theory of Markov processes) than had previously been made explicit in the literature (though there were hints and precursors scattered therein which one can see in retrospect). That would be a meaningful contribution to the anatomy of integers that goes well beyond the solution of this particular Erdos problem.
- The site's maintainer previously posted the following on X on 2025/10/17:
> Hi, as the owner/maintainer of http://erdosproblems.com, this is a dramatic misrepresentation. GPT-5 found references, which solved these problems, that I personally was unaware of. The 'open' status only means I personally am unaware of a paper which solves it. [1]
> GPT-5 has been a very useful tool in searching the literature, and this has been a valuable addition to the website. Its literature searching ability is already useful and impressive enough, no need to describe it as something it's not! [2]
How we got git: CVS was totally terrible[1], so Linus refused to use it. Larry McVoy persuaded Linus to use BitKeeper for the Linux kernel development effort. After trying BitKeeper for a while, Linus did the thing of writing v0 of git in a weekend, in response to what he saw as the shortcomings of BitKeeper for his workflow.[2]
But the point is there had already been VCSes that saw wide adoption, serious attempts to address shortcomings in those (Perforce and BitKeeper in particular), and then git was created to address specific shortcomings in those systems.
It wasn't born out of just a general "I wish there was something easier than rebase" whine or a desire to create the next thing. I haven't seen anything that comes close to being compelling in that respect, and jj falls into that bucket for me. It looks "fine": if I were forced to use it I wouldn't complain. It doesn't look materially better than git in any way whatsoever though, and articles like this which say "it has no index" make me respond with "Like ok whatever bro". It really makes no practical difference to me whether the VCS has an index.
[1] I speak as someone who maintained a CVS repo with nearly 700 active developers and >20 million lines of code. When someone made a mistake and you had to go in and edit the repo files in binary format, it was genuinely terrifying.
[2] In a cave. From a box of scraps. You get the idea.
To be fair, the "shortcomings" that spurred it on were mainly that the Samba guys (or just one of them) reverse-engineered BitKeeper, which caused the free kernel license to be pulled, which caused Linus to say "I can build my own with blackjack and pre-commit hooks". And then he did, tailoring it to his exact use case.
It gained tons of popularity mainly because of Linus being behind it; similar projects already existed when it was released.
When I tried both at the time, hg was just really slow, so I adopted git for all my personal projects because it was fast and a lot better than CVS. I imagine others did the same.
Mercurial and Git started around the same time. Linus worried BitMover could threaten Mercurial developers because Mercurial and BitKeeper were more similar.
I mean we have one extreme genius who showed promise early and remained exceptionally productive in mathematics for a long career: Leonhard Euler.
"Euler's work averages 800 pages a year from 1725 to 1783. He also wrote over 4500 letters and hundreds of manuscripts. It has been estimated that Leonhard Euler was the author of a quarter of the combined output in mathematics, physics, mechanics, astronomy, and navigation in the 18th century, while other researchers credit Euler for a third of the output in mathematics in that century"
But of course everyone is interested in the "what if" question of what might have happened had a particular person not died young:
- What if Galois hadn't died in a duel?
- What if Niels Henrik Abel hadn't died of tuberculosis?[1]
- What if Emmy Noether hadn't died of cancer so soon after she started teaching at Bryn Mawr and Princeton?
[1] This one is one of the saddest stories in maths, in my view. Abel died in his 20s basically because of extreme poverty, and two days after he died a letter arrived from one of his friends who had got him a teaching position that would have made him financially secure. Hermite said of Abel: "Abel has left mathematicians enough to keep them busy for five hundred years."
> long tradition of naming theorems after the second person after Euler to discover them.
Some of my favourite examples of this are:
- The "Lambert W" function, discovered by Euler to solve a problem Lambert couldn't solve
- "Feynman's trick" of differentiating under the integral[1]. Invented by Euler. Done by Feynman because, as he says in his autobiography, he learned it from "Advanced Calculus" by Woods. So now it's called "Feynman's trick". Like dude, it had been around for 250 years before Feynman did it.
- "Lagrange's notation" for derivatives. Yup. Euler.
- The "Riemann Zeta function". Of course discovered and first studied by Euler. Riemann extended it to complex numbers though.
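As a worked illustration of differentiating under the integral sign (my own standard textbook example, not from the thread): introduce a parameter, differentiate, and integrate back.

```latex
I(b) = \int_0^1 \frac{x^b - 1}{\ln x}\,dx
\quad\Longrightarrow\quad
I'(b) = \int_0^1 \frac{\partial}{\partial b}\frac{x^b - 1}{\ln x}\,dx
      = \int_0^1 x^b\,dx = \frac{1}{b+1}
```

Since $I(0) = 0$, integrating $I'(b)$ gives $I(b) = \ln(b+1)$, an integral that has no obvious elementary antiderivative in $x$ alone.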
Another example is William Kingdon Clifford, who also died too young, when he had excellent chances of advancing mathematics further.
James Clerk Maxwell died in the same year as Clifford (1879). Maxwell was not as young, but his death was also very premature.
Had Clifford and Maxwell not died so soon, there would have been a very good chance that the mathematical foundations of the theory of physical quantities would have been improved many decades earlier, possibly skipping over the incomplete vector theories of Gibbs and Heaviside, which, while very useful in the short term for engineering, were in the long term an impediment to the development of physics.
This is very cool work but the author is labouring under a false premise about how axiomatic systems work:
> Every Lean proof assumes the runtime is correct.
No. Every valid Lean proof establishes only a conditional claim: if the runtime, mathlib, etc. are correct, then the result holds.
Tangentially: most Lean proofs do not depend on whether the runtime has bugs like buffer overflows or is vulnerable to denial of service, because if I prove some result in Lean (without attacking the runtime), a runtime bug doesn’t affect the validity of the result in general. It does mean, however, that it’s not OK to blindly trust a proof just because it relies only on standard axioms and has no “sorry”s. You also need to check that the proof doesn’t exploit Lean itself.
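To illustrate that last point: a proof can be completely sorry-free and still untrustworthy if it smuggles in its own unsound axiom, which is why tools inspect the axiom footprint. A minimal Lean 4 example (my own, not from the article):

```lean
-- A "proof" with no `sorry` that nonetheless proves a falsehood,
-- because it postulates an unsound axiom of its own.
axiom bogus : False

theorem one_eq_two : 1 = 2 := bogus.elim

-- Inspecting the axiom footprint reveals the dependence on `bogus`;
-- a proof relying only on the standard axioms would not list it.
#print axioms one_eq_two
```

This is why "type-checks with no sorries" is necessary but not sufficient: you also have to check which axioms the proof ultimately rests on.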
[1] https://arxiv.org/pdf/1612.09375