I fed ChatGPT part of the introductory paragraph of that paper and instructed it to continue. It gave me this:
> In this light, the activity of programming becomes less an act of issuing commands and more an act of communication. The computer is, after all, an obedient but uncomprehending servant; it will execute whatever precise instructions we provide. But our colleagues, our future selves, and the broader community of readers are not so easily satisfied. They demand clarity, intention, and narrative. A program, then, should be structured not merely for execution, but for reading—its logic unfolding in a manner that mirrors the way one might naturally explain the solution to another person.
> This shift in perspective has practical consequences. When we write with exposition in mind, we are compelled to confront ambiguities that might otherwise remain hidden. Vague assumptions must be made explicit; convoluted steps must be reorganized into simpler, more digestible ideas. The discipline of explaining a program often leads to improvements in the program itself, since confusion in the prose is frequently a symptom of confusion in the underlying design.
Fascinating technology. I would not be able to immediately tell this was AI generated. So these models can, in some cases, produce text that doesn't immediately set off alarm bells. As an avid reader and writer, I'm not really sure what to make of it. I don't want to consume AI-generated art or literature because it's completely beside the point, but in the future will we even be able to tell? How do we know if anyone around us is real? Could they just be sufficiently advanced LLMs, fooling us? Am I the only human in the matrix?
Whether or not one can tell it's AI generated, one can certainly tell it's not Knuth. For one thing, the writing style is very different. Not that there haven't been other great computer scientists who might have written in this style, but it definitely doesn't sound like Knuth (there is no "being a bit cheeky," for sure). But also, the ideas it has produced are simply more of the same; a natural progression, what a typical grad student might write. Knuth always has something new and surprising to say in every paragraph; he wouldn't harp on a theme like this. He also mixes "levels" between the very high and the very low, while the paragraphs you quoted stay at a uniform level.
But of course, writing as good as a grad student's (just without the delightful idiosyncratic style of a particular person) is still very impressive and amazing, so your concerns are still valid.
Knuth's paper is 100% in the training set, so while your result is decent, it's undoubtedly tainted. But let's look at the output anyway:
> ...the activity of programming becomes less an act of issuing commands and more an act of communication
directly contradicts:
> The computer is, after all, an obedient but uncomprehending servant...
If programming becomes "an act of communication," how can an "uncomprehending servant" make heads or tails of what I'm telling it? I get that the two aren't exactly contradictory, but this implied claim would certainly warrant at least a throwaway sentence.
> When we write with exposition in mind, we are compelled to confront ambiguities that might otherwise remain hidden.
I'm being a bit nitpicky, but this is a non-sequitur; we aren't necessarily required to confront any ambiguities, even when we're trying very hard to be expository. The counter-examples I'm thinking of at the moment are contrived (amnesia, my four-year-old niece trying to tell a story, etc.) but I mainly take issue with the word "compelled."
> its logic unfolding in a manner that mirrors the way one might naturally explain the solution to another person
People explain things in all kinds of weird, circuitous ways, so while this (like all AI-generated output) seems interesting prima facie, it's actually kind of a dud when you think about it for more than five seconds.
> Vague assumptions must be made explicit; convoluted steps must be reorganized into simpler, more digestible ideas.
and
> ...ambiguities that might otherwise remain hidden...
directly contradicts:
> ...whatever precise instructions we provide
It seems the computer can somehow encode "ambiguities" and "vague assumptions" as "precise instructions." How, exactly, does that work? (Spoiler: it doesn't; it's gibberish.) On the other hand, if you read Knuth's first few paragraphs, he clearly has a point in mind; I'd even say he's being a bit wordy, but never equivocating. In fact, by the fourth paragraph, he's almost giddy with excitement.