> This policy is not open to discussion, any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed, and any attempt to bypass this policy will result in a ban from the project.
It sounds serious and strict, but it applies only to content that's 'clearly labelled as LLM-generated'. So what about content that isn't clearly labelled? I don't know what to make of it.
My guess is that the serious tone is to avoid any possible legal issues that may arise from the inadvertent inclusion of AI-generated code. But the general motivation might be to avoid wasting the maintainers' time on reviewing confusing, sloppy submissions produced by lazy use of AI (as opposed to finely guided and well-reviewed AI code).
The very next sentence lists the penalty for lying. So you can defraud the project, but only if you can walk the walk and talk the talk well enough for them to never notice you're using an LLM. At that point it's more effort than just complying with the policy.
Please correct me if I am wrong, but couldn't OpenAI just encrypt every conversation before saving it?
With each query to the model the full conversation is fed into the model again, so I guess there is no technical need to store it unencrypted. Unless, of course, OpenAI wants to analyze the chats.
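To illustrate what I mean, here's a rough sketch of encryption at rest, decrypting only at query time. Everything here is an assumption for illustration (the Fernet cipher, the user-held key, the in-memory store); it's not anything OpenAI actually does:

```python
# Hypothetical sketch: conversations are stored only as ciphertext and
# decrypted just long enough to build the next prompt for the model.
from cryptography.fernet import Fernet

# Assumption: the key lives with the user (or a key-management service),
# not alongside the stored ciphertext.
user_key = Fernet.generate_key()
cipher = Fernet(user_key)

def save_conversation(store: dict, convo_id: str, messages: str) -> None:
    """Persist only the ciphertext; the server never stores plaintext."""
    store[convo_id] = cipher.encrypt(messages.encode("utf-8"))

def continue_conversation(store: dict, convo_id: str, new_query: str) -> str:
    """Decrypt the history, append the new query, re-encrypt, return prompt."""
    history = cipher.decrypt(store[convo_id]).decode("utf-8")
    prompt = history + "\nUser: " + new_query
    # ...the prompt would be fed to the model here...
    save_conversation(store, convo_id, prompt)
    return prompt

store: dict = {}
save_conversation(store, "c1", "User: hello\nAssistant: hi")
continue_conversation(store, "c1", "what's the weather?")
```

The point is just that decryption only needs to happen transiently, per query; nothing forces the plaintext to sit in storage between queries.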
The way I see it, the problem is that OpenAI employees can look at the chats; the fact that some NYT lawyer can also look at them doesn't make me any more uncomfortable.
Insane argumentation. It's like saying an investigator with a court order should not be allowed to look at stored copies of letters, although the company sending those letters a) looks at them regularly and b) stores those copies in the first place.
I’d also argue that Meitner and Noether deserve a mention.
Stepping outside my expertise, I'd argue Popper's description of what science and pseudoscience are is essential.
Anyway, great list!