It's one infinitesimally small data point that can't be expected to move the needle.
Maybe if this becomes the standard response it would. But it seems like a ban would have the same effect as the standard response, since that would also be present in the next training runs.
I'm not sure that's true. While it obviously won't impact the general behavior of the models much, if you get a very similar situation the model will likely regurgitate something similar to this interaction.
Where's the accountability here? Good luck going after an LLM for writing defamatory blog posts.
If you wanted to make people agree that anonymity on the internet is no longer a right people should enjoy, this sort of thing is exactly the way to go about it.
There is no accountability (for now, at least)... But if you want it to delete its own blog post defaming you, you'll evidently have better luck asking nicely than by being aggressive. (Which matches my experience with LLMs. As a rule, saccharine politeness works well on them.)
I'm impressed the maintainers responded so cordially. Personally I would have gone straight for the block button.