NIU policy on contributions using AI
TL;DR
While we encourage, and are very grateful for, contributions to our software, we don’t accept any contributions generated mostly by AI.
Introduction
Generative AI tools have developed quickly over the last few years and have changed how many people write code. General-purpose tools such as ChatGPT, and dedicated code-development tools such as Cursor, mean that software development is quicker and more accessible than ever before. We expect these developments to continue, and AI agents to play a crucial role in how we create software.
Many members of the NIU use AI in their workflows, as do many of our collaborators. We are also fortunate to have a large community of contributors to our tools. However, many potential contributions are low quality: they either require a lot of work to get into shape, or must be rejected outright because they do not solve the problem at hand. In many cases, these contributions appear to be mostly, or entirely, AI-generated.
Code contributions
It can be very difficult to contribute to an open-source repository for the first time, and we are always happy to help anyone trying to learn. However, it is our policy that we will not review any code contribution that is, or appears to be, mostly written by AI tools. Such contributions can look useful superficially, but they often make poor design choices and are difficult to maintain long-term. Sometimes the AI agent produces a solution that does not solve the problem at all.
While you are absolutely allowed to use AI tools to help you write code, we expect all contributors to take responsibility for their code: you should understand every line you submit and be able to explain the reasoning behind it. Reviewers may ask you questions to aid their own understanding of your code (not to test you). If you cannot understand every line, it is unlikely that whoever reviews it will be able to either!
If you submit some code and we incorrectly tell you that we think it’s AI generated, please let us know! There are various “tells” that code has been written by AI, but these are not perfect. If we’ve made a mistake, that’s on us, and we want to correct it.
Communication
It is also tempting to use AI for general communication, whether in GitHub issues and pull requests or in our Zulip Chat. LLMs often add a lot of unnecessary text and may distort the meaning of your message. For many contributors, English is not their first language, and it is of course fine to use tools to help polish your writing. However, please do not use LLMs to generate entire messages that you send to us, in any format. We want to hear what you think, not what an LLM thinks!
Action
We hope this policy helps clarify our position. If we think that a code contribution or communication is AI-generated, we will politely refer you to this page and close the issue or pull request (if relevant). If we have made a mistake, please tell us. As these tools progress, we will likely get it wrong sometimes, and we want to know when we do. The last thing we want is to deter any real, human contributors!
In the unlikely event that our policy is repeatedly ignored (e.g. multiple AI-generated pull requests), we may block that individual from our GitHub organisation and/or Zulip.