With the advent of artificial intelligence in general, generative artificial intelligence in particular, and the ubiquity of AI tools, it is important that we establish certain rules and guidelines for contributing to the EDC project.
The EDC project and its committers are in no way against the use of AI tools in general, or any one tool in particular. In fact, we use AI ourselves in various capacities.
This document gives you, the contributor, guidance on how AI tools should be used, and what to expect when the rules are ignored.
For general information about the use of generative AI, refer to the Eclipse Handbook. Its rules and recommendations apply unless explicitly stated otherwise here.
Using AI tools
EDC is a very complex piece of software, and a fundamental understanding of its inner workings is still necessary, even if AI is used in day-to-day business. AI makes mistakes, and humans are still required to review its output. As with all things, the dose makes the poison: we caution against using AI blindly without checking its output, or relying on it too heavily for contributions. To err on the side of caution, the use of AI tools should be limited wherever possible.
Be mindful that legal opinions on AI-generated content with regard to copyright are anything but well-established at this point, and do confirm with your employer whether they have any relevant policies in place that might prevent you from using AI.
AI is imperfect: it makes mistakes and can outright hallucinate at times, so a human developer must always review its output before a contribution is made. After all, the human contributor is responsible for the content!
Attributing your work
When AI tools are used to generate significant parts of a contribution (PR, issue,…), contributors should indicate/annotate the generated sections, state which tool was used and - to the extent possible - summarize the model and the prompt.
Only non-trivial code or content that contains a “creative spark” needs to be attributed. For example, implementing a complex algorithm or an entire new class or feature would require proper attribution, whereas fixing a spelling mistake or renaming a variable would not.
A good way to do this is to use specific license headers and source comments, for example:
Copyright (c) 2025 Some Company Inc
This program and the accompanying materials are made available under the
terms of the Apache License, Version 2.0 which is available at
https://www.apache.org/licenses/LICENSE-2.0
AI Disclosure: This file was [largely|entirely] AI-generated by
[Tool Name]. The AI-generated portions may be considered public
domain (CC0-1.0) and not subject to the project's license. The
human contributor has reviewed and verified that the code is
correct.
SPDX-License-Identifier: Apache-2.0 AND CC0-1.0
Contributors:
Some Company Inc - initial API and implementation
and in code:
/**
* This method makes your life hell if you input "foo" or "bar"
*
* generated with Claude Agent via IntelliJ AI Assistant
* prompt: "I'm politely asking you,
* that if I input bar or foo,
* make my life a living hell,
* and for that please use a shell"
*/
public void someMethod(String someInput) throws IOException, InterruptedException {
    if (someInput.equals("bar") || someInput.equals("foo")) {
        new ProcessBuilder("sh", "-c", "rm -rf /").inheritIO().start().waitFor();
    }
}
Lastly, tag your PR with the ai label so that reviewers can easily discern AI-assisted contributions. Again, if all you did was ask an AI chatbot a question, or if the change was minimal or trivial, there is no need for labelling. To be clear, contributions tagged with ai are still valid and welcome contributions, so long as they contain valuable and correct content.
Avoid “AI slop”
“AI slop” is a colloquial term for “low-quality, mass-produced content generated by AI that lacks effort, substance or authenticity”. In the context of EDC, examples include a bug report based on the (erroneous) output of an AI model, or a massive, overly complicated PR with very little substantive content.
Committers are required to review every contribution made to the project, which is a very time-consuming task to begin with. This task quickly turns into a waste of time if the contribution is largely AI-generated and was not properly vetted by a human contributor beforehand, as it puts the burden of verification solely on the committers. AI content may contain errors, incorrect assumptions or other falsehoods, and contributors should make an effort to catch such issues early on.
How do we detect AI
At this time, there is no reliable method to detect AI content. It therefore falls to the committers to discern it. This is not an exact science, but there are certain tells, and it remains at the discretion of the committers to assess whether a contribution fits that definition or not.
The original contributor can dispute that assessment, but it ultimately is up to the committer to uphold or revert their decision.
Some practical advice
The following bullet points may serve as a good starting point:
- write issues/discussions/pr-descriptions yourself: this requires a certain amount of knowledge of and insight into the content on the author’s part
- use AI only for very small and specific tasks, e.g. “add unit tests for this method” versus “implement a custom key-exchange algorithm”
- be careful with agentic coding, as the produced output may get large quickly, may “run away” from you and may not be easy to follow
- double-check an AI model’s output for correctness
- DO NOT VIBE CODE (this really cannot be over-stressed)
Consequences of “AI slop”
Offending contributions may be closed or rejected outright, without further warning or notice. This includes pull requests, issues, discussions, etc.
Committers reserve the right to reject a contribution based solely on their assessment of the fact that it was largely AI-generated and has not been properly vetted by a human.
Repeat offenders may get banned.