213 changes: 213 additions & 0 deletions text/3950-ai-contribution-policy.md
**@kennytm (Member), Apr 21, 2026:**

Context: https://github.com/rust-lang/rfcs/pull/3951#issuecomment-4286674950

Do we assume this RFC 3950 is already in effect? 🤔

Additionally, assuming this RFC is merged eventually, can it be retroactively applied to all open PRs, issues, etc.?


**Reply:**

Most people would understand a new policy to apply prospectively, unless clearly stated otherwise.

- Feature Name: N/A
- Start Date: 2026-03-13
- RFC PR: [rust-lang/rfcs#3950](https://github.com/rust-lang/rfcs/pull/3950)
- Issue: N/A

## Summary
[summary]: #summary

We adopt a Rust Project contribution policy for AI-generated work. This applies to all Project spaces.
**@clarfonthey (Contributor), May 2, 2026:**

As a general rule, it would be nice to summarise the policy in the summary for the policy, and not simply summarise it as "there is a policy."



## Motivation

In the Rust Project, we've seen an increase in unwanted and unhelpful contributions where contributors used generative AI. These are frustrating and costly to reviewers in the Project. We need to find ways to reduce the incidence of these and to lower the cost of handling them.
**@alice-i-cecile, Apr 21, 2026:**

Will this policy meaningfully accomplish its goals? High-quality LLM-generated work, including from trusted contributors, still requires careful review. With a de facto endorsement of LLM-generated contributions from trusted contributors, I worry that this will worsen review shortages on net.



We hope that by stating our expectations clearly, fewer contributors will send us unhelpful things and more contributors will send us helpful ones. We hope that this policy will make decisions and communication less costly for reviewers and moderators.

## Policy design approach

People in the Rust Project have diverse — and in some cases, strongly opposed — views on generative AI and on its use. To address the problem in front of us, this policy describes only those items on which Project members agree.
**@clarfonthey (Contributor), May 1, 2026:**

I fail to see how this addresses the problem if it only proposes that which team members agree upon. I think that it can improve the situation by proposing small wins, but not fix the problem.



## Normative sections

[Normative sections]: #normative-sections

These sections are normative:

- [Contribution policy for AI-generated work]
- [Definitions, questions, and answers]
- [Normative sections]

Other sections are not normative.

## Contribution policy for AI-generated work
**@alice-i-cecile, Apr 21, 2026:**

These all seem like good rules to follow, but are all nearly impossible for reviewers or moderators to enforce.

"is prohibited" feels like the wrong framing here as a result. "Do not X" is much more compatible with rules of this nature. If you would like to pursue a permissive, anti-slop AI policy, a "we will teach people to contribute effectively and pro-socially using these tools" is a much better fit than "we will ban you based on our vibes of your initial submission".

Obviously contributors who are learning-resistant or aggressively spamming will need moderation, but that was true well before LLMs.



[Contribution policy for AI-generated work]: #contribution-policy-for-ai-generated-work

In all Rust Project spaces:
**@alice-i-cecile, Apr 21, 2026:**

IMO it's important, for the avoidance of doubt, to explicitly say "AI-generated contributions that follow this guidance are allowed by default." I believe that's the intended effect of this policy, but you really have to read between the lines to get there.



- Submitting AI-generated work when you weren't in the loop is prohibited.
- Submitting AI-generated work when you haven't checked it with care is prohibited.
- Submitting AI-generated work when you don't have reason to believe you understand it is prohibited.
- Submitting AI-generated work when you can't explain it to a reviewer is prohibited.
- Feeding reviewer questions into an AI tool and proxying the output directly back is prohibited.
**@xtqqczze, Apr 21, 2026:**

Can we simplify the wording and, since this is a normative section, use RFC 2119 terms? Something like:

Suggested change:

> In all Rust Project spaces, contributors MUST not:
>
> * Submit AI-generated work without meaningful involvement in its creation.
> * Submit AI-generated work that they have not carefully reviewed.
> * Submit AI-generated work that they do not reasonably believe they understand.
> * Submit AI-generated work that they cannot explain to a reviewer.
> * Submit AI-generated responses to reviewer feedback without independent understanding.


**Reply (Member):**

> In all Rust Project spaces, contributors MUST not:
>
> * Submit AI-generated work without meaningful involvement in its creation.

The suggested wording falls into the trap I explained in this thread: it doesn't account for submitting code that someone else wrote by responsibly using AI.


## Definitions, questions, and answers
**@xtqqczze, Apr 21, 2026:**

I think Q&A shouldn't be a normative section of the policy.


**@traviscross (Contributor Author):**

Thanks for raising this item. There are two sections that have answers. One is focused on the rationale of the policy itself and is not normative.

The other contains definitions of the terms used in the policy items and specific guidance on how the policy items are to be interpreted. The policy comprises these definitions and this guidance, so this section is normative.

If there are specific items that you do not believe should be normative, I'd be curious to hear which and the reasons for that.

**Reply:**

In the current revision, the section "Definitions, questions, and answers" is marked as normative. This conflates distinct types of material that should be treated differently.

**@apiraino (Contributor), Apr 22, 2026:**

I also raised this point (see comment), and here is Travis' answer. I am not sure that adding more meta-content to the Q&A helps a lot.

The additions that you @traviscross drafted afterwards are good and make some fundamental points clear. As a reader I would just expect a policy to give me the "READ THIS PART!!111!!ONE!" more prominently :)

**@traviscross (Contributor Author):**

Thanks. I have a revision in progress.

**Reply:**

It would also be helpful if the normative parts could be redrafted with RFC 2119 in mind.

**@traviscross (Contributor Author):**

> This conflates distinct types of material that should be treated differently.

Thanks for raising this. I've now separated these into Definitions, Applying the policy, and Guidance. Each section now starts with a description of the nature of that section and its normative effect.

There is some balance being applied here. Some of the guidance items both guide reviewers and contributors and have policy effect. If this were a legal document, I might separate things out further. It's not, so I'm prioritizing the narrative flow and avoidance of duplication by keeping these together.

> The additions that you... drafted afterwards are good and make some fundamental points clear. As a reader I would just expect a policy to give me the "READ THIS PART!!111!!ONE!" more prominently :)

Thanks. I hear this. I'm hopeful that the upfront Normative sections section — which links to each normative section — and the new descriptions at the top of each section will help with this. I've also moved a couple of the items that fell more on the meta side out of the normative part.

> It would also be helpful if the normative parts could be redrafted with RFC 2119 in mind.

I hear you, and at the same time, we don't generally write Rust RFCs in IETF RFC 2119-style. The spirit of that RFC is that it's important to be clear about what has normative force. It uses certain keywords for that (e.g., "MUST"). In this RFC, that work is done by phrases such as "is prohibited". While I'll keep the suggestion to use those keywords in mind, I'm not planning that revision at this time.


[Definitions, questions, and answers]: #definitions-questions-and-answers

### What is AI-generated work?

Work is AI-generated when agentic or generative machine-learning tools are used to directly create the work.
**@clarfonthey (Contributor), May 1, 2026:**

I find this definition woefully insufficient. It does not define any of the following:

- Work
- AI
- Generated
- Agentic
- Generative
- Machine-learning
- Tool
- Directly
- Create

And you might find this excessively nitpicky, but, no:

- Does work only mean actual contributions, like code and documentation, or metadata like comments and descriptions?
- What is the threshold for AI? LLMs? Is machine translation included? Does clippy count as an AI, and what about code it produces for me? These questions range from pointless to very relevant.
- So, if I just ask ChatGPT a question, is it being "agentic"?
- Back to machine translation, does that count as "generative"? It's just translating.
- What qualifies as machine-learning? Is code fuzzing an act of ML?
- What constitutes a tool, exactly? That's not a very nice way to talk about Claude, a real person.
- So, indirectly creating is okay?
- So, as long as it's not created, but rewritten, that's okay?

It does seem that a lot of these questions are answered later in the RFC, but then why aren't they answered here? I even go over this: we should be reducing the burden on maintainers and contributors by making a policy that's easy to understand. And that means explaining things when relevant, even if redundant.



### What's it mean to be in the loop?

To be in the loop means to be part of the discussion — to be an integral part of the creative back and forth. You were in the loop if you were there, engaged, and contributing meaningfully when the creation happened.
**@programmerjake (Member), Apr 17, 2026:**

Does this unintentionally prohibit things that were created by someone other than the contributor, where that someone else used AI tools but actually verified the output? Maybe we should add something saying that this should not be interpreted as prohibiting contributing code that you weren't directly involved in creating, because some other person responsibly created it, maybe with AI.


**Reply (Member):**

Contributing someone else's code would seem weird to me. Why isn't that other person the one submitting the PR? Surely they have a better understanding of the code and would be better placed to respond to feedback?

**@traviscross (Contributor Author), Apr 17, 2026:**

> does this unintentionally prohibit things that were created by someone other than the contributor and that someone else used AI tools but actually verified the output? Maybe we should add something saying that this should not be interpreted as prohibiting contributing code that you weren't directly involved in creating because some other person responsibly created it maybe with AI.

That's a good point. In earlier proposals, I had been carefully trying to avoid tripping over disallowing things otherwise allowed by the DCO. Tripped over that here. Thanks for catching this.

I'm not immediately certain how to draft around this. How could one be sure, when one's contribution contains (appropriately-licensed) open source code written by a third party acting at arms-length that the third-party author created the code in a way that complies with the policy? Maybe we'll have to exempt that but include the arms-length restriction so that it's not just an easy loophole. But that's getting a bit legalistic. Will think about this. Let me know if you have ideas.

**@programmerjake (Member), Apr 17, 2026:**

> Contributing someone else's code would seem weird to me. Why isn't that other person the one submitting the PR? Surely they have a better understanding of the code and would be better placed to respond to feedback?

Well, you might be contributing code you copied from somewhere because the other person wrote it a long time ago and/or isn't interested in contributing themselves. A good example is if there's some handy method in a library somewhere that you need, but you don't want a whole new dependency just for that method, e.g. promoting itertools methods to std.

**Reply (Member):**

> Maybe we'll have to exempt that but include the arms-length restriction so that it's not just an easy loophole

To exploit that loophole would require multiple people collaborating, which seems rare enough in combination with fully AI-generated code that maybe we can just leave the loophole and do something if/when it becomes a problem?

Maybe just add a sentence like so: "This does not mean you can't contribute code written by other people if it has the proper open-source licenses."

**Reply:**

> Contributing someone else's code would seem weird to me. Why isn't that other person the one submitting the PR?

This is fairly common when dealing with abandoned, salvaged PRs.

**Reply (Contributor):**

It kind of seems like this scenario is a more general instance of submitting AI-assisted work. When you submit a PR, you are vouching that you understand it and believe it's of suitable quality and suitably licensed for inclusion in the Rust project, whether that was done using generative AI or by mining 30-year-old repos for helpful open source code.

**Reply (Member):**

Yes, but you can also usually trust a widely used library to have generally correct code, so it doesn't need as thorough a review as the output of an LLM. For example, if serde were being added to std, I don't think the PR author would need to review all of serde's code in fine detail in order to be a responsible PR author. But if an equivalent amount of code were written with an LLM for a PR, I'd expect them to review it all in fine detail (actually, that's big enough that I'd expect it to be split into many PRs for easier reviewing).


### What's it mean to check something with care?

To check something with care means to treat its correctness as important to you. It means to assume that you're the last line of defense and that nobody else will catch your mistakes. It means to give it your full attention — the way you would pack a parachute that you're about to wear.
**@clarfonthey (Contributor), May 1, 2026:**

I cannot emphasise enough how patronising this sounds. I do not think that we should seriously consider a policy that thinks that we need to define what it means to care, but not what "creating work" means, on a policy about caring about the work you create.



### What's it mean to have reason to believe you understand something?

To understand something means that you have a correct mental model of what that thing is, what its purpose is, what it's doing, and how it works. This is more than we expect. You're allowed to be wrong.

But you must have *reason* to believe that you understand it. You must have put in the work to have a mental model and a personal theory of why that model is correct.

It's not enough to just have heard a theory. If you can close your eyes and map the thing out and why the thing is correct — in a way that you believe and would bet on — then you have reason to believe you understand it.
**@clarfonthey (Contributor), May 1, 2026:**

I hate going line-by-line, but this section shows a fundamental misunderstanding about how policy should be written.

You seem to have the impression that policy is based upon rigid definitions. But this could not be further from the truth. Let me share an example I had previously included on the other RFC, but which was deleted/redacted:

Wikipedia has an excellent essay on this, WP:BEANS:

> The little boy's mother was going off to the market. She worried about her son, who was always up to some mischief. She sternly admonished him, "Be good. Don't get into trouble. Don't eat all the chocolate. Don't spill all the milk. Don't throw stones at the cow. Don't fall down the well." The boy had done all of these things on previous market days. Hoping to head off new trouble, she added, "And don't stuff beans up your nose!" This was a new idea for the boy, who promptly tried it out.

The actual policy for this is WP:OPAQUE, which points out that there are a lot of valid reasons for not explaining certain violations of the rules. There are so many different facets to this, like how one of the things that people who explicitly break the rules want is acknowledgement, and not giving them that lets them know that their behaviour is unacceptable. Additionally, giving people ideas about all the cool new fun ways they can break the rules is a bad idea, as mentioned above.


The ruder quip in response to this is: so, as long as I'm clueless, it's okay?

The point here is that it does not matter whether a person genuinely believes they're being reasonable. The point is to define what reasonable is, what the consequences for being unreasonable are, and why we think that's reasonable. I've mentioned elsewhere that I've written my own (currently unshared) version of a policy, and let me just quote to explain:

> In all cases, maintainers have broad authority to reject changes if a contributor does not fully understand the code they wrote, although this depends heavily on the situation and whether they "should" have known this. For example, if you're trying to figure out a weird Windows bug that only occurs on certain CPUs on Tuesdays, you're excused for just trying things and seeing if they work. If you're rewriting code to increase performance, however, you're expected to understand why the result is an improvement, or at least have data to prove it.

Here, I cover all these cases:

- Omitted from here, I earlier point out that the goal is to reduce the burden on maintainers. This is the goal.
- What's defined as reasonable here is given with examples: for rare situations where understanding is difficult, you are not expected to be an expert, but for situations which are understandable, you are expected to understand. It may be difficult to justify performance gains but that's the kind of effort we'd like you to put in. Similarly, it is difficult to understand all bugs and maybe we can accept a few hits and misses.
- And, as mentioned, the issue here is that if you don't understand, we might reject your changes, to reduce that burden.

Note that this is arguably as vague as what you've proposed, but with the noticeable difference that it's explicitly on the reasons for policy, not the mechanisms of it. Using the same example I mentioned, it is entirely within the bounds of this policy for someone providing "performance improvements" to think they understand the mechanism of it and to be totally wrong, and have put in insufficient work to prove it. And I personally think that person should be punished with the very reasonable punishment of just turning them away and asking for them to put more work into it.

Right now I have no idea what this section's purpose even is. What does it mean to have a reason to believe you understand something: it means to have a reason to believe.



### What's it mean to be able to explain something to a reviewer?

Reviewers need to build a mental model of their own. They may want to know about yours in order to help them. You need to be able to articulate your mental model and the reasons you believe that model to be correct.

### What's it mean to proxy output directly back to a reviewer?
**@eholk (Contributor), Apr 25, 2026:**

Maybe less of an issue, but do we also want to say reviewers can’t just proxy LLM generated review comments back to the code author?



Reviewers want to have a discussion with you, not with a tool. They want to probe your mental model. When a reviewer asks you questions, we need the answers to come from you. If they come from a tool instead, then you're just a proxy.

### Does this policy ban vibecoding?

This policy bans vibecoding. Andrej Karpathy, who originated the term, [described](https://x.com/karpathy/status/1886192184808149383) *vibecoding* as:

> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists... I "Accept All" always[;] I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment [—] usually that fixes it. The code grows beyond my usual comprehension... Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away... [I]t's not really coding — I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

If you didn't read the diffs, then you can't have checked the work with care, you can't have reason to believe you understand it, and you're not in a position to explain it to a reviewer without feeding the questions to the tool and proxying the output back. If it's grown beyond your comprehension, then even reading the diffs won't help — you don't understand it, won't be able to explain it, and can't say you've checked it with care.

Violating even one of these policy items is enough to violate the policy.

<div id="does-this-policy-ban-slop"></div>

### Does this policy ban AI slop?

This policy goes further than banning AI slop. *AI slop* is unwanted, **low-quality**, AI-generated work. This policy does not consider the quality of the work. High-quality AI-generated work is still prohibited if it fails any item in the policy — e.g., because it was *vibecoded*. If you weren't in the loop, didn't check the work with care, don't have reason to believe you understand it, or can't explain it to a reviewer, then the contribution is prohibited — regardless of the quality of the work.

### Does this policy ban fully automated AI-generated contributions?

This policy bans fully automated AI-generated contributions. These are the worst of the unwanted contributions that have come our way, and each item in the policy independently bans these.

If you created the work in a fully automated way, then you weren't in the loop, you can't have checked it with care, you can't have reason to believe you understand it, and you're not in a position to explain it to a reviewer without feeding the questions to the tool and proxying the output back.

Violating even one of these policy items is enough to violate the policy.

### When contributions appear to fall short of this policy, what do reviewers do?

Reviewers may reject any contribution that falls short of this policy without detailed explanation. Simply link to the policy or paste this template:

> On initial review, unfortunately, this contribution appears to be an AI-generated work that falls short of one or more of our policies:
>
> - Submitting AI-generated work when you weren't in the loop is prohibited.
> - Submitting AI-generated work when you haven't checked it with care is prohibited.
> - Submitting AI-generated work when you don't have reason to believe you understand it is prohibited.
> - Submitting AI-generated work when you can't explain it to a reviewer is prohibited.
> - Feeding reviewer questions into an AI tool and proxying the output directly back is prohibited.
>
> For details, see [RFC 3950](https://github.com/rust-lang/rfcs/pull/3950).
**@clarfonthey (Contributor), May 1, 2026:**

It feels unprecedented to link an RFC for the policy. I would expect this to find its own place on the website by the COC at least, and I think it would be okay to link there for more info and background.


>
> We will not be reviewing this work further.
>
> While we trust that you intended to be helpful in making this contribution, these contributions do not help us. Reviewing contributions requires a lot of time and energy. Contributions such as this do not deliver enough value to justify that cost.
>
> We know this may be disappointing to hear. We're sorry about that. It pains us to reject contributions and potentially turn away well-meaning contributors. For next steps you can take, please see:
>
> - *[What should I do if my contribution was rejected under this policy?](https://github.com/rust-lang/rfcs/blob/TC/ai-contribution-policy/text/3950-ai-contribution-policy.md#what-should-i-do-if-my-contribution-was-rejected-under-this-policy)*

### Should reviewers investigate to determine if AI tools were used?

There's no need to investigate to determine if AI tools were used. If the contribution seems on its face to fall short, then just reject the contribution, link to the policy or paste the template, and, at your discretion, notify the moderators.
**@clarfonthey (Contributor), May 1, 2026:**

This section asks more questions than it answers. Okay, there's no need to investigate, so does that mean there's going to be a lot of false positives? Does that mean that there will be a lot of cases where things are let through?

To be clear, I think that these questions are not necessarily the right ones to ask, from a policy perspective. But they're the ones begged by this statement.

But, again, simply explaining the mechanism of the policy, but not the reason, begs you to find the exact boundaries of the policy intentionally, either to exploit it or just because you genuinely don't know where the lines are.



### What should I do if my contribution was rejected under this policy?

If your contribution was rejected under this policy, first, step back and honestly evaluate whether your contribution did in fact fall short. We appreciate people who are honest with themselves about this. If your contribution failed even one of the policy items above — in letter or spirit — then it fell short of this policy.
**@clarfonthey (Contributor), May 1, 2026:**

Quite honestly, from the perspective of someone being told they've broken a policy, this really, really does not instill confidence in the people enforcing it.

It very much reads as "if you broke the policy, well, think really hard about what you've done." It doesn't matter if someone did wrong; it just feels very rude and accusatory to tell them that the best thing they can do is accept they were wrong and reflect, because that's ultimately the implicit, fallback advice.

The real question is, should they have been rejected? Should the policy be changed? And this seeks to answer none of those questions or to justify the policy at all. And that's one of the many reasons why I dislike it; it just assumes from the beginning that this is the correct policy, and that if you're asking why, then you're asking the wrong question.

Like, we should not just start from the position that our policies are right. We should be confident in why and not afraid to explain it.



If your contribution fell short, reflect on what you could do better. We need contributors who put heart into their contributions — not just point a tool at our repositories. If you do want to contribute, then put a lot of care and attention into your next contribution. If you've already been banned, then reach out to the moderation team and talk about what you've learned and why you want to contribute.

If you're sure that your contribution didn't fall short but you're a new contributor, see the next item. As a new contributor, it's difficult to use these tools in a way that won't appear to reviewers as falling short. We encourage you to try again without using generative AI tools, especially for assisting in creation (rather than learning).

In other cases, please understand that we will sometimes make mistakes. Explain concisely why you believe the contribution to be correct and compatible with this policy; someone will have a look.

### As a new contributor, is it OK to use AI tools?

This policy does not prohibit anyone from using AI tools. But as a new contributor, it's a good practice to first contribute without using generative AI tools, especially for assisting in creation (rather than learning). Using these tools correctly is difficult without a firm baseline understanding. Without this understanding, it's easy to use these tools in a way that will fall short (or appear to reviewers as falling short) of this policy.
> **Review comment — @clarfonthey (May 2, 2026):**
>
> I have a couple of issues with this.
>
> First, it seems to conflate new contributors with beginners of contributing, which I think is a false dichotomy. People who are new contributors to the rust project are not necessarily clueless individuals who've never programmed before. Some of them are, and we're explicitly inclusive of those people, which is why we have plenty of beginner's guides.
>
> But I think that this approaches the perspective from the idea that people are not only clueless about using these tools, but clueless of their effects on the projects around them, and I don't really like that assumption.
>
> I think that this could potentially be worded without that connotation while still being effective at what it's doing. But then, it would have to do what I have previously described, explaining the reasons for things.
>
> To me, a big issue is that people are unaware of the burden this puts on maintainers. Explaining this to them helps them understand why we have policies against it: even if they do make good changes, they're harder to review, and so we ask them to put in the extra effort instead of us.
>
> This, does not do that at all. If anything, it just tries to convince these people that their issue is just that they don't understand. And, well, I think that simply telling someone they don't understand is rude without explaining to them what they should understand.


### What if I follow the policy but my work sounds like the output of an LLM?

This policy does not prohibit work — that otherwise complies with the policy — from merely *sounding like* the output of an LLM. But keep in mind that we want to hear from you, not from a tool, so we encourage you to speak in your own voice. A contribution that sounds like it came from an LLM will, in practice, have a higher risk of being rejected — as a false positive — by a reviewer, even if it complies with this policy.
> **Review comment — @clarfonthey (May 2, 2026)** *(thread marked resolved by kennytm)*:
>
> I don't like this sentence because, well, we explicitly have set out in the policy that violations are not verified.
>
> It just doesn't feel like the project genuinely is trying to solve an issue if the answer to "I'm getting accused of having an unauthentic voice" is "well, sorry!"
>
> I don't think that we should be pushing away contributors for their tone of voice. I think we should be having an honest discussion back and forth about what we want to see from contributors. Just turning away people for what LLMs sound like feels rude.


### What happens to me if my contributions are rejected under this policy?

If your contributions are rejected under this policy and reported to the moderators, the moderators will decide on appropriate next steps that could be as severe as banning you from the Project and all of its spaces. The moderators will consider the details of each situation when deciding on these next steps. While this RFC defines what is prohibited, it leaves the handling of violations fully to the discretion of the moderators.
> **Review comment — @clarfonthey (May 2, 2026):**
>
> This kind of feels unprecedented for a policy. Sure, the moderators have discretion in their actions, but the point of an RFC is to describe what the policy is, so refusing to even provide a suggestion feels kind of… pointless?
>
> Like, there are genuine points to be discussed here: should we just say that people's PRs are closed, no harm done, or should they have more serious restrictions? What affects these? While I know the mods will explicitly delineate this, I think that setting expectations in a policy is a crucial aspect of the policy, even if things can change based upon circumstance.


### Does this apply to PRs, issues, proposals, comments, etc.?

This policy applies to pull requests, issues, proposals in all forms, comments in all places, and all other means of contributing to the Rust Project.
> **Review comment — @clarfonthey (May 2, 2026):**
>
> This really shows why a policy needs to be written, not just a hastily-put-together bulleted-list.
>
> This is effectively one of the first definitions that should be present in the policy, and instead it's relegated to a Q&A at the bottom. While I think that having redundant Q&A is fine, I think that the fact that this policy is so segmented into underspecified sections makes it feel like the policy is posing a riddle to me to solve.
>
> When like. The entire point is about reducing the burden for maintainers! If the policy is a burden to read it's caring about the burden on maintainers but not the burden on the contributors at all.
>
> And, inevitably, it creates a burden too on maintainers since they're the ones who have to enforce it, and they need to understand it too.


### By not banning use of AI tools, does this RFC endorse them?

By not banning use of AI tools, this RFC does not endorse their use. People in the Project have diverse views on generative AI and on its use. This RFC takes no position — positive or negative — on the use of these tools beyond forbidding those things the policy prohibits.
> **Review comment — @kennytm (Apr 17, 2026):** Suggested change, from:
>
> > By not banning use of AI tools, this RFC does not endorse their use. People in the Project have diverse views on generative AI and on its use. This RFC takes no position — positive or negative — on the use of these tools beyond forbidding those things the policy prohibits.
>
> to:
>
> > Although use of AI tools is not banned, this RFC does not endorse their use. People in the Project have diverse views on generative AI and on its use. This RFC takes no position — positive or negative — on the use of these tools beyond forbidding those things the policy prohibits.


### Is this the final policy for contributions or for AI-assisted contributions?

This policy is intended to solve the problem in front of us. The world is moving quickly at the moment, and Project members are continuing to explore, investigate, learn, and discuss. Other policies may be adopted later, and this RFC intends to be easy for other policies — of any nature — to build on.

### Does this policy require disclosure of the use of generative AI tools?

This policy does not require disclosure of the use of generative AI tools. This is a complex question on which Project members have diverse views and where members are continuing to explore, investigate, learn, and discuss. Later policies may further address this.
> **Review comment — @juntyr (Apr 17, 2026):**
>
> I believe that disclosure of authorship should be required (which can go beyond AI, e.g. to acknowledge co-authors). When reviewing student work, I have found it very helpful to have a clear statement of whether AI was involved or not, since it reduces the guessing game in many cases. If someone falsely declares to have not used AI, it can also simplify moderation choices. While I would understand if a specific policy on disclosure is postponed so that the larger policy can be agreed upon more quickly, I do think disclosure should follow soon after.

> **Review comment — @xtqqczze (Apr 17, 2026):**
>
> Authorship is something that applies to a person, not tools; a LLM can generate text, but it isn’t an author.

> **Review comment — @clarfonthey (May 2, 2026):**
>
> I think that disclosure is a vital part of verifying whether work actually involved LLMs or was just someone's own effort. Sure, people can lie, but then we can litigate honesty instead of just this vague… was tool involved? Dunno, can't guess, but won't ask either.
>
> One thing that kind of shocked me when doing research into existing policies (which this policy did not do), was that disclosure was required across the board by projects with well-defined policies, and it didn't matter at all what the project's views on LLMs were. It was only the super underspecified policies that didn't include disclosure, and didn't even offer it as a suggestion.
>
> I don't think that disclosure is controversial. I think that a lot of people think it helps even if they like LLMs; it lets you know what you're working on. And, similarly, I think that we can create a disclosure policy that doesn't punish people harshly if they forget.
>
> It just feels like a misunderstanding of the situation to say that disclosure is controversial.


### Can teams adopt other policies?

This RFC adopts a policy for shared Project spaces and a baseline policy for all team spaces. It does not restrict any team from adopting policies for its own spaces that add prohibitions.

At the same time, there is a cost to having different policies across the Project: it risks surprise and confusion for contributors. By adopting a policy that represents those items on which we have wide agreement and that addresses the concrete problems we're seeing across the Project, we hope to create less need for custom policies and more certainty for contributors.
> **Review comment — @clarfonthey (May 2, 2026):**
>
> Genuinely not sure what this sentence is trying to accomplish. Like, this is kind of thing you put in the drawbacks section of an RFC when there aren't really any drawbacks and you have to include the obvious ones so it's not empty.
>
> Like, yes, it's obviously a drawback of multiple policies that they need to be concurrently followed and that's just a lot to take in. But, I think it's more important to discuss what's being done about it, than to just say it's bad to have multiple policies.
>
> Like, an active compromise being made on this policy is to leave a lot of gaps for potential team-specific policy to fill in. That leads to the drawbacks listed here, thus, it should probably be justified. I don't really see any justification, just a vague reminder that having too much to read can be confusing.


### What about public communications?

This RFC does not have any policy items focused on the public communications of the Project. But proposals for Project communications are contributions and must follow this policy. Later policies may further address this.
> **Review comment — @clarfonthey (May 2, 2026):**
>
> This is genuinely confusing to me. So, like, a blog post isn't included in this policy, but a PR for the blog is? And all "comments" on the project are, so, that kinda includes the public ones?
>
> Like, genuinely more confused by this after reading it.


### Does this policy make a distinction between new and existing contributors?

New and existing contributors are treated in the same way under this policy. All contributors — including all Project members — may only make contributions that are compatible with this policy.

At the same time, new contributors face additional challenges in using generative AI tools to produce contributions that reviewers will recognize as compatible with this policy. It's a good practice for new contributors to first work without using generative AI tools, especially for assisting in creation (rather than learning), to build the baseline understanding required.

## Other questions and answers

### Does accepting AI-generated work risk our ability to redistribute Rust?

What about the copyright situation? Since this policy does not ban AI-generated work, does that risk our ability to redistribute Rust under our license? Niko Matsakis [reports](https://nikomatsakis.github.io/rust-project-perspectives-on-ai/feb27-summary.html#the-legality-of-ai-usage):

> On this topic, the Rust Project Directors consulted the Rust Foundation's legal counsel and they did not have significant concerns about Rust accepting LLM-generated code from a legal perspective. Some courts have found that AI-generated code is not subject to copyright and it's expected that others will follow suit. Any human-contributed original expression would be owned by the human author, but if that author is the contributor (or the modifications are licensed under an open source license), the situation is no different from any human-origin contribution. However, this does not present a legal obstacle to us redistributing the code, because, as this code is not copyrighted, it can be freely redistributed. Further, while it is possible for LLMs to generate code (especially small portions) that is identical to code in the training data, outstanding litigation has not revealed that this is a significant issue, and often such portions are too small or contain such limited originality that they may not qualify for copyright protection.

> **Review comment — @miikkas (Apr 23, 2026), on lines 195–200:**
>
> To me, the following statements give the impression that the Rust project is ok with taking someone's work (for example, copyrighted under AGPL), laundering its license using an LLM, and then distributing it as a part of the compiler or standard library:
>
> > Some courts have found that AI-generated code is not subject to copyright and it's expected that others will follow suit.
>
> > However, this does not present a legal obstacle to us redistributing the code, because, as this code is not copyrighted, it can be freely redistributed.
>
> > Further, while it is possible for LLMs to generate code (especially small portions) that is identical to code in the training data, outstanding litigation has not revealed that this is a significant issue, and often such portions are too small or contain such limited originality that they may not qualify for copyright protection.
>
> It seems to me that the RFC is taking a very controversial stand here in quite strong wording.

> **Review comment:**
>
> Nit: the link gives a 404.

> **Review comment — @Diggsey (Apr 23, 2026):**
>
> This paragraph does not in any way imply that taking someone's copyrighted work and relicensing it is acceptable. It says that the danger of this happening accidentally through LLM use is not a significant risk because LLMs don't tend to reproduce large enough pieces of code to be copyrightable, unless explicitly prompted to by the user.
>
> If it's happening intentionally, then that is no different from a user submitting copyrighted code without the LLM as a middleman (ie. that's already a risk that all open source projects take on) and so the use of an LLM is irrelevant.
>
> > Some courts have found that AI-generated code is not subject to copyright and it's expected that others will follow suit.
>
> I think you may be confused by this statement: it's not saying that AI cannot output code which is subject to copyright. It's saying that new code generated by the AI is not subject to copyright (ie. the AI cannot hold copyright over something)

### Is requiring that contributors take care an acceptable policy item?
> **Review comment — @clarfonthey (May 2, 2026):**
>
> "Is being nice to each other something we should actually encode into our policy?" is a ridiculous question to ask when we have a code of conduct that explicitly encourages being nice to each other.
>
> It is already policy to be nice to each other. Asking whether we should encode that in policy is a ridiculous question to even pose.
>
> And no, I don't think "contributors taking care" is in any way distinguishable from "people being nice to each other"; the main issue is that this policy itself does nothing to draw this distinction. The main motivation for the policy was people reducing the burden on maintainers, and a big issue is that many people are unaware of the burden they create. Instead of pointing this out, the RFC just tells people they need to "take care" and tries to justify that in and of itself, instead of pointing out the real problem.
>
> It's not a negative value judgement to tell people that they're being burdensome. In fact, it's respectful, because people like to know if they're doing something wrong so they can fix it.
>
> If I were being cynical, and forgive me for being so, I would say that this RFC doesn't want to even imply that some LLM users might be burdensome simply for their LLM usage, when this is well-known by basically everyone on all sides of the for-against-LLMs argument. This tech gives you an unprecedented ability to put in a little amount of work and make a lot of work for someone else. This is a quality many tools have.
>
> I have no idea which things I've actually shared at this point, but at some point, I decided to use this analogy:
>
> Ultimately, this policy is trying to find the most diplomatic possible wording of "hey, I hate to break it to you, but leaving a massive pile of dirt on someone's desk is not nice, even if you haven't sifted the dirt and there's a 1% chance of there being gold inside." This, by most people's interpretation, would fit right in line with the code of conduct, but since slop contributions are not only so prevalent but so poorly misunderstood, it's important to clarify this particular point.
>
> In this analogy, a shovel is the tool that allows a little work for you to create a lot of work for someone else. Does that mean that all shovels are bad? No, it just means that we don't allow unrestricted shovel use in public.
>
> And that's not even a made-up analogy! It's just true.


To take care is to give something your full attention and treat its correctness as important to you. That's a meaningful distinction. As reviewers, we can tell when someone has taken care and when the person has not — there are many signs of this.
> **Review comment — @alice-i-cecile (Apr 21, 2026):**
>
> As a seasoned reviewer, I am very skeptical of the claim that reviewers can reliably tell when people have or have not taken care, especially in the context of LLM-assisted work.


At the same time, taking care is just one requirement of the policy. If a contribution is prohibited by any item in the policy, then it's prohibited by the policy. A contribution may be rejected under this policy even if we cannot tell whether the person took care.

### Is requiring that contributors have reason to believe they understand an acceptable policy item?

Even the best contributors may sometimes misunderstand their own contributions. We do not require that people actually understand the things they submit. But we expect contributors to have *good reason* to expect that they understand what they're submitting to us. This is reasonable to ask, and it's a prerequisite for a contributor being able to explain the contribution to a reviewer and have a productive conversation.
> **Review comment — @clarfonthey (May 2, 2026):**
>
> This wording makes me even less confident in the original discussion about understanding things.
>
> I think we should require people to understand the things they submit, under reasonable circumstances. Again, from my own words, the quote:
>
> > In all cases, maintainers have broad authority to reject changes if a contributor does not fully understand the code they wrote, although this depends heavily on the situation and whether they "should" have known this. For example, if you're trying to figure out a weird Windows bug that only occurs on certain CPUs on Tuesdays, you're excused for just trying things and seeing if they work. If you're rewriting code to increase performance, however, you're expected to understand why the result is an improvement, or at least have data to prove it.
>
> I think it's completely reasonable to state that people should understand things and not just think they understand them. But I also think we should be understanding of well-intentioned people who thought they understood, but didn't.
>
> This is why policies should focus on their reasons for existing, and not to just come up with some convoluted mechanics to justify those reasons without stating them. You get weird situations like this where, again, the less favourable response to this argument is "it's okay to be clueless."

> **Review comment:**
>
> Clearly stating the reasons for a policy's existence is extremely useful for both moderators and those who must follow policies to follow them accurately. It really helps model the spirit of the law.


At the same time, having reason to believe that one understands the contribution is just one requirement of the policy. If a contribution is prohibited by any item in the policy, then it's prohibited by the policy. A contribution may be rejected under this policy even if we cannot tell whether the person had good reason for that belief.

### Should the policy require care and attention proportional to that required of reviewers?

An earlier version of the draft that became this RFC stated:

> Submitting AI-generated work without exercising care and attention proportional to what you're asking of reviewers is prohibited.

Is that needed? In drafting this RFC, it came to feel redundant. In explaining what it means to check work carefully, we say that this means to check something with care, to treat its correctness as important to you, and to give it your full attention. That's exactly what it means to exercise care and attention proportional to what's being asked of a reviewer.

## Acknowledgments

Thanks to Jieyou Xu for fruitful collaboration on earlier policy drafts. Thanks to Niko Matsakis, Eric Huss, Tyler Mandry, Oliver Scherer, Jakub Beránek, Rémy Rakic, Pete LeVasseur, Eric Holk, Yosh Wuyts, David Wood, Jack Huey, Jacob Finkelman, and many others for thoughtful discussion.

All views and errors remain those of the author alone.
> **Review comment — @clarfonthey (May 2, 2026):**
>
> Genuinely respect you including this. No caveats.