# Add contribution policy for AI-generated work #3950
- Feature Name: N/A
- Start Date: 2026-03-13
- RFC PR: [rust-lang/rfcs#3950](https://github.com/rust-lang/rfcs/pull/3950)
- Issue: N/A

## Summary
[summary]: #summary

We adopt a Rust Project contribution policy for AI-generated work. This applies to all Project spaces.
> **Contributor:** As a general rule, it would be nice to summarise the policy in the summary for the policy, and not simply summarise it as "there is a policy."

## Motivation

In the Rust Project, we've seen an increase in unwanted and unhelpful contributions where contributors used generative AI. These are frustrating and costly to reviewers in the Project. We need to find ways to reduce the incidence of these and to lower the cost of handling them.
> **Reviewer:** Will this policy meaningfully accomplish its goals? High-quality LLM-generated work, including from trusted contributors, still requires careful review. With a de facto endorsement of LLM-generated contributions from trusted contributors, I worry that this will worsen review shortages on net.

We hope that by stating our expectations clearly, fewer contributors will send us unhelpful things and more contributors will send us helpful ones. We hope that this policy will make decisions and communication less costly for reviewers and moderators.
## Policy design approach

People in the Rust Project have diverse — and in some cases, strongly opposed — views on generative AI and on its use. To address the problem in front of us, this policy describes only those items on which Project members agree.
> **Contributor:** I fail to see how this addresses the problem if it only proposes that which team members agree upon. I think that it can improve the situation by proposing small wins, but not fix the problem.

## Normative sections
[Normative sections]: #normative-sections

These sections are normative:

- [Contribution policy for AI-generated work]
- [Definitions, questions, and answers]
- [Normative sections]

Other sections are not normative.

## Contribution policy for AI-generated work
> **Reviewer:** These all seem like good rules to follow, but they are all nearly impossible for reviewers or moderators to enforce. "Is prohibited" feels like the wrong framing here as a result; "do not X" is much more compatible with rules of this nature. If you would like to pursue a permissive, anti-slop AI policy, "we will teach people to contribute effectively and pro-socially using these tools" is a much better fit than "we will ban you based on our vibes of your initial submission". Obviously contributors who are learning-resistant or aggressively spamming will need moderation, but that was true well before LLMs.
[Contribution policy for AI-generated work]: #contribution-policy-for-ai-generated-work

In all Rust Project spaces:
> **Reviewer:** IMO it's important, for the avoidance of doubt, to explicitly say "AI-generated contributions that follow this guidance are allowed by default." I believe that's the intended effect of this policy, but you really have to read between the lines to get there.

- Submitting AI-generated work when you weren't in the loop is prohibited.
- Submitting AI-generated work when you haven't checked it with care is prohibited.
- Submitting AI-generated work when you don't have reason to believe you understand it is prohibited.
- Submitting AI-generated work when you can't explain it to a reviewer is prohibited.
- Feeding reviewer questions into an AI tool and proxying the output directly back is prohibited.
> **Reviewer (on lines 39 to 45, marked resolved):** Can we simplify the wording and, since this is a normative section, use RFC 2119 terms?
>
> **Member:** The suggested wording falls into the trap I explained in this thread; it doesn't account for submitting code that someone else wrote by responsibly using AI.

## Definitions, questions, and answers
> **Reviewer (marked resolved, outdated):** I think Q&A shouldn't be a normative section of the policy.
>
> **Author:** Thanks for raising this item. There are two sections that have answers. One is focused on the rationale of the policy itself and is not normative. The other contains definitions of the terms used in the policy items and specific guidance on how the policy items are to be interpreted. The policy comprises these definitions and this guidance, so this section is normative. If there are specific items that you do not believe should be normative, I'd be curious to hear which, and the reasons for that.
>
> **Reviewer:** In the current revision, the section "Definitions, questions, and answers" is marked as normative. This conflates distinct types of material that should be treated differently.
>
> **Contributor:** I also raised this point (see comment), and here is Travis's answer. I am not sure that adding more meta-content to the Q&A helps a lot. The additions that you, @traviscross, drafted afterwards are good and make some fundamental points clear. As a reader, I would just expect a policy to give me the "READ THIS PART!!111!!ONE!" more prominently :)
>
> **Author:** Thanks. I have a revision in progress.
>
> **Reviewer:** It would also be helpful if the normative parts could be redrafted with RFC 2119 in mind.
>
> **Author:** Thanks for raising this. I've now separated these into Definitions, Applying the policy, and Guidance. Each section now starts with a description of the nature of that section and its normative effect. There is some balance being applied here. Some of the guidance items both guide reviewers and contributors and have policy effect. If this were a legal document, I might separate things out further. It's not, so I'm prioritizing narrative flow and avoidance of duplication by keeping these together.
>
> **Author:** Thanks. I hear this. I'm hopeful that the upfront "Normative sections" section, which links to each normative section, and the new descriptions at the top of each section will help with this. I've also moved a couple of the items that fell more on the meta side out of the normative part.
>
> **Author:** I hear you, and at the same time, we don't generally write Rust RFCs in IETF RFC 2119 style. The spirit of that RFC is that it's important to be clear about what has normative force. It uses certain keywords for that (e.g., "MUST"). In this RFC, that work is done by phrases such as "is prohibited". While I'll keep the suggestion to use those keywords in mind, I'm not planning that revision at this time.
[Definitions, questions, and answers]: #definitions-questions-and-answers

### What is AI-generated work?

Work is AI-generated when agentic or generative machine-learning tools are used to directly create the work.
> **Contributor:** I find this definition woefully insufficient; it does not define any of the terms it relies on. And you might find this excessively nitpicky, but, no. It does seem that a lot of these questions are answered later in the RFC, but then why aren't they answered here? I even go over this: we should be reducing the burden on maintainers and contributors by making a policy that's easy to understand. And that means explaining things when relevant, even if redundant.

### What's it mean to be in the loop?

To be in the loop means to be part of the discussion — to be an integral part of the creative back and forth. You were in the loop if you were there, engaged, and contributing meaningfully when the creation happened.
> **Member:** Does this unintentionally prohibit things that were created by someone other than the contributor, where that someone else used AI tools but actually verified the output? Maybe we should add something saying that this should not be interpreted as prohibiting contributing code that you weren't directly involved in creating because some other person responsibly created it, maybe with AI.
>
> **Member:** Contributing someone else's code would seem weird to me. Why isn't that other person the one submitting the PR? Surely they have a better understanding of the code and would be better placed to respond to feedback?
>
> **Author:** That's a good point. In earlier proposals, I had been carefully trying to avoid tripping over disallowing things otherwise allowed by the DCO. Tripped over that here. Thanks for catching this. I'm not immediately certain how to draft around this. How could one be sure, when one's contribution contains (appropriately-licensed) open source code written by a third party acting at arm's length, that the third-party author created the code in a way that complies with the policy? Maybe we'll have to exempt that but include the arm's-length restriction so that it's not just an easy loophole. But that's getting a bit legalistic. Will think about this. Let me know if you have ideas.
>
> **Member:** Well, you might be contributing code you copied from somewhere because the other person wrote it a long time ago and/or isn't interested in contributing themselves. A good example is if there's some handy method in a library somewhere that you need, but you don't want a whole new dependency just for that method.
>
> **Member:** To exploit that loophole would require multiple people collaborating, which seems rare enough in combination with fully AI-generated code that maybe we can just leave the loophole and do something if/when it becomes a problem? Maybe just add a sentence like so: "This does not mean you can't contribute code written by other people if it has the proper open-source licenses."
>
> **Reviewer:** This is fairly common when dealing with abandoned, salvaged PRs.
>
> **Contributor:** It kind of seems like this scenario is a more general instance of submitting AI-assisted work. When you submit a PR, you are vouching that you understand it and believe it's of suitable quality and suitably licensed for inclusion in the Rust project — whether that was done using generative AI or by mining 30-year-old repos for helpful open source code.
>
> **Member:** Yes, but also you can usually trust a widely used library to have generally correct code, so it doesn't need as thorough a review as the output of an LLM.

### What's it mean to check something with care?

To check something with care means to treat its correctness as important to you. It means to assume that you're the last line of defense and that nobody else will catch your mistakes. It means to give it your full attention — the way you would pack a parachute that you're about to wear.
> **Contributor:** I cannot emphasise enough how patronising this sounds. I do not think that we should seriously consider a policy that thinks we need to define what it means to care, but not what "creating work" means, in a policy about caring about the work you create.

### What's it mean to have reason to believe you understand something?

To understand something means that you have a correct mental model of what that thing is, what its purpose is, what it's doing, and how it works. This is more than we expect. You're allowed to be wrong.

But you must have *reason* to believe that you understand it. You must have put in the work to have a mental model and a personal theory of why that model is correct.

It's not enough to just have heard a theory. If you can close your eyes and map out the thing and why it is correct — in a way that you believe and would bet on — then you have reason to believe you understand it.
> **Contributor (on lines +69 to +73):** I hate going line-by-line, but this section shows a fundamental misunderstanding about how policy should be written. You seem to have the impression that policy is based upon rigid definitions. But this could not be further from the truth. Let me share an example I had previously included on the other RFC, but which was deleted/redacted: Wikipedia has an excellent essay on this, WP:BEANS.
>
> The actual policy for this is WP:OPAQUE, which points out that there are a lot of valid reasons for not explaining certain violations of the rules. There are so many different facets to this, like how one of the things that people who explicitly break the rules want is acknowledgement, and not giving them that lets them know that their behaviour is unacceptable. Additionally, giving people ideas about all the cool new fun ways they can break the rules is a bad idea, as mentioned above. The ruder quip in response to this is: so, as long as I'm clueless, it's okay? The point here is that it does not matter whether a person genuinely believes they're being reasonable. The point is to define what reasonable is, what the consequences for being unreasonable are, and why we think that's reasonable. I've mentioned elsewhere that I've written my own (currently unshared) version of a policy, and it covers all these cases.
>
> Note that my version is arguably as vague as what you've proposed, but with the noticeable difference that it's explicit on the reasons for the policy, not the mechanisms of it. Using the same example I mentioned, it is entirely within the bounds of this policy for someone providing "performance improvements" to think they understand the mechanism of it, be totally wrong, and have put in insufficient work to prove it. And I personally think that person should face the very reasonable consequence of just being turned away and asked to put more work into it. Right now I have no idea what this section's purpose even is. What does it mean to have a reason to believe you understand something? It means to have a reason to believe.

### What's it mean to be able to explain something to a reviewer?

Reviewers need to build a mental model of their own. They may want to know about yours in order to help them. You need to be able to articulate your mental model and the reasons you believe that model to be correct.

### What's it mean to proxy output directly back to a reviewer?
> **Contributor:** Maybe less of an issue, but do we also want to say reviewers can't just proxy LLM-generated review comments back to the code author?

Reviewers want to have a discussion with you, not with a tool. They want to probe your mental model. When a reviewer asks you questions, we need the answers to come from you. If they come from a tool instead, then you're just a proxy.

### Does this policy ban vibecoding?

This policy bans vibecoding. Andrej Karpathy, who originated the term, [described](https://x.com/karpathy/status/1886192184808149383) *vibecoding* as:

> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists... I "Accept All" always[;] I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment [—] usually that fixes it. The code grows beyond my usual comprehension... Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away... [I]t's not really coding — I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

If you didn't read the diffs, then you can't have checked the work with care, you can't have reason to believe you understand it, and you're not in a position to explain it to a reviewer without feeding the questions to the tool and proxying the output back. If it's grown beyond your comprehension, then even reading the diffs won't help — you don't understand it, won't be able to explain it, and can't say you've checked it with care.

Violating even one of these policy items is enough to violate the policy.

### Does this policy ban slop?

This policy goes further than banning slop. *Slop* is unwanted, **low-quality**, AI-generated work. This policy does not consider the quality of the work. High-quality AI-generated work is still prohibited if it fails any item in the policy — e.g., because it was *vibecoded*. If you weren't in the loop, didn't check the work with care, don't have reason to believe you understand it, or can't explain it to a reviewer, then the contribution is prohibited — regardless of the quality of the work.

### Does this policy ban fully automated AI-generated contributions?

This policy bans fully automated AI-generated contributions. These are the worst of the unwanted contributions that have come our way, and each item in the policy independently bans these.

If you created the work in a fully automated way, then you weren't in the loop, you can't have checked it with care, you can't have reason to believe you understand it, and you're not in a position to explain it to a reviewer without feeding the questions to the tool and proxying the output back.

Violating even one of these policy items is enough to violate the policy.

### When contributions appear to fall short of this policy, what do reviewers do?

Reviewers may reject any contribution that falls short of this policy without any explanation. A simple link to this policy is sufficient.

### Should reviewers investigate to determine if AI tools were used?

There's no need to investigate to determine if AI tools were used. If the contribution seems on its face to fall short, then just reject the contribution, link to this policy, and, at your discretion, notify the moderators.

### What should I do if my contribution was rejected under this policy?

If your contribution was rejected under this policy, first, step back and honestly evaluate whether your contribution did in fact fall short. We appreciate people who are honest with themselves about this. If your contribution failed even one of the policy items above — in letter or spirit — then it fell short of this policy.
> **Contributor:** Quite honestly, from the perspective of someone being told they've broken a policy, this really, really does not instill confidence in the people enforcing it. It very much reads as "if you broke the policy, well, think really hard about what you've done." It doesn't matter whether someone did wrong; it just feels very rude and accusatory to tell them that the best thing they can do is accept they were wrong and reflect, because that's ultimately the implicit, fallback advice. The real questions are: should they have been rejected? Should the policy be changed? And this seeks to answer neither of those questions, nor to justify the policy at all. That's one of the many reasons why I dislike it; it just assumes from the beginning that this is the correct policy, and that if you're asking why, then you're asking the wrong question. We should not just start from the position that our policies are right. We should be confident in why, and not afraid to explain it.

If your contribution fell short, reflect on what you could do better. We need contributors who put heart into their contributions — not ones who just point a tool at our repositories. If you do want to contribute, then put a lot of care and attention into your next contribution. If you've already been banned, then reach out to the moderation team and talk about what you've learned and why you want to contribute.

If you're sure that your contribution didn't fall short but you're a new contributor, see the next item. As a new contributor, it's difficult to use these tools in a way that won't appear to reviewers as falling short. We encourage you to try again without using generative AI tools, especially for authorship.

In other cases, please understand that we will sometimes make mistakes. Explain concisely why you believe the contribution to be correct and compatible with this policy; someone will have a look.

### As a new contributor, is it OK to use AI tools?

This policy does not prohibit anyone from using AI tools. But as a new contributor, it's a good practice to first contribute without using generative AI tools, especially for authorship. Using these tools correctly is difficult without a firm baseline understanding. Without this understanding, it's easy to use these tools in a way that will fall short (or appear to reviewers as falling short) of this policy.

### What if I follow the policy but my work sounds like the output of an LLM?

This policy does not prohibit work — that otherwise complies with the policy — from merely *sounding like* the output of an LLM. But keep in mind that we want to hear from you, not from a tool, so we encourage you to speak in your own voice. A contribution that sounds like it came from an LLM will, in practice, have a higher risk of being rejected — as a false positive — by a reviewer, even if it complies with this policy.
> **Contributor (marked resolved):** I don't like this sentence because, well, we have explicitly set out in the policy that violations are not verified. It just doesn't feel like the project is genuinely trying to solve an issue if the answer to "I'm getting accused of having an inauthentic voice" is "well, sorry!" I don't think that we should be pushing away contributors for their tone of voice. I think we should be having an honest discussion back and forth about what we want to see from contributors. Just turning away people for sounding like an LLM feels rude.

### Does this apply to PRs, issues, proposals, comments, etc.?

This policy applies to pull requests, issues, proposals in all forms, comments in all places, and all other means of contributing to the Rust Project.
> **Contributor:** This really shows why a policy needs to be written out, not just hastily put together as a bulleted list. This is effectively one of the first definitions that should be present in the policy, and instead it's relegated to a Q&A at the bottom. While I think that having redundant Q&A is fine, the fact that this policy is so segmented into underspecified sections makes it feel like the policy is posing a riddle for me to solve. The entire point is about reducing the burden on maintainers! If the policy is a burden to read, it cares about the burden on maintainers but not the burden on contributors at all. And, inevitably, it creates a burden on maintainers too, since they're the ones who have to enforce it, and they need to understand it as well.

### By not banning use of AI tools, does this RFC endorse them?

By not banning use of AI tools, this RFC does not endorse their use. People in the Project have diverse views on generative AI and on its use. This RFC takes no position — positive or negative — on the use of these tools beyond forbidding those things the policy prohibits.

### Is this the final policy for contributions or for AI-assisted contributions?

This policy is intended to solve the problem in front of us. The world is moving quickly at the moment, and Project members are continuing to explore, investigate, learn, and discuss. Other policies may be adopted later, and this RFC intends to be easy for other policies — of any nature — to build on.
|
||||||||||||||||||
| ### Does this policy require disclosure of the use of generative AI tools? | ||||||||||||||||||
|
|
||||||||||||||||||
| This policy does not require disclosure of the use of generative AI tools. This is a complex question on which Project members have diverse views and where members are continuing to explore, investigate, learn, and discuss. Later policies may further address this. | ||||||||||||||||||
|
Contributor

I believe that disclosure of authorship should be required (which can go beyond AI, e.g. to acknowledge co-authors). When reviewing student work, I have found it very helpful to have a clear statement of whether AI was involved or not, since it reduces the guessing game in many cases. If someone falsely declares to have not used AI, it can also simplify moderation choices.

While I would understand if a specific policy on disclosure is postponed so that the larger policy can be agreed upon more quickly, I do think disclosure should follow soon after.

Authorship is something that applies to a person, not tools; an LLM can generate text, but it isn't an author.
Contributor

I think that disclosure is a vital part of verifying whether work actually involved LLMs or was just someone's own effort. Sure, people can lie, but then we can litigate honesty instead of just this vague… was a tool involved? Dunno, can't guess, but won't ask either.

One thing that kind of shocked me when doing research into existing policies (which this policy did not do) was that disclosure was required across the board by projects with well-defined policies, and it didn't matter at all what the project's views on LLMs were. It was only the super underspecified policies that didn't include disclosure, and didn't even offer it as a suggestion.

I don't think that disclosure is controversial. I think that a lot of people think it helps even if they like LLMs; it lets you know what you're working on. And, similarly, I think that we can create a disclosure policy that doesn't punish people harshly if they forget. It just feels like a misunderstanding of the situation to say that disclosure is controversial.
### Can teams adopt other policies?

This RFC adopts a minimum policy for the Project. It does not prohibit teams from adopting more specific ones.
alice-i-cecile marked this conversation as resolved.

At the same time, there is a cost to having different policies across the Project: it risks surprise and confusion for contributors. By adopting a policy that represents those items on which we have wide agreement and that addresses the concrete problems we're seeing across the Project, we hope to create less need for custom policies and more certainty for contributors.
Contributor

Genuinely not sure what this sentence is trying to accomplish. Like, this is the kind of thing you put in the drawbacks section of an RFC when there aren't really any drawbacks and you have to include the obvious ones so it's not empty. Like, yes, it's obviously a drawback of multiple policies that they need to be concurrently followed, and that's just a lot to take in. But I think it's more important to discuss what's being done about it than to just say it's bad to have multiple policies.

Like, an active compromise being made in this policy is to leave a lot of gaps for potential team-specific policy to fill in. That leads to the drawbacks listed here; thus, it should probably be justified. I don't really see any justification, just a vague reminder that having too much to read can be confusing.
### What about public communications?

This RFC does not have any policy items focused on the public communications of the Project. But proposals for Project communications are contributions and must follow this policy. Later policies may further address this.
Contributor

This is genuinely confusing to me. So, like, a blog post isn't included in this policy, but a PR for the blog is? And all "comments" on the project are, so that kinda includes the public ones? Like, genuinely more confused by this after reading it.
### Does this policy make a distinction between new and existing contributors?

New and existing contributors are treated in the same way under this policy. All contributors — including all Project members — may only make contributions that are compatible with this policy.

At the same time, new contributors face additional challenges in using generative AI tools to produce contributions that reviewers will recognize as compatible with this policy. It's a good practice for new contributors to first work without using generative AI tools, especially for authorship, to build the baseline understanding required.
## Other questions and answers

### Is requiring that contributors take care an acceptable policy item?
Contributor

"Is being nice to each other something we should actually encode into our policy?" is a ridiculous question to ask when we have a code of conduct that explicitly encourages being nice to each other. It is already policy to be nice to each other. Asking whether we should encode that in policy is a ridiculous question to even pose. And no, I don't think "contributors taking care" is in any way distinguishable from "people being nice to each other"; the main issue is that this policy itself does nothing to draw this distinction.

The main motivation for the policy was reducing the burden on maintainers, and a big issue is that many people are unaware of the burden they create. Instead of pointing this out, the RFC just tells people they need to "take care" and tries to justify that in and of itself, instead of pointing out the real problem. It's not a negative value judgement to tell people that they're being burdensome. In fact, it's respectful, because people like to know if they're doing something wrong so they can fix it.

If I were being cynical, and forgive me for being so, I would say that this RFC doesn't want to even imply that some LLM users might be burdensome simply for their LLM usage, when this is well known by basically everyone on all sides of the for-against-LLMs argument. This tech gives you an unprecedented ability to put in a little amount of work and make a lot of work for someone else. This is a quality many tools have. I have no idea which things I've actually shared at this point, but at some point, I decided to use this analogy:

In this analogy, a shovel is the tool that allows a little work for you to create a lot of work for someone else. Does that mean that all shovels are bad? No, it just means that we don't allow unrestricted shovel use in public. And that's not even a made-up analogy! It's just true.
To take care is to give something your full attention and treat its correctness as important to you. That's a meaningful distinction. As reviewers, we can tell when someone has taken care and when the person has not — there are many signs of this.
As a seasoned reviewer, I am very skeptical of the claim that reviewers can reliably tell when people have or have not taken care, especially in the context of LLM-assisted work.
At the same time, taking care is just one requirement of the policy. If a contribution is prohibited by any item in the policy, then it's prohibited by the policy. A contribution may be rejected under this policy even if we cannot tell whether the person took care.
||||||||||||||||||
| ### Is requiring that contributors have reason to believe they understand an acceptable policy item? | ||||||||||||||||||
|
|
||||||||||||||||||
| Even the best contributors may sometimes misunderstand their own contributions. We do not require that people actually understand the things they submit. But we expect contributors to have *good reason* to expect that they understand what they're submitting to us. This is reasonable to ask, and it's a prerequisite for a contributor being able to explain the contribution to a reviewer and have a productive conversation. | ||||||||||||||||||
|
Contributor

This wording makes me even less confident in the original discussion about understanding things. I think we should require people to understand the things they submit, under reasonable circumstances. Again, from my own words, the quote:

I think it's completely reasonable to state that people should understand things and not just think they understand them. But I also think we should be understanding of well-intentioned people who thought they understood, but didn't. This is why policies should focus on their reasons for existing, and not just come up with some convoluted mechanics to justify those reasons without stating them. You get weird situations like this where, again, the less favourable response to this argument is "it's okay to be clueless."

Clearly stating the reasons for a policy's existence is extremely useful for both moderators and those who must follow policies to follow them accurately. It really helps model the spirit of the law.
At the same time, having reason to believe that one understands the contribution is just one requirement of the policy. If a contribution is prohibited by any item in the policy, then it's prohibited by the policy. A contribution may be rejected under this policy even if we cannot tell whether the person had good reason for that belief.
### Should the policy require care and attention proportional to that required of reviewers?

An earlier version of the draft that became this RFC stated:

> Submitting AI-generated work without exercising care and attention proportional to what you're asking of reviewers is prohibited.

Is that needed? In drafting this RFC, it came to feel redundant. In explaining what it means to check work carefully, we say that this means to check something with care, to treat its correctness as important to you, and to give it your full attention. That's exactly what it means to exercise care and attention proportional to what's being asked of a reviewer.
## Acknowledgments

Thanks to Jieyou Xu for fruitful collaboration on earlier policy drafts. Thanks to Niko Matsakis, Eric Huss, Tyler Mandry, Oliver Scherer, Jakub Beránek, Rémy Rakic, Pete LeVasseur, Eric Holk, Yosh Wuyts, David Wood, Jack Huey, Jacob Finkelman, and many others for thoughtful discussion.

All views and errors remain those of the author alone.
Contributor

Genuinely respect you including this. No caveats.
Context: https://github.com/rust-lang/rfcs/pull/3951#issuecomment-4286674950

Do we assume this RFC 3950 is already in effect? 🤔

Additionally, assuming this RFC is merged eventually, can it be retroactively applied to all open PRs and issues etc?
Most people would understand a new policy to apply prospectively, unless clearly stated otherwise.