1 change: 1 addition & 0 deletions src/SUMMARY.md
@@ -87,6 +87,7 @@
- [Project groups](./governance/project-groups.md)
- [Policies](./policies/index.md)
- [Crate ownership policy](./policies/crate-ownership.md)
- [LLM usage policy](./policies/llm-usage.md)
- [Infrastructure](./infra/index.md)
- [Other Installation Methods](./infra/other-installation-methods.md)
- [Archive of Rust Stable Standalone Installers](./infra/archive-stable-version-installers.md)
2 changes: 2 additions & 0 deletions src/how-to-start-contributing.md
@@ -129,6 +129,8 @@ To achieve this goal, we want to build trust and respect of each other's time an
- Please respect the reviewers' time: allow some days between reviews, only ask for reviews when your code compiles and tests pass, or give an explanation for why you are asking for a review at that stage (you can keep them in draft state until they're ready for review)
- Try to keep comments concise; don't worry about perfect written communication. Strive for clarity and getting to the point

See also our [LLM usage policy](./policies/llm-usage.md).

[^1]: Free-Open Source Project, see: https://en.wikipedia.org/wiki/Free_and_open-source_software

### Different kinds of contributions
139 changes: 139 additions & 0 deletions src/policies/llm-usage.md
oli-obk marked this conversation as resolved.
@@ -0,0 +1,139 @@
## LLM Usage Policy

For additional information about the policy itself, see [the appendix](#appendix).

### Overview
Member

While this policy hasn't made everyone happy, I think it's struck a reasonable balance that seems to have resonated with a decent number of people. I think one of the reasons for that is that it avoids too much "framing".

By "framing", I'm thinking about how we introduce the policy; how we talk about our thinking on AI as a project; and what the basis for a policy is. I think there's still a decent amount of disagreement on that and this explains why it's been a little difficult to make this more or less restrictive.

There are two dominant starting points in our discussions so far - some of the project are starting from a position of "use of these tools is unethical for various reasons, therefore I don't think we should have anything to do with them, and so my starting point in this discussion is to prohibit all use" ("the ethical framing"), and others don't fully share those ethical concerns, so their starting position is "we've always let people use whatever tools they want, and only introduced restrictions when they could be justified by an impact on the ability of the project to do its job (i.e. impact on social cohesion, quality of the toolchain, sustainability of reviews, etc.)" ("the pragmatic framing"). Starting points like these constitute the basis for any policy in the project - on what grounds we're imposing a restriction or permitting something - and this policy doesn't have much of one.

This policy focuses mostly on what we permit or restrict, and while it touches on why briefly, it doesn't say too much - so you can read it as "these are the things I'm willing to compromise on from a prohibition on everything" or as "these are the restrictions we've landed on that are justified by impacts on the project".

My concern is that without framing, we leave people to fill the gaps themselves and come to their own (potentially incorrect) conclusions about what the basis for this policy is. Within the project, that could be a problem if someone proposes a future change to this policy, because the author might have a "pragmatic framing" - seeing their amendment as justified by what we then know about the impact on the project - while a reviewer might see the amendment as an unreasonable ask for more compromise on total prohibition, as they understood the basis for this policy as coming from an "ethical framing". Outside the project, people might read this and see it as the project making an endorsement of AI because it permits some use, because they are inferring an "ethical framing", when an explanation of the "pragmatic framing" might have cleared that up.

I think the appropriate framing for the project to take with regard to policies like this is the "pragmatic framing". We're all entitled to our ethical concerns and criticisms of LLMs, and those perspectives are absolutely valid, but I believe it's very tricky to make them a solid basis for policy that encourages a diverse and varied community such as ours to work together. To give a silly example: if I had the strong stance that we should ban all contributions that didn't use British English, because that's objectively correct and there's a King who'll say so, and that was a strong ethical/moral stance for me (feel free to replace this with a stance that you find more compelling), then how do we decide what to do with that? Others will disagree, so who wins? Whose ethical stance is correct? We could litigate the actual debate - but I don't think any of us want that. We could just pick the side of the person with the concern - but then we're effectively prescribing a "correct view" for contributors to have, and the more we do that, the fewer people will agree with every concern that has become policy - alienating more people from the project. It just isn't a tenable basis for policy. It might have been ten years ago with a much younger Rust, but for almost any issue, the ship has sailed; we've already got valued contributors who disagree on most topics. I want to be part of a project where we can each have our strongly-held perspectives, as long as we treat each other with dignity and respect, and can co-exist with those who might disagree - an issue is only relevant when it affects the project's ability to do its job (as some of the concerns with AI are keen demonstrations of, though not all of them).

As such, I think the only practical basis for this policy is the "pragmatic framing", and as I've said above, I think we should include some preamble to any policy like this that describes the basis for the policy. I'm reminded of @nikomatsakis's earlier wording along these lines:

There is not yet a full consensus within the Rust org about when/how/where it is acceptable to use AI-based tools. Many members of the Rust community find value in AI; many others feel that its negative impact on society and the climate is severe enough that no use is acceptable. Still others are working out their opinion.

Despite these differences, there are many common goals we all share:

  • Building a community of deep experts in our collective projects.
  • Building an inclusive community where all feel welcome and respected.

We are therefore adopting a nuanced policy aimed at promoting these goals while allowing for individuals to differ in other ways.

I had some similar phrasing in early sketches I had from a while back:

Unfortunately, it's simply impossible to do that [say what Rust's position on AI is]. For this statement to come to a single conclusion on the practical and ethical concerns around AI, the Rust project would need to agree on any of these issues, and we don't!

That shouldn't come as any surprise. Rust is built by over 200 people from different countries, cultures, backgrounds and experiences - there are going to be very few things we all agree on; that's inevitable. Our diversity has always been our strength - Rust has always strived to be a project where we can put aside our differences, treat each other with dignity and respect, and focus on building the best programming language that we can.

I don't want this comment to expand the scope of this policy too much - it's good that it is narrow and specific and concise, but I think it'll be easier to get people on-board when we're clear about the basis for the policy, as that has implications for whether it can evolve, and also in avoiding misinterpretation. I'm not tied to any of the specific phrasing in the quotes above, but feel free to use them as a starting point if inclined to act on this comment.

Member Author

<3 I like this a lot and I think it's the core of what makes this policy work. I've made this "overview" section shorter, but added a longer "motivation and guiding principles" section towards the end, with a modified version of Niko's quote.


Using an LLM while working on `rust-lang/rust` is conditionally allowed.
However, we find it important to keep the following points in mind:

- Many people find LLM-generated code and writing deeply unpleasant to read or review.
- Many people find LLMs to be a significant aid to learning and discovery.
- LLMs are a new technology, and we are still learning how to use, moderate, and improve them.
Since we're still learning, we have chosen an intentionally conservative policy that lets us maintain the standard of quality that Rust is known for.

Therefore, the guidelines are roughly as follows:

> It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to **create**.
jyn514 marked this conversation as resolved.

> LLMs work best when used as a tool to write *better*, not *faster*.

Comment on lines +16 to +17
@joshtriplett (Member) Apr 17, 2026

Suggested change
> LLMs work best when used as a tool to write *better*, not *faster*.
> In `rust-lang/rust`, please do not use LLMs as a tool to write *faster*.

Having this as a high-level summary is offering a judgement on LLMs that feels like it isn't necessary for the policy, and makes consensus more difficult to reach. For anti-LLM folks it's saying that they work best when used to write "better", which is a point in dispute. I would also expect (but don't want to put words in people's mouths) that for pro-LLM folks the point that they don't work well when used to work faster may be in dispute.

I've tried to rephrase this in a fashion that, rather than expressing a general statement on when "LLMs work best", instead expresses what is desired *for rust-lang/rust*, as that's the scope of this policy.


Member Author

This is adapted from a quote by @ubiratansoares. This edit changes the quote beyond recognition, and I would rather remove it than edit this much.

Member

Then I think it would be best removed, on the basis that the previous line covers similar territory and seems less controversial.

@Kobzol (Member) Apr 18, 2026

Tbh I don't actually understand what this quote is supposed to mean; if anything, I would phrase it the other way around (you can use LLMs to do [things you can already do] to get them done faster, but you shouldn't use them to do things you don't already know how to do yourself).

@AndyGauge (Member) May 1, 2026

Honestly, it was the

> LLMs work best when used as a tool to write better, not faster.

that I took back to my team and reworked our approach to AI-generated code. I think that statement itself has a lot of weight.

#### Legend

- ✅ Allowed
- ❌ Banned
- ⚠️ Allowed with caveats. Must disclose that an LLM was used.
jyn514 marked this conversation as resolved.
- ℹ️ Adds additional detail to the policy. These bullets are normative.

### Rules

#### ✅ Allowed
The following are allowed.
- Asking an LLM questions about an existing codebase.
- Asking an LLM to summarize comments on an issue, PR, or RFC.
- ℹ️ This does not allow reposting the summary publicly. This only includes your own personal use.
- Asking an LLM to privately review your code or writing.
- ℹ️ This does not apply to public comments. See "review bots" under ⚠️ below.
- Writing dev-tools for your own personal use using an LLM, as long as you don't try to merge them into `rust-lang/rust`.
- Using an LLM to discover bugs, as long as you personally verify the bug, write it up yourself, and disclose that an LLM was used.
Please refer to [our guidelines for fuzzers](https://rustc-dev-guide.rust-lang.org/fuzzing.html#guidelines).
- ℹ️ This also includes reviewers who use LLMs to discover flaws in unmerged code.

#### ❌ Banned
The following are banned.
- Comments from a personal user account that are originally authored by an LLM.
- ℹ️ This also applies to issue bodies and PR descriptions.
- ℹ️ See also "machine-translation" in ⚠️ below.
- Documentation that is originally authored by an LLM.
- ℹ️ This includes non-trivial source comments, such as doc-comments or multiple paragraphs of non-doc-comments.
@joshtriplett (Member) Apr 17, 2026

Suggested change
- ℹ️ This includes non-trivial source comments, such as doc-comments or multiple paragraphs of non-doc-comments.
- ℹ️ This includes *any* doc comments, or non-trivial source comments.

Reordering this to make it clear first and foremost that "Documentation" includes any doc comments, moving "non-trivial source comments" second. This also drops the quantitative "multiple paragraphs"; some multi-paragraph comments may be trivial, and some one-sentence comments may not be.


Member Author

If you are using an LLM to write a multi-paragraph comment that is trivial, IMO that should also be banned. If you have a load-bearing single-line comment, I think that falls under "code changes authored by an LLM", although I'm not sure how to say that concisely.

- ℹ️ This includes compiler diagnostics.
@joshtriplett (Member) Apr 17, 2026

Suggested change
- ℹ️ This includes compiler diagnostics.
- ℹ️ This includes compiler diagnostics or similar user-visible output.


Member Author

We cannot be exhaustive in this policy and I think it hurts us to try.

- Code changes that are originally authored by an LLM.
Member

This feels overly restrictive in its current wording, in a way that I don't feel comfortable leaving unraised as a compiler team member.

There is some nuance here that this doesn't capture but should. Certainly, I think in general I'm happy to ban "unsolicited" code that is LLM-generated, but an outright ban on all "non-trivial" LLM-generated code is too strong. I'd like to see LLM-generated code allowed under the following strong caveats:

  • The reviewer is pre-decided, and has agreed to review LLM-generated code
    • Importantly, this does not mean a PR can be opened and then picked up by an "LLM-friendly" reviewer
  • The code is well-reviewed (meaning, that the reviewer is committing to ensuring they fully understand the code, well enough that they could easily have written it themselves; and the author has also reviewed the code)
  • Changes are "non-critical" (such as a non-compiler tool, code under a feature gate, diagnostics, etc.)

I personally think this is a pretty reasonable space to carve out for "experimentation": it doesn't subject reviewers who don't want to review LLM-generated code to unwanted reviews, it helps to ensure that code stays high-quality, and it limits the fallout of any "mistakes" in the process.


"The code is well-tested" is another valuable caveat to add here. Requiring this is much less onerous in the context of LLM-assisted code.

Member

I like it. I think it's a standard we want to hold for all contributions, but doesn't always get met. It's a nice position to have here.

Member

I'd quite like to see an explicit carve-out for teams or even individuals to do some experimentation - in specific areas or with specific maintainers, in a way that wouldn't affect maintainers who aren't interested in participating. Teams would obviously need to decide if they wanted to have such an experiment, but it would be useful input to any future revisions - e.g. "hey, we tried this in a controlled environment over here and we actually found it useful and helpful, maybe we could consider relaxing this point", etc.

- ℹ️ This does not include "trivial" changes that do not meet the [threshold of originality](https://fsfe.org/news/2025/news-20250515-01.en.html), which fall under ⚠️ below.
- ℹ️ Be cautious about PRs that consist solely of trivial changes.
See also [the compiler team's typo fix policy](https://rustc-dev-guide.rust-lang.org/contributing.html#writing-documentation:~:text=Please%20notice%20that%20we%20don%E2%80%99t%20accept%20typography%2Fspellcheck%20fixes%20to%20internal%20documentation).
- See also "learning from an LLM's solution" in ⚠️ below.
- Treating an LLM review as a sufficient condition to merge or reject a change.
LLM reviews, if enabled by a team, **must** be advisory-only.
Teams can have a policy that code can be merged without review, and they can have a policy that code must be reviewed by at least one person,
Comment on lines +51 to +52
Member

Given that this is limited to rust-lang/rust, probably better to just restrict to no LLM reviews.

Member Author

I actually really want to keep allowing LLM reviews. I think they're low-risk and give people a chance to see whether the bot catches real issues.

but they may not have a policy that an LLM review substitutes for a human review.
- ℹ️ See "review bots" in ⚠️ below.
- ℹ️ An LLM review does not substitute for self-review. Authors are expected to review their own code before posting and after each change.

#### ⚠️ Allowed with caveats
The following are decided on a case-by-case basis.
In general, new contributors will be scrutinized more heavily than existing contributors,
since they haven't yet established trust with their reviewers.

- Using an LLM to generate a solution to an issue, learning from its solution, and then rewriting it from scratch in your own style.
jyn514 marked this conversation as resolved.
Member

Of course, see my comment on the "Code changes that are originally authored by an LLM" ban, but I do like laying out this "less-restrictive" point explicitly. I would move the "asking for details about how you generated the solution" under this point, but modify it heavily.

Rather than stating something like "we need to know exactly what you said to the LLM and what model you used", I think a better approach is saying something like "You should be prepared to share the details of the direction you gave to the LLM. These may include general prompts or design documents/constraints."

I'm not sure that sharing the exact prompts or output, or the exact model does anything. What's the reasoning? I'm much more interested in what direction the author intended to take.

If the idea is to be able to "recreate" or "oversee" what the author did, that's just never going to work. This isn't something we can reasonably expect reviewers at large to do. Rather, if anything, this is something that I could see from a more mentor/mentee relationship. If it ever is at the point that a "random" reviewer wanted or needed to see this, then the PR likely just needs to be closed and further discussion should happen elsewhere before continuing.

- Using machine-translation (e.g. Google Translate) from your native language without posting your original message.
Doing so can introduce new miscommunications that weren't there originally, and prevents someone who speaks the language from providing a better translation.
- ℹ️ Posting both your original message and the translated version is always ok, but you must still disclose that machine-translation was used.
- Using an LLM as a "review bot" for PRs.
@kennytm (Member) Apr 19, 2026

Maybe I'm OOTL but I find this section situationally strange — where did the "review bot" come from?

IME AI-powered review bots that directly participate in PR discussions (esp. the "app" ones) are configured by the repository owner, but AFAIK r-l/r (which this policy applies solely to) does not have any such bots. I highly doubt a contributor will bring in their own review bot in public. So practically this has to be either

  • someone requested a review from Copilot, which maybe we can opt out of?
  • the reviewer outsourced the review work to a coding agent, which is already covered by other sections of this policy
  • at least one team actually considered enabling such review bots in the future? as this is linked previously in that "Teams can have a policy that code can be merged without review" part, but I don't think this will ever happen given the stance of this policy


Member

> I highly doubt a contributor will bring in their own review bot in public.

I wish it worked like that :( People can just trigger GitHub Copilot, or I suppose any other review bot, and let it comment on a r-l/r PR. Some people don't even do it willingly; GH does it automatically for them, as GitHub Copilot has a tendency to re-enable itself even after you disable it.

It is also not possible to opt-out of the PR author requesting a Copilot review, if I remember correctly.

@xtqqczze Apr 19, 2026

I’ve seen this behavior elsewhere on GitHub, where contributors effectively use a personal account as a kind of "review bot" to comment on PRs without approval from maintainers.

Member

> It is also not possible to opt-out of the PR author requesting a Copilot review, if I remember correctly.

Yeah, currently disabling reviews is a personal/license-owner setting; it is not possible to configure from the repository PoV 😞 but I think this is something that we may bring up with GitHub.

It may be possible to use content exclusion to blind Copilot, but I'm not sure whether this hack would have any overreaching effects (e.g. affecting private IDE usage too).
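For reference, that content-exclusion configuration is just a YAML list of path patterns in the repository's Copilot settings. A minimal sketch, assuming a `"/**"` pattern matches the whole tree (I haven't verified these exact patterns against the current GitHub docs):

```yaml
# Hypothetical repository-level Copilot content-exclusion list.
# Each entry is a path pattern whose matching files Copilot will not read.
- "/**"     # assumed pattern for "blind Copilot to the entire tree"
# - "*.rs"  # or, more narrowly, exclude only Rust sources by extension
```

If that works, it would address unsolicited reviews, but (per the caveat above) it may equally blind Copilot for contributors who do want it in their own IDEs.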

@apiraino (Contributor) Apr 20, 2026

> someone requested a review from Copilot, which maybe we can opt out of?

I think this is exactly the point of calling that out in our policy. Some people trigger a "[at]copilot review" in our repos without asking for consent. This is rude behaviour and we don't want that.

And, yes, as you point out, opting out of this "trigger" is currently only a project-wide setting, not a repository-level one, so we are checking with GitHub whether they could make this setting more fine-grained (see the discussion with the Infra team on Zulip).

Member Author

@clarfonthey I understand you are frustrated but it doesn't help to take it out on the people we're working with. Can I ask you to take a break from commenting on this RFC for a bit? Feel free to DM me with any concerns you have about the policy itself.


yeah, you're right; I deleted the comment


> I’ve seen this behavior elsewhere on GitHub, where contributors effectively use a personal account as a kind of "review bot" to comment on PRs without approval from maintainers.

Unsolicited review bots are becoming an increasing problem; for example: https://web.archive.org/web/20260426133344/https://github.com/rust-lang/rust-clippy/issues/16893#issuecomment-4321880160


Thank you for flagging, xtqqczze - the same bot has commented on 6+ issues in the rust-clippy repo, and in my case it was giving unsolicited advice in a completely derailing direction (solving a specific case I had obviously already worked around rather than the general case; rust-lang/rust-clippy#16901 (comment)).

Member

@xtqqczze both rust-lang/rust-clippy#16893 and rust-lang/rust-clippy#16901 are issues, not PRs, and that @QEEK-AI account commented spontaneously without any summoning. So I don't think these instances fall under this "Review Bot" rule (which is still "⚠️ Allowed with caveats"). At the very least these are "Comments […] authored by an LLM", which is "❌ Banned", and they are also outright "spam" that the current CoC can already handle.

- ℹ️ Review bots **must** have a separate GitHub account that marks them as an LLM.
You **must not** post (or allow a tool to post) LLM reviews verbatim on your personal account unless clearly quoted with your own personal interpretation of the bot's analysis.
- ℹ️ Review bot accounts must be blockable by individual users via the standard GitHub user-blocking mechanism. (Note that some GitHub "app" accounts post comments that look like users but cannot be blocked.)
- ℹ️ Review bots that post without being approved by a maintainer will be banned.

I'm concerned this leaves room for reviewers to trigger a review bot without consent of the author of the PR, which could alienate the PR author. If I opened a PR and it got reviewed by an LLM bot, I would probably close the PR and never try contributing to the project again. I've seen this happen in another project. I think there should be an agreement between the reviewer and PR author before triggering a review bot.

Member

"approved by a maintainer" is the key point here, if an LLM review bot is "approved by a maintainer" it means such is a public decision and should be mentioned in CONTRIBUTING.md, and that's the agreement.


An agreement among maintainers to impose LLM review bots on nonconsenting contributors would drive those contributors away.


If a reviewer really wants to use an LLM to review, they could run that LLM on their own, filter through the output to determine what is actually relevant and correct, and post in their own words about the identified problems. That doesn't require bothering a nonconsenting PR author with LLM output.

@kennytm (Member) May 3, 2026

Rephrasing LLM output is already addressed in lines 67-68.

The premise of this whole section is that somehow a bot (as a separate account, line 69) can be officially "⚠️ Allowed with caveats" (line 57) for reviewing.

If you think that a review bot account should not be allowed, even if approved by maintainers, this whole thread would be more relevant on the parent item (line 66; I've commented about this before).

P.S. I don't think this policy implies any LLM review bot account will be allowed "right now" or "soon"; I believe there must at least be an FCP.


> If a reviewer really wants to use an LLM to review, they could run that LLM on their own, filter through the output to determine what is actually relevant and correct, and post in their own words about the identified problems. That doesn't require bothering a nonconsenting PR author with LLM output.

Thinking about this further, this seems like an overall better process than having a review bot comment on a PR. There's no room for ambiguity about whether a PR author is responsible for responding to LLM output; only the reviewer who decides to use an LLM is in a position to interpret the LLM output because "Comments from a personal user account that are originally authored by an LLM" are explicitly forbidden.

- ℹ️ If a more reliable tool, such as a linter or formatter, already exists for the language you're writing, we strongly suggest using that tool instead of or in addition to the LLM.
- ℹ️ Configure LLM review tools to reduce false positives and excessive focus on trivialities, as these are common, exhausting failure modes.
- ℹ️ LLM comments **must not** be blocking; reviewers must indicate which comments they want addressed. It's ok to require a *response* to each comment but the response can be "the bot's wrong here".

I don't think it's okay to require PR authors to have to say "the bot's wrong here"; the onus should be on whoever triggers the bot to determine whether there's any validity to what the bot posted.

Member

> the onus should be on whoever triggers the bot to determine whether there's any validity to what the bot posted.

I don't see how line 73 disagrees with this. The statement "It's ok to require a response" refers to the reviewer requiring a response from the author to address the bot comment, not from the bot itself. The previous statement, "reviewers must indicate which comments they want addressed", also suggests that the reviewer has taken on the 'onus' of the bot comment. In this scenario I don't find requiring the PR author to say "the bot's wrong here" to dismiss the comment unfair to the author; in fact, having that 2nd step means the PR author is rejecting the combined analysis of the bot and the reviewer, so I'd say this is more biased against reviewers.


The current wording is a bit ambiguous and could conceivably be read as implicitly saying "it's okay to require a response". I would like to see this clarified to say explicitly that a bot's comment only needs to be responded to if a reviewer explicitly indicates that.

- In other words, reviewers must explicitly endorse an LLM comment before blocking a PR. They are responsible for their own analysis of the LLM's comment and cannot treat it as a CI failure.
- ℹ️ This does not apply to private use of an LLM for reviews; see ✅ above.

All of these **must** disclose that an LLM was used.

## Appendix

### Moderation policy
#### It's not your job to play detective
["The optimal amount of fraud is not zero"](https://www.bitsaboutmoney.com/archive/optimal-amount-of-fraud/).
Do not try to be the police for whether someone has used an LLM.
If it's clear they've broken the rules, point them to this policy; if it's borderline, report it to the mods and move on.
jyn514 marked this conversation as resolved.

#### Be honest
Conversely, lying about whether or how you've used an LLM is considered a [code of conduct](https://rust-lang.org/policies/code-of-conduct/) violation.
If you are not sure where something you would like to do falls in this policy, please talk to the [moderation team](mailto:rust-mods@rust-lang.org).
Don't try to hide it.

#### Penalties
The policies below follow the same guidelines as the code of conduct:
Violations will first result in a warning, and repeated violations may result in a ban.
- 🔨 Comments from a personal account originally authored by an LLM
- 🔨 Violations of the "Be honest" section

Other violations are left up to the discretion of reviewers and moderators.
For most cases we recommend closing and locking the PR or issue, but not escalating further.
@Darksonn (Member) Apr 23, 2026

I think it's wrong to treat all violations of AI policy as CoC violations.

When someone submits a PR they wrote with AI assistance and you don't want to merge it, the correct response is just to close the PR and explain to them why it was closed. We should not threaten such contributors with moderation warnings. There really is no reason to tack a "this is a warning and future violations may result in a ban" onto that explanation. It's an unnecessarily hostile experience to receive that.

Of course, at some point it does become a CoC issue and/or ban-worthy. For example, if someone repeatedly doesn't follow instructions from the maintainers, that's a problem and I think it's fine to ban them for that. Or if it's obviously spam from OpenClaw or whatever, then ban them as spam. That's fine. But we should not treat human contributors like that on a first violation.

As an aside on disclosure: I think it's probably right to treat lying as a CoC issue when intentional. This is why I feel somewhat uncomfortable with a disclosure requirement to begin with, though I'm okay with having one, assuming no witch hunts occur.


Member Author

Hm, I think I just phrased this poorly. The "code of conduct bit" is only meant to apply to the first two bullets, not to the paragraph on lines 103-104.

Member

Why does the code of conduct bit extend to this one specifically?

> Comments from a personal account originally authored by an LLM

If a new contributor is replying to reviews by copy/pasting from an LLM, I certainly think we should tell them to stop, but I do not think it warrants a moderation warning, for the reasons I outlined above. I do not think this is at all comparable to outright lying.

Member Author

hm, ok. I have rewritten this section a bit and removed the bullet singling out comments from a personal account; does it look better now?


Using an LLM does **not** mean it's ok to harass a contributor.
All contributors must be treated with respect.
The code of conduct applies to *all* conversations in the Rust project.

### Responsibility

Your contributions are your responsibility; you cannot place any blame on an LLM.
- ℹ️ This includes when asking people to address review comments originally authored by an LLM. See "review bots" under ⚠️ above.

### The meaning of "originally authored"

This document uses the phrase "originally authored" to mean "text that was generated by an LLM (and then possibly edited by a human)".
@xtqqczze Apr 18, 2026

I’m not comfortable with the definition of "originally authored" as written here. Authorship is something that applies to a person, not tools; an LLM can generate text, but it isn’t an author.


No amount of editing can change authorship; authorship sets the initial style and it is very hard to change once it's set.
jyn514 marked this conversation as resolved.
@joshtriplett (Member) Apr 17, 2026

Suggested change
No amount of editing can change authorship; authorship sets the initial style and it is very hard to change once it's set.
In the manner the phrase is used in this policy, no amount of editing changes how something was "originally authored"; authorship sets the initial style and it is very hard to change once it's set.

Taking a different approach here, of narrowing the focus to the phrasing in this policy, rather than trying to get people to agree with the fully general statement.



For more background about analogous reasoning, see ["What Colour are your bits?"](https://ansuz.sooke.bc.ca/entry/23).

### Non-exhaustive policy
jyn514 marked this conversation as resolved.

This policy does not aim to be exhaustive.
If you have a use of LLMs in mind that isn't on this list, judge it in the spirit of this overview:
- Usages that do not use LLMs for creation and do not show LLM output to another human are likely allowed ✅
- Usages that use LLMs for creation or show LLM output to another human are likely banned ❌

### Conditions for modification or dissolution
This policy is not set in stone, and we can evolve it as we gain more experience working with LLMs.

Minor changes, such as typo fixes, only require a normal PR approval.
Major changes, such as adding a new rule or cancelling an existing rule, require
a simple majority of members of teams using rust-lang/rust (without concerns).

This policy can be dissolved in a few ways:

- An accepted FCP by teams using rust-lang/rust.
- An objective concern, raised with evidence, about active harm the policy is having on the reputation of Rust, as decided by a leadership council FCP.