docs(kiro): align IDE/CLI setup guidance (#0000) #7070
Conversation
Walkthrough

This PR substantially improves editor integration documentation across multiple LSP clients (Emacs, VS Code, OpenCode, Kiro), refactors Perl variable-rename logic to respect package scope, enhances Perl occurrence detection during refactoring, and expands CI tooling with intelligent base-ref resolution and test-failure extraction. Binary naming is harmonized.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
…le_watcher_debouncer_coalesces_bulk_operations (#7070)
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
crates/perl-lsp-rs/src/runtime/file_watcher_debounce.rs (1)
241-268: ⚠️ Potential issue | 🟠 Major

Make the bulk-coalescing test deterministic (avoid fixed sleep).

The test currently relies on a fixed `thread::sleep(300ms)`, which is sensitive to CI scheduling jitter, so even with the relaxed `calls <= 6` threshold it can still intermittently fail if delivery takes longer.

Consider polling until `total_uris == 50` (with a reasonable timeout), then asserting `calls <= 6` and `uris == 50`. This preserves the intent (prove coalescing happened, not 50 separate callbacks) while reducing flake risk.

Suggested patch:

```diff
@@ #[test] fn file_watcher_debouncer_coalesces_bulk_operations() { @@
     // Wait for debounce window to expire
-    thread::sleep(Duration::from_millis(300));
+    let deadline = Instant::now() + Duration::from_millis(2000);
+    loop {
+        let uris_done = total_uris.load(Ordering::SeqCst);
+        if uris_done == 50 {
+            break;
+        }
+        if Instant::now() >= deadline {
+            break;
+        }
+        thread::sleep(Duration::from_millis(10));
+    }
     let calls = call_count.load(Ordering::SeqCst);
     let uris = total_uris.load(Ordering::SeqCst);
@@
     assert!(calls <= 6, "Expected <=6 batch calls for 50 rapid changes, got {calls}");
     assert_eq!(uris, 50, "All 50 URIs should be delivered, got {uris}");
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/perl-lsp-rs/src/runtime/file_watcher_debounce.rs` around lines 241 - 268: Replace the fixed thread::sleep(...) wait with a polling loop that repeatedly checks total_uris (AtomicUsize) until it reaches 50 or a reasonable timeout elapses: after scheduling the 50 URIs with FileWatcherDebouncer::schedule, loop sleeping a short interval (e.g., 10–20ms) and break when total_uris.load(Ordering::SeqCst) == 50 or when timeout reached; if timeout is reached fail the test with a clear message, otherwise proceed to assert that call_count.load(Ordering::SeqCst) <= 6 and total_uris == 50. This keeps the rest of the test (call_count, total_uris, FileWatcherDebouncer::with_interval) intact but avoids the brittle fixed sleep.

CLAUDE.md (1)
68-76: ⚠️ Potential issue | 🟡 Minor

Fix duplicated `green-tdd-reviewed` row in the sign-off table.

In the "Sign-off labels" table, `green-tdd-reviewed` appears twice (once for the green-tdd agent, and again as a duplicate row). This will confuse routing/debugging because readers may think there are two distinct sign-offs. Remove the duplicated row or replace it with the intended label.

🛠️ Proposed patch (remove the duplicate row)

```diff
 | `architecture-reviewed` | architecture-reviewer | Design fits microcrate layering and dependency contracts |
 | `maintainer-issue-reviewed` | maintainer-issue | Issue aligns with project goals, roadmap, user base |
-| `green-tdd-reviewed` | green-tdd | Edge case and regression tests added |
 | `review-reviewed` | reviewer | Standards check passed (banned patterns, scope) |
 | `maintainer-pr-reviewed` | maintainer-pr | PR implementation fits project direction and quality bar |
 | `pr-responded` | pr-responder | Bot comments and CI failures addressed |
```

Based on learnings: "The orchestrator routes work to agents, never writes code directly." (This comment is about keeping the agent/label contract unambiguous.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CLAUDE.md` around lines 68 - 76, The sign-off table contains a duplicated label `green-tdd-reviewed`; remove the duplicate row in the "Sign-off labels" table (or replace it with the intended missing label) so each sign-off label is unique and the mapping between labels and agents (e.g., `green-tdd-reviewed` → `green-tdd`) is unambiguous.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci.yml:
- Around line 92-93: The checkout comment is inaccurate because the pr-smoke job
explicitly runs "cargo xtask ci-scope --base origin/master"; update the comment
to either state that full history is required because this job diffs explicitly
against origin/master (i.e., "Fetch full history so ci-scope can diff against
origin/master") or make it generic about requiring full history for base
resolution; locate the comment near the checkout step referenced by the pr-smoke
job and adjust the wording to match the actual command "cargo xtask ci-scope
--base origin/master".
In `@crates/perl-parser/src/refactor/import_optimizer.rs`:
- Around line 452-457: The code currently runs MODULE_USAGE_RE against the full
stripped file text (stripped after STRING_LITERAL_RE, REGEX_LITERAL_RE,
COMMENT_RE) which causes matches inside actual import/use lines and yields false
missing-imports; create a new string (e.g., non_import_content) by removing or
blanking import declarations before scanning (remove lines or spans that match
import patterns such as leading "use " / "pub use " / "extern crate" / "use::<"
etc.), then run for caps in MODULE_USAGE_RE.captures_iter(&non_import_content)
to build usage_map; update the loop that references MODULE_USAGE_RE so it uses
this non-import source instead of &stripped (keep STRING_LITERAL_RE,
REGEX_LITERAL_RE, COMMENT_RE steps intact and only change the input to
MODULE_USAGE_RE).
In `@crates/perl-parser/src/refactor/workspace_refactor.rs`:
- Around line 1023-1035: The loop in find_package_at_offset scans the raw string
`before` for the substring "package " and can match it inside comments/strings
or mid-identifiers; change the logic to only accept real declarations by
verifying the match is a standalone token and not inside a comment/string: after
locating `found` compute the index of the character before `package` and ensure
it is either start-of-file or a delimiter (whitespace, newline, semicolon, '{',
'}', or other non-identifier char), and also perform a minimal comment/string
guard around `package` (e.g., scan backwards for unclosed quotes or comment
markers or use an existing tokenizer) before computing `package_start` and
setting `package = Some(candidate.to_string())`; update references in the loop
(variables `before`, `search_pos`, `package_start`, `candidate`) accordingly so
only declaration occurrences are accepted.
- Around line 873-875: The pre-filter check using
is_document_in_package_scope(&doc.text, &package_name) prematurely skips
documents that contain the target package later in the file; update the loop so
this check is skipped when running in fallback mode (i.e., only apply the
continue when NOT in fallback). Locate the loop in workspace_refactor.rs that
currently does if !is_document_in_package_scope(&doc.text, &package_name) {
continue; } and change it to conditional on the fallback flag (e.g., if
!is_fallback && !is_document_in_package_scope(...) { continue; }) so fallback
searches whole files and does not drop valid matches.
In `@docs/EDITORS/VS_CODE_SETUP.md`:
- Around line 418-420: Update the troubleshooting note about "perllsp --stdio"
hanging to include a brief user instruction on how to exit when running it
manually: add a sentence after the existing explanation that users can press
Ctrl+C (SIGINT) to terminate the process, and optionally mention that Ctrl+D
will close stdin in some shells; target the paragraph that references "perllsp
--stdio" so the new guidance appears immediately after the line stating it is
waiting for framed LSP JSON-RPC input.
- Around line 63-72: The docs hardcode VERSION=0.12.4 which forces updates on
each release; change the snippet to use a clear placeholder (e.g.,
VERSION=<latest-release>) and add a one-line note directing readers to check the
GitHub Releases page for the actual version, keeping TARGET and the asset
pattern perllsp-${VERSION}-${TARGET}.tar.gz unchanged so the commands still work
when users substitute the real version.
In `@docs/how-to/EDITOR_SETUP.md`:
- Line 35: Update the "Amazon Kiro" table row so it explicitly states that Kiro
IDE (using OpenVSX extension EffortlessMetrics.perl-lsp-rs) auto-downloads the
language server and therefore does NOT require a local perllsp on PATH; keep the
existing note that Kiro CLI users must configure a custom LSP `perllsp --stdio`,
but add a clear exception phrase like "Kiro IDE (OpenVSX) auto-downloads perllsp
— no local install required" next to the EffortlessMetrics.perl-lsp-rs reference
to avoid the global prerequisite confusion.
In `@docs/reference/LIVE_SIGNALS_VS_LABELS.md`:
- Around line 69-75: Update the fenced anti-pattern code block under the "Wrong:
merge gate based on `ci-green` label" heading to include a language tag (e.g.,
change the opening fence from ``` to ```text or ```bash) so the block is
recognized by markdownlint and rendered consistently; locate the fenced block
shown as the pseudocode snippet and add the chosen language tag to the opening
fence only.
- Around line 63-66: The doc references a non-existent implementation path
`xtask/src/tasks/queue_reconciler.rs` in LIVE_SIGNALS_VS_LABELS.md; either
update that sentence to point to the actual reconciler implementation or remove
it. Search the repo for the reconciler logic (look for functions/classes that
implement queue reconciliation or label handling) and replace
`xtask/src/tasks/queue_reconciler.rs` with the correct file (e.g., if the logic
lives in `xtask/src/tasks/queue_health.rs` or
`xtask/src/tasks/queue_snapshot.rs` use that filename), or delete the reference
line if no single implementation file exists. Ensure the updated sentence still
correctly describes the behavior and references the correct symbol (the
reconciler implementation) so readers can locate the code.
In `@docs/reference/ORCHESTRATION_DOCTRINE.md`:
- Around line 57-66: The bullet "Is the only thing that authoritatively strips
labels" lacks terminal punctuation and visually merges with the following
sentence "Agents propose; the reconciler disposes"; update the reconciler
bullets so each bullet ends with a period (e.g., change "Is the only thing that
authoritatively strips labels" to end with a period) and ensure a blank line or
paragraph break exists before "Agents propose; the reconciler disposes" so the
two lines are clearly separated.
- Around line 105-123: The fenced ASCII diagram uses an untyped code fence;
update the opening fence to include a language token (e.g., change the leading
``` to ```text) for the diagram block so markdownlint rule MD040 is satisfied;
locate the ASCII diagram block labeled "Shape: gates with agents, three axes,
two layers of state" and replace its opening fence with ```text (and keep the
closing fence unchanged).
In `@docs/tutorials/DAP_USER_GUIDE.md`:
- Around line 11-12: The file's top metadata shows "Version: 0.12.x" but the
"Version History" section still only lists "0.9.x"; add a new "0.12.x" entry
under the Version History section (matching the Date 2026-04-27 from the header)
with a brief summary of the changes or release notes for 0.12.x so the header
and history stay in sync; locate the "Version: 0.12.x" header and the "Version
History" section (which currently contains the "0.9.x" entry) and insert the new
"0.12.x" subsection above or alongside the existing entries.
In `@xtask/src/tasks/ci_scope.rs`:
- Around line 750-789: The fallback order can produce different bases locally vs
CI; update resolve_base_ref to first try the remote's default branch (resolve
origin/HEAD to origin/<branch>) before the hardcoded list: call git rev-parse
--abbrev-ref origin/HEAD (or read refs/remotes/origin/HEAD) to compute a
candidate like format!("origin/{}", branch) and check it with
git_ref_exists(&candidate, root), then proceed to the existing candidates only
if that remote-default candidate is missing; keep the existing warning logic and
preserve git_ref_exists for verification.
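The re-ordering described above can be sketched as a pure candidate-list builder. `base_candidates` and `remote_default` are illustrative names, not the actual `ci_scope.rs` API; the real code would feed the resulting list through `git_ref_exists`, and `remote_default` would come from `git rev-parse --abbrev-ref origin/HEAD`:

```rust
/// Build the ordered list of base-ref candidates, trying the remote's
/// default branch before the hardcoded fallbacks.
fn base_candidates(base: &str, remote_default: Option<&str>) -> Vec<String> {
    let mut candidates = Vec::new();
    if base != "auto" {
        candidates.push(base.to_string());
    }
    // Remote default branch (e.g. "origin/main") outranks the static list.
    if let Some(branch) = remote_default {
        candidates.push(branch.to_string());
    }
    for fallback in ["origin/main", "origin/master", "main", "master", "HEAD~1"] {
        if !candidates.iter().any(|c| c == fallback) {
            candidates.push(fallback.to_string());
        }
    }
    candidates
}

fn main() {
    // A non-standard default branch now outranks origin/main.
    let c = base_candidates("auto", Some("origin/trunk"));
    assert_eq!(c[0], "origin/trunk");
    assert_eq!(c[1], "origin/main");

    // An explicit --base still wins.
    let c = base_candidates("origin/feature", Some("origin/main"));
    assert_eq!(c[0], "origin/feature");
    assert_eq!(c[1], "origin/main");
    println!("ok");
}
```

Deduplicating against the fallback list keeps the behavior identical on repositories whose default branch is already `origin/main` or `origin/master`.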
In `@xtask/src/tasks/gates.rs`:
- Around line 1044-1049: The code only calls parse_first_failure when status ==
"fail", losing parsed test failure details for cargo test runs that timed out or
otherwise exited but still produced failure output; change the condition that
sets first_failure to call parse_first_failure whenever
is_cargo_test_command(command) is true and the process produced output (e.g.,
stdout not empty) and either status == "fail" or the run timed out (use the
existing timeout/elapsed indicator), e.g. replace the if condition with one that
checks is_cargo_test_command(command) && (status == "fail" || timeout) (and
guard with stdout presence) so parse_first_failure(&stdout, exit_code) runs for
late failing tests as well.
- Around line 1460-1465: The current is_cargo_test_command assumes "test" is the
immediate next token after the cargo executable; update is_cargo_test_command
to, after detecting the cargo executable (first token matching "cargo" or ending
with "/cargo" or "\\cargo"), advance through subsequent tokens and return true
if any token equals "test" (skip flags like "+nightly" or "--locked"), otherwise
false; keep the existing path-prefixed cargo detection and use the tokens
iterator from command.split_whitespace() to scan forward for the first "test"
token instead of checking a fixed position.
- Around line 1374-1410: The code always reads the next non-empty line into
message, which is wrong for old-style panics; change the logic after parsing to:
if parse_panic_site_new_style(rest) returned Some(site) keep the existing
behavior (message is next non-empty line), but if
parse_panic_site_old_style(rest) returned Some(site) instead, extract the inline
message from rest (the single-quoted string before the comma) and set message to
that (unescape quotes as appropriate) rather than reading the following line;
finally, relax the final presence check so FirstFailure is constructed if any of
test_name, site, or message is Some (not just site), ensuring we don’t drop
cases where only message was extracted. Reference: parse_panic_site_new_style,
parse_panic_site_old_style, rest, lines, message, site, test_name, FirstFailure.
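The token-scanning variant of the `cargo test` check suggested above can be sketched as follows. This is a standalone approximation, not the exact `gates.rs` implementation, and it would also accept a literal `test` argument to some other cargo subcommand:

```rust
/// Detect `cargo ... test` even when a toolchain selector (`+nightly`) or
/// flags (`--locked`) sit between the executable and the subcommand.
fn is_cargo_test_command(command: &str) -> bool {
    let mut tokens = command.split_whitespace();
    let Some(first) = tokens.next() else {
        return false;
    };
    // Keep the existing path-prefixed cargo detection.
    let is_cargo = first == "cargo" || first.ends_with("/cargo") || first.ends_with("\\cargo");
    // Scan forward for the first bare `test` token instead of a fixed position.
    is_cargo && tokens.any(|tok| tok == "test")
}

fn main() {
    assert!(is_cargo_test_command("cargo test"));
    assert!(is_cargo_test_command("cargo +nightly --locked test -p perl-parser"));
    assert!(is_cargo_test_command("/usr/local/bin/cargo test"));
    assert!(!is_cargo_test_command("cargo build --release"));
    assert!(!is_cargo_test_command("make test"));
    println!("ok");
}
```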
---
Outside diff comments:
In `@CLAUDE.md`:
- Around line 68-76: The sign-off table contains a duplicated label
`green-tdd-reviewed`; remove the duplicate row in the "Sign-off labels" table
(or replace it with the intended missing label) so each sign-off label is unique
and the mapping between labels and agents (e.g., `green-tdd-reviewed` →
`green-tdd`) is unambiguous.
In `@crates/perl-lsp-rs/src/runtime/file_watcher_debounce.rs`:
- Around line 241-268: Replace the fixed thread::sleep(...) wait with a polling
loop that repeatedly checks total_uris (AtomicUsize) until it reaches 50 or a
reasonable timeout elapses: after scheduling the 50 URIs with
FileWatcherDebouncer::schedule, loop sleeping a short interval (e.g., 10–20ms)
and break when total_uris.load(Ordering::SeqCst) == 50 or when timeout reached;
if timeout is reached fail the test with a clear message, otherwise proceed to
assert that call_count.load(Ordering::SeqCst) <= 6 and total_uris == 50. This
keeps the rest of the test (call_count, total_uris,
FileWatcherDebouncer::with_interval) intact but avoids the brittle fixed sleep.
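The polling pattern described above can be factored into a small helper. `wait_until` is a hypothetical name, and the debouncer is replaced here by a plain worker thread so the sketch stays self-contained:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};

/// Poll `done` until it returns true or `timeout` elapses.
/// Returns true if the condition was met before the deadline.
fn wait_until(mut done: impl FnMut() -> bool, timeout: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    while !done() {
        if Instant::now() >= deadline {
            return false;
        }
        thread::sleep(Duration::from_millis(10));
    }
    true
}

fn main() {
    let total_uris = Arc::new(AtomicUsize::new(0));
    let worker_count = Arc::clone(&total_uris);
    let worker = thread::spawn(move || {
        for _ in 0..50 {
            worker_count.fetch_add(1, Ordering::SeqCst);
        }
    });

    // Deterministic: succeeds as soon as all 50 URIs are counted, and
    // fails loudly if they never arrive within the timeout.
    let ok = wait_until(|| total_uris.load(Ordering::SeqCst) == 50, Duration::from_secs(2));
    worker.join().unwrap();
    assert!(ok, "expected all 50 URIs within the timeout");
    println!("delivered {}", total_uris.load(Ordering::SeqCst));
}
```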
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 229e7ec5-a9d3-4b6a-ba4c-81aab1fdf0fe
📒 Files selected for processing (27)
.github/workflows/ci.yml
CLAUDE.md
crates/perl-lsp-rs-core/src/providers/rename/apply.rs
crates/perl-lsp-rs-core/src/providers/rename/mod.rs
crates/perl-lsp-rs/src/features/workspace_rename.rs
crates/perl-lsp-rs/src/runtime/file_watcher_debounce.rs
crates/perl-lsp-rs/src/runtime/language/rename.rs
crates/perl-parser/src/refactor/import_optimizer.rs
crates/perl-parser/src/refactor/workspace_refactor.rs
crates/perl-parser/tests/corpus_gap_tests.rs
docs/EDITORS/EMACS_SETUP.md
docs/EDITORS/KIRO_SETUP.md
docs/EDITORS/OPENCODE_SETUP.md
docs/EDITORS/VS_CODE_SETUP.md
docs/how-to/EDITOR_SETUP.md
docs/how-to/TROUBLESHOOTING.md
docs/reference/CONFIG.md
docs/reference/LIVE_SIGNALS_VS_LABELS.md
docs/reference/ORCHESTRATION_DOCTRINE.md
docs/tutorials/DAP_USER_GUIDE.md
install.sh
vscode-extension/README.md
xtask/src/main.rs
xtask/src/tasks/ci_scope.rs
xtask/src/tasks/gates.rs
xtask/src/tasks/targeted_checks.rs
xtask/tests/ci_scope_tests.rs
```yaml
# Fetch full history so ci-scope can diff against origin/master (auto-detection
# resolves this first in the fallback chain). 0 = full history.
```
The checkout note no longer matches this job.
pr-smoke still calls cargo xtask ci-scope --base origin/master on Line 127, so this workflow is not exercising the auto-detect fallback chain here. Please reword the comment to describe the explicit origin/master diff, or keep it generic about full-history being required for base resolution.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/ci.yml around lines 92 - 93, The checkout comment is
inaccurate because the pr-smoke job explicitly runs "cargo xtask ci-scope --base
origin/master"; update the comment to either state that full history is required
because this job diffs explicitly against origin/master (i.e., "Fetch full
history so ci-scope can diff against origin/master") or make it generic about
requiring full history for base resolution; locate the comment near the checkout
step referenced by the pr-smoke job and adjust the wording to match the actual
command "cargo xtask ci-scope --base origin/master".
```diff
 let stripped = STRING_LITERAL_RE.replace_all(content, " ").to_string();
 let stripped = REGEX_LITERAL_RE.replace_all(&stripped, " ").to_string();
 let stripped = COMMENT_RE.replace_all(&stripped, " ").to_string();
 let mut usage_map: BTreeMap<String, Vec<String>> = BTreeMap::new();
-for caps in usage_re.captures_iter(&stripped) {
+for caps in MODULE_USAGE_RE.captures_iter(&stripped) {
     // Only process if both capture groups matched
```
Missing-import scan should not run on raw content.
Line 452 starts from full file text, so MODULE_USAGE_RE can match module paths inside use ... lines and create false missing imports.
💡 Use non-import source for usage scanning

```diff
-let stripped = STRING_LITERAL_RE.replace_all(content, " ").to_string();
+let stripped = STRING_LITERAL_RE.replace_all(&non_use_content, " ").to_string();
 let stripped = REGEX_LITERAL_RE.replace_all(&stripped, " ").to_string();
 let stripped = COMMENT_RE.replace_all(&stripped, " ").to_string();
 let mut usage_map: BTreeMap<String, Vec<String>> = BTreeMap::new();
 for caps in MODULE_USAGE_RE.captures_iter(&stripped) {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let stripped = STRING_LITERAL_RE.replace_all(&non_use_content, " ").to_string();
let stripped = REGEX_LITERAL_RE.replace_all(&stripped, " ").to_string();
let stripped = COMMENT_RE.replace_all(&stripped, " ").to_string();
let mut usage_map: BTreeMap<String, Vec<String>> = BTreeMap::new();
for caps in MODULE_USAGE_RE.captures_iter(&stripped) {
    // Only process if both capture groups matched
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/perl-parser/src/refactor/import_optimizer.rs` around lines 452 - 457,
The code currently runs MODULE_USAGE_RE against the full stripped file text
(stripped after STRING_LITERAL_RE, REGEX_LITERAL_RE, COMMENT_RE) which causes
matches inside actual import/use lines and yields false missing-imports; create
a new string (e.g., non_import_content) by removing or blanking import
declarations before scanning (remove lines or spans that match import patterns
such as leading "use " / "pub use " / "extern crate" / "use::<" etc.), then run
for caps in MODULE_USAGE_RE.captures_iter(&non_import_content) to build
usage_map; update the loop that references MODULE_USAGE_RE so it uses this
non-import source instead of &stripped (keep STRING_LITERAL_RE,
REGEX_LITERAL_RE, COMMENT_RE steps intact and only change the input to
MODULE_USAGE_RE).
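The pre-processing step this prompt describes can be sketched as a line filter. `blank_import_lines` is an illustrative name, and the patterns assume Perl-style `use`/`require`/`no` statements (the Rust-flavored patterns the prompt lists would differ):

```rust
/// Blank out import statements before scanning for module usages, so module
/// paths inside the imports themselves don't count as "usages". Lines are
/// replaced with spaces of equal length so later spans still line up per line.
fn blank_import_lines(content: &str) -> String {
    content
        .lines()
        .map(|line| {
            let t = line.trim_start();
            if t.starts_with("use ") || t.starts_with("require ") || t.starts_with("no ") {
                " ".repeat(line.len())
            } else {
                line.to_string()
            }
        })
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let src = "use Foo::Bar;\nmy $x = Foo::Bar::baz();\n";
    let scanned = blank_import_lines(src);
    // The import line is gone, but the real usage survives for the regex pass.
    assert!(!scanned.contains("use Foo::Bar"));
    assert!(scanned.contains("Foo::Bar::baz"));
    println!("ok");
}
```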
```rust
if !is_document_in_package_scope(&doc.text, &package_name) {
    continue;
}
```
Fallback pre-filter drops valid matches in multi-package files.
Line 873 can skip documents where the target package appears later (not first), so fallback mode may miss real occurrences and under-apply edits.
💡 Minimal safe fix

```diff
-if !is_document_in_package_scope(&doc.text, &package_name) {
-    continue;
-}
+// Do not pre-filter by first package declaration here.
+// Per-occurrence package filtering below is the source of truth.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/perl-parser/src/refactor/workspace_refactor.rs` around lines 873 -
875, The pre-filter check using is_document_in_package_scope(&doc.text,
&package_name) prematurely skips documents that contain the target package later
in the file; update the loop so this check is skipped when running in fallback
mode (i.e., only apply the continue when NOT in fallback). Locate the loop in
workspace_refactor.rs that currently does if
!is_document_in_package_scope(&doc.text, &package_name) { continue; } and change
it to conditional on the fallback flag (e.g., if !is_fallback &&
!is_document_in_package_scope(...) { continue; }) so fallback searches whole
files and does not drop valid matches.
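The conditional pre-filter the prompt asks for amounts to one extra flag check. Here is a minimal standalone sketch in which `is_document_in_package_scope` is stubbed with a naive substring test (the real function is more involved):

```rust
/// Stub for illustration only; the real check inspects package declarations.
fn is_document_in_package_scope(text: &str, package: &str) -> bool {
    text.contains(&format!("package {package}"))
}

/// In fallback mode, scan every document; otherwise keep the cheap pre-filter.
fn should_scan(text: &str, package: &str, is_fallback: bool) -> bool {
    is_fallback || is_document_in_package_scope(text, package)
}

fn main() {
    let doc = "package Other;\nsub f {}\n";
    assert!(!should_scan(doc, "Target", false)); // normal mode: pre-filter skips it
    assert!(should_scan(doc, "Target", true)); // fallback mode: scan anyway
    assert!(should_scan("package Target;\n", "Target", false));
    println!("ok");
}
```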
```rust
while let Some(found) = before[search_pos..].find("package ") {
    let package_start = search_pos + found + "package ".len();
    let rest = &before[package_start..];
    let package_end = rest
        .char_indices()
        .find_map(|(idx, ch)| (ch == ';' || ch == '{' || ch.is_whitespace()).then_some(idx))
        .unwrap_or(rest.len());
    let candidate = &rest[..package_end];
    if !candidate.is_empty() {
        package = Some(candidate.to_string());
    }
    search_pos = package_start;
}
```
find_package_at_offset can match non-declaration text.
Line 1023 searches raw text for "package " anywhere, which can misclassify scope if that token appears outside actual declarations.
💡 Safer declaration-only scan

```diff
 fn find_package_at_offset(text: &str, offset: usize) -> Option<String> {
-    let before = &text[..offset.min(text.len())];
-    let mut package = None;
-    let mut search_pos = 0usize;
-
-    while let Some(found) = before[search_pos..].find("package ") {
-        let package_start = search_pos + found + "package ".len();
-        let rest = &before[package_start..];
-        let package_end = rest
-            .char_indices()
-            .find_map(|(idx, ch)| (ch == ';' || ch == '{' || ch.is_whitespace()).then_some(idx))
-            .unwrap_or(rest.len());
-        let candidate = &rest[..package_end];
-        if !candidate.is_empty() {
-            package = Some(candidate.to_string());
-        }
-        search_pos = package_start;
-    }
-
-    package
+    let mut package: Option<String> = None;
+    let mut consumed = 0usize;
+    let limit = offset.min(text.len());
+
+    for line in text.split_inclusive('\n') {
+        if consumed >= limit {
+            break;
+        }
+        let line_text = line.strip_suffix('\n').unwrap_or(line);
+        let trimmed = line_text.trim_start();
+        if let Some(rest) = trimmed.strip_prefix("package ") {
+            let end_idx = rest
+                .char_indices()
+                .find_map(|(idx, ch)| (ch == ';' || ch == '{' || ch.is_whitespace()).then_some(idx))
+                .unwrap_or(rest.len());
+            let candidate = &rest[..end_idx];
+            if !candidate.is_empty() {
+                package = Some(candidate.to_string());
+            }
+        }
+        consumed += line.len();
+    }
+
+    package
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
fn find_package_at_offset(text: &str, offset: usize) -> Option<String> {
    let mut package: Option<String> = None;
    let mut consumed = 0usize;
    let limit = offset.min(text.len());

    for line in text.split_inclusive('\n') {
        if consumed >= limit {
            break;
        }
        let line_text = line.strip_suffix('\n').unwrap_or(line);
        let trimmed = line_text.trim_start();
        if let Some(rest) = trimmed.strip_prefix("package ") {
            let end_idx = rest
                .char_indices()
                .find_map(|(idx, ch)| (ch == ';' || ch == '{' || ch.is_whitespace()).then_some(idx))
                .unwrap_or(rest.len());
            let candidate = &rest[..end_idx];
            if !candidate.is_empty() {
                package = Some(candidate.to_string());
            }
        }
        consumed += line.len();
    }

    package
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/perl-parser/src/refactor/workspace_refactor.rs` around lines 1023 -
1035, The loop in find_package_at_offset scans the raw string `before` for the
substring "package " and can match it inside comments/strings or
mid-identifiers; change the logic to only accept real declarations by verifying
the match is a standalone token and not inside a comment/string: after locating
`found` compute the index of the character before `package` and ensure it is
either start-of-file or a delimiter (whitespace, newline, semicolon, '{', '}',
or other non-identifier char), and also perform a minimal comment/string guard
around `package` (e.g., scan backwards for unclosed quotes or comment markers or
use an existing tokenizer) before computing `package_start` and setting `package
= Some(candidate.to_string())`; update references in the loop (variables
`before`, `search_pos`, `package_start`, `candidate`) accordingly so only
declaration occurrences are accepted.
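Beyond the delimiter check, a simple line-based scan with a `#`-comment guard behaves as the prompt asks. This is a sketch, not the committed fix, and it ignores edge cases such as `#` appearing inside strings:

```rust
/// Find the innermost Perl `package` declaration before `offset`, scanning
/// line by line so "package " inside `#` comments is ignored.
fn find_package_at_offset(text: &str, offset: usize) -> Option<String> {
    let mut package = None;
    let mut consumed = 0usize;
    let limit = offset.min(text.len());
    for line in text.split_inclusive('\n') {
        if consumed >= limit {
            break;
        }
        // Strip a trailing `#` comment before matching the keyword.
        let code = line.split('#').next().unwrap_or(line);
        let trimmed = code.trim_start();
        if let Some(rest) = trimmed.strip_prefix("package ") {
            let end = rest
                .find(|c: char| c == ';' || c == '{' || c.is_whitespace())
                .unwrap_or(rest.len());
            if end > 0 {
                package = Some(rest[..end].to_string());
            }
        }
        consumed += line.len();
    }
    package
}

fn main() {
    let src = "package Foo;\n# package NotReal\nsub f {}\npackage Bar;\nsub g {}\n";
    // The commented declaration is skipped; the last real one before the offset wins.
    assert_eq!(find_package_at_offset(src, src.len()), Some("Bar".to_string()));
    assert_eq!(find_package_at_offset(src, 20), Some("Foo".to_string()));
    println!("ok");
}
```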
```bash
# Linux x86_64 (glibc)
VERSION=0.12.4
TARGET=x86_64-unknown-linux-gnu

curl -LO "https://github.com/EffortlessMetrics/perl-lsp/releases/download/v${VERSION}/perllsp-${VERSION}-${TARGET}.tar.gz"
tar xzf "perllsp-${VERSION}-${TARGET}.tar.gz"
sudo install -m 0755 "perllsp-${VERSION}-${TARGET}/perllsp" /usr/local/bin/perllsp

# For other targets, download the matching perllsp-<version>-<target> archive.
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check perl-lsp releases information
# Using web-based approach since we can access external APIs
# First, let's try to get release information from GitHub API
curl -s "https://api.github.com/repos/EffortlessMetrics/perl-lsp/releases" | head -200
```

Repository: EffortlessMetrics/perl-lsp

Length of output: 11411
🌐 Web query:
EffortlessMetrics perl-lsp latest release version GitHub
💡 Result:
Latest EffortlessMetrics/perl-lsp GitHub release version (as of 2026-04-28): v0.12.4. The project’s GitHub releases page lists “Perl LSP 0.12.4” and the corresponding tag page “v0.12.4”, showing it as the newest release line.
Citations:
- 1: https://github.com/EffortlessMetrics/perl-lsp/releases
- 2: https://github.com/EffortlessMetrics/perl-lsp/releases/tag/v0.12.4
Consider using a version placeholder for improved maintainability.
The documentation currently hardcodes VERSION=0.12.4, which is the current latest release. However, this approach requires documentation updates whenever new releases are published. While the asset naming convention (perllsp-${VERSION}-${TARGET}.tar.gz) is correct, using a placeholder with instructions to reference the GitHub Releases page would reduce maintenance burden.
Suggested improvement

```diff
-# Linux x86_64 (glibc)
-VERSION=0.12.4
-TARGET=x86_64-unknown-linux-gnu
+# Linux x86_64 (glibc)
+# Replace <version> with the desired release from https://github.com/EffortlessMetrics/perl-lsp/releases
+VERSION=<version>
+TARGET=x86_64-unknown-linux-gnu
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
# Linux x86_64 (glibc)
# Replace <version> with the desired release from https://github.com/EffortlessMetrics/perl-lsp/releases
VERSION=<version>
TARGET=x86_64-unknown-linux-gnu

curl -LO "https://github.com/EffortlessMetrics/perl-lsp/releases/download/v${VERSION}/perllsp-${VERSION}-${TARGET}.tar.gz"
tar xzf "perllsp-${VERSION}-${TARGET}.tar.gz"
sudo install -m 0755 "perllsp-${VERSION}-${TARGET}/perllsp" /usr/local/bin/perllsp

# For other targets, download the matching perllsp-<version>-<target> archive.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/EDITORS/VS_CODE_SETUP.md` around lines 63 - 72, The docs hardcode
VERSION=0.12.4 which forces updates on each release; change the snippet to use a
clear placeholder (e.g., VERSION=<latest-release>) and add a one-line note
directing readers to check the GitHub Releases page for the actual version,
keeping TARGET and the asset pattern perllsp-${VERSION}-${TARGET}.tar.gz
unchanged so the commands still work when users substitute the real version.
```diff
 fn resolve_base_ref(base: &str, root: &Path) -> Result<String> {
-    let verify = cmd("git", &["rev-parse", "--verify", base])
-        .dir(root)
-        .stdout_null()
-        .stderr_null()
-        .unchecked()
-        .run()
-        .context("Failed to run git rev-parse")?;
-
-    if verify.status.success() {
-        return Ok(base.to_string());
-    }
-
-    eprintln!("Warning: base ref '{}' not found; falling back to HEAD~1", base);
-    Ok("HEAD~1".to_string())
+    let mut candidates = Vec::new();
+    if base != "auto" {
+        candidates.push(base.to_string());
+    }
+    // NOTE: HEAD is intentionally excluded from the fallback chain.
+    // Using HEAD as a base ref causes `git diff HEAD...HEAD` to return an
+    // empty file list, which silently reports zero changed files and causes
+    // all CI lanes to be skipped — a false-negative worse than an error.
+    candidates.extend(
+        ["origin/main", "origin/master", "main", "master", "HEAD~1"]
+            .into_iter()
+            .map(str::to_string),
+    );
+
+    for candidate in candidates {
+        if git_ref_exists(&candidate, root)? {
+            if base != "auto" && candidate != base {
+                eprintln!("Warning: base ref '{}' not found; using fallback '{}'", base, candidate);
+            }
+            return Ok(candidate);
+        }
+    }
+
+    Err(color_eyre::eyre::eyre!(
+        "Could not resolve a valid base ref from '{}', origin/main, origin/master, main, master, or HEAD~1. \
+         Ensure the repository has at least one commit and the remote is reachable.",
+        base
+    ))
+}
+
+fn git_ref_exists(candidate: &str, root: &Path) -> Result<bool> {
+    let verify = cmd("git", &["rev-parse", "--verify", candidate])
+        .dir(root)
+        .stdout_null()
+        .stderr_null()
+        .unchecked()
+        .run()
+        .context("Failed to run git rev-parse")?;
+
+    Ok(verify.status.success())
 }
```
ci-scope --base auto no longer mirrors the branch CI uses.
The fallback chain prefers origin/main before origin/master, while pr-smoke still runs ci-scope against origin/master on Line 127 of .github/workflows/ci.yml. If both refs exist, the same diff can produce different base values and different lane selection locally vs CI. Please resolve the repository’s default remote branch first, then fall back to named refs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@xtask/src/tasks/ci_scope.rs` around lines 750 - 789, The fallback order can
produce different bases locally vs CI; update resolve_base_ref to first try the
remote's default branch (resolve origin/HEAD to origin/<branch>) before the
hardcoded list: call git rev-parse --abbrev-ref origin/HEAD (or read
refs/remotes/origin/HEAD) to compute a candidate like format!("origin/{}",
branch) and check it with git_ref_exists(&candidate, root), then proceed to the
existing candidates only if that remote-default candidate is missing; keep the
existing warning logic and preserve git_ref_exists for verification.
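The remote-default-first ordering the prompt describes can be sketched as a pure candidate-list builder. `base_ref_candidates` and `remote_default` are illustrative names, not the repository's actual API; the caller is assumed to resolve the remote's default branch separately (e.g. from `git symbolic-ref refs/remotes/origin/HEAD --short`) before calling this.

```rust
/// Build the ordered list of base-ref candidates. `remote_default` is the
/// remote's default branch (e.g. "origin/main"), if it could be resolved;
/// trying it first keeps local lane selection consistent with CI.
fn base_ref_candidates(base: &str, remote_default: Option<&str>) -> Vec<String> {
    let mut candidates: Vec<String> = Vec::new();
    if base != "auto" {
        candidates.push(base.to_string());
    }
    if let Some(branch) = remote_default {
        candidates.push(branch.to_string());
    }
    // HEAD stays excluded: `git diff HEAD...HEAD` yields an empty file list.
    for fallback in ["origin/main", "origin/master", "main", "master", "HEAD~1"] {
        if candidates.iter().all(|c| c.as_str() != fallback) {
            candidates.push(fallback.to_string());
        }
    }
    candidates
}
```

With this shape, a clone whose default branch is `origin/master` tries that ref before `origin/main`, matching whatever branch CI diffs against.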
```rust
    // For failing cargo test gates, extract the first failure details
    let first_failure = if status == "fail" && is_cargo_test_command(command) {
        parse_first_failure(&stdout, exit_code)
    } else {
        None
    };
```
Parse first-failure details for late failing tests too.
Lines 1045-1048 only populate first_failure when status == "fail". Here timeout is assigned from elapsed wall time after the process exits, so a non-zero cargo test that runs long still has usable failure output but loses the parsed details entirely.
Suggested fix

```diff
-    let first_failure = if status == "fail" && is_cargo_test_command(command) {
+    let first_failure = if exit_code != 0 && is_cargo_test_command(command) {
         parse_first_failure(&stdout, exit_code)
     } else {
         None
     };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
     // For failing cargo test gates, extract the first failure details
-    let first_failure = if status == "fail" && is_cargo_test_command(command) {
+    let first_failure = if exit_code != 0 && is_cargo_test_command(command) {
         parse_first_failure(&stdout, exit_code)
     } else {
         None
     };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@xtask/src/tasks/gates.rs` around lines 1044 - 1049, The code only calls
parse_first_failure when status == "fail", losing parsed test failure details
for cargo test runs that timed out or otherwise exited but still produced
failure output; change the condition that sets first_failure to call
parse_first_failure whenever is_cargo_test_command(command) is true and the
process produced output (e.g., stdout not empty) and either status == "fail" or
the run timed out (use the existing timeout/elapsed indicator), e.g. replace the
if condition with one that checks is_cargo_test_command(command) && (status ==
"fail" || timeout) (and guard with stdout presence) so
parse_first_failure(&stdout, exit_code) runs for late failing tests as well.
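The broadened condition described above can be sketched as a small predicate; `should_parse_first_failure` is a hypothetical helper, not a function in `gates.rs`:

```rust
/// Parse failure details whenever a cargo test gate exited non-zero or
/// timed out, not only when the gate status string is "fail".
fn should_parse_first_failure(is_cargo_test: bool, exit_code: i32, timed_out: bool) -> bool {
    is_cargo_test && (exit_code != 0 || timed_out)
}
```

A non-cargo gate, or a cargo test that exits zero without timing out, still skips parsing.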
```rust
    // --- Pass 2: find the first "panicked at" line and the message line after it ---
    for (idx, line) in lines.iter().enumerate() {
        let trimmed = line.trim();

        // Rust ≥1.73: `panicked at src/lib.rs:42:5:`
        // Rust <1.73: `panicked at 'message', src/lib.rs:42:5`
        //
        // The line may be prefixed with thread info:
        //   `thread 'test::name' panicked at src/lib.rs:42:5:`
        // or may start directly with `panicked at`:
        //   `panicked at src/lib.rs:42:5:`
        //
        // We find the `panicked at ` substring anywhere in the line.
        if let Some(panic_pos) = trimmed.find("panicked at ") {
            let rest = &trimmed[panic_pos + "panicked at ".len()..];

            // Try new-style first (Rust ≥1.73), then fall back to old-style (Rust <1.73)
            let parsed_site =
                parse_panic_site_new_style(rest).or_else(|| parse_panic_site_old_style(rest));

            site = parsed_site;

            // The message is the next non-empty line
            message = lines[idx + 1..]
                .iter()
                .find(|l| !l.trim().is_empty())
                .map(|l| l.trim().to_string());

            break;
        }
    }

    // Only return Some if we found at least one piece of useful info
    if test_name.is_some() || site.is_some() {
        Some(FirstFailure { test: test_name, site, message, exit_code })
    } else {
        None
```
🧩 Analysis chain
🌐 Web query:
Did Rust panic output before 1.73 include the panic message inline as panicked at 'message', file:line:col, while Rust 1.73+ switched to panicked at file:line:col: with the message on the following line?
💡 Result:
Yes, Rust panic output before 1.73 included the panic message inline as panicked at 'message', file:line:col, while Rust 1.73+ switched to panicked at file:line:col: with the message on the following line.
Citations:
- 1: Change default panic handler message format. rust-lang/rust#112849
- 2: https://blog.rust-lang.org/2023/10/05/Rust-1.73.0/
- 3: Cleaner assert_eq! & assert_ne! panic messages rust-lang/rust#111071
🏁 Scripts executed (repository: EffortlessMetrics/perl-lsp)

```shell
# Read the specific lines in question
cat -n xtask/src/tasks/gates.rs | sed -n '1360,1420p'

# Find the parse_panic_* functions to understand their signatures
rg "fn parse_panic" xtask/src/tasks/gates.rs -A 3

# Check the doc comment for the FirstFailure struct to verify the claim
rg "struct FirstFailure" xtask/src/tasks/gates.rs -B 10 -A 3

# See the complete parse_panic_site_old_style function
cat -n xtask/src/tasks/gates.rs | sed -n '1420,1450p'

# Check the full doc comment for FirstFailure
cat -n xtask/src/tasks/gates.rs | sed -n '1335,1360p'

# See the rest of parse_panic_site_old_style
cat -n xtask/src/tasks/gates.rs | sed -n '1444,1460p'

# Find the FirstFailure struct definition with its field documentation
rg "pub struct FirstFailure" xtask/src/tasks/gates.rs -A 20
```
Extract the inline panic message for the Rust <1.73 format.
The code at lines 1396-1400 always takes the next non-empty line as the message, which is correct for Rust ≥1.73 (`panicked at <site>:` with the message on the next line) but wrong for Rust <1.73 (`panicked at '<message>', <site>`). For old-style output, this captures the wrong line instead of extracting the inline message from the `rest` variable. Additionally, the guard at line 1407 requires `site` to be present, discarding cases where only the message was extracted, which loses valid data for incomplete failures.
Suggested fix

```diff
-            let parsed_site =
-                parse_panic_site_new_style(rest).or_else(|| parse_panic_site_old_style(rest));
-
-            site = parsed_site;
-
-            // The message is the next non-empty line
-            message = lines[idx + 1..]
-                .iter()
-                .find(|l| !l.trim().is_empty())
-                .map(|l| l.trim().to_string());
+            if let Some((inline_message, parsed_site)) = parse_panic_old_style(rest) {
+                site = parsed_site;
+                message = Some(inline_message);
+            } else {
+                site = parse_panic_site_new_style(rest);
+                message = lines[idx + 1..]
+                    .iter()
+                    .find(|l| !l.trim().is_empty())
+                    .map(|l| l.trim().to_string());
+            }
             break;
         }
     }
-    if test_name.is_some() || site.is_some() {
+    if test_name.is_some() || site.is_some() || message.is_some() {
         Some(FirstFailure { test: test_name, site, message, exit_code })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@xtask/src/tasks/gates.rs` around lines 1374 - 1410, The code always reads the
next non-empty line into message, which is wrong for old-style panics; change
the logic after parsing to: if parse_panic_site_new_style(rest) returned
Some(site) keep the existing behavior (message is next non-empty line), but if
parse_panic_site_old_style(rest) returned Some(site) instead, extract the inline
message from rest (the single-quoted string before the comma) and set message to
that (unescape quotes as appropriate) rather than reading the following line;
finally, relax the final presence check so FirstFailure is constructed if any of
test_name, site, or message is Some (not just site), ensuring we don’t drop
cases where only message was extracted. Reference: parse_panic_site_new_style,
parse_panic_site_old_style, rest, lines, message, site, test_name, FirstFailure.
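A minimal sketch of the old-style extraction described above; `parse_panic_old_style` here is a hypothetical helper, not the repository's existing parser, and the split uses the last `', ` separator so single quotes inside the message survive:

```rust
/// Parse the text after "panicked at " in Rust <1.73 format:
///   'message', src/lib.rs:42:5
/// Returns (message, site) on success; new-style input (no leading quote)
/// falls through to None so the caller can try the >=1.73 parser instead.
fn parse_panic_old_style(rest: &str) -> Option<(String, String)> {
    let body = rest.strip_prefix('\'')?;
    // Split at the last "', " so quotes embedded in the message stay intact.
    let split = body.rfind("', ")?;
    let message = body[..split].to_string();
    let site = body[split + 3..].trim().to_string();
    if site.is_empty() { None } else { Some((message, site)) }
}
```

New-style lines such as `src/lib.rs:42:5:` have no leading quote, so the helper returns `None` and the caller keeps the next-non-empty-line behavior.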
```rust
pub fn is_cargo_test_command(command: &str) -> bool {
    let mut tokens = command.split_whitespace();
    let first = tokens.next().unwrap_or("");
    // Allow for path-prefixed cargo (e.g. `/usr/local/bin/cargo`)
    let is_cargo = first == "cargo" || first.ends_with("/cargo") || first.ends_with("\\cargo");
    is_cargo && tokens.next().is_some_and(|t| t == "test")
}
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Scripts executed (repository: EffortlessMetrics/perl-lsp)

```shell
#!/bin/bash
set -euo pipefail
# Show all gate commands that invoke cargo.
rg -n '^\s*command:\s*.*cargo' .ci/gate-policy.yaml
# Any match here means the current matcher will miss a real gate command shape.
rg -n '^\s*command:\s*.*(cargo\s+\+[^[:space:]]+\s+test|cargo\s+--[^[:space:]]+\s+test|[A-Z_][A-Z0-9_]*=[^[:space:]]+\s+cargo\s+test)' .ci/gate-policy.yaml

# Read the function implementation
cat -n xtask/src/tasks/gates.rs | sed -n '1450,1480p'

# Find all calls to is_cargo_test_command to understand how it's used
rg -n "is_cargo_test_command" --type rs
rg -n "is_cargo_test_command" xtask/src/tasks/gates.rs

# Look at the broader context to understand the function's purpose
cat -n xtask/src/tasks/gates.rs | sed -n '1450,1500p'

# Search for where this function is called from
rg -n "is_cargo_test_command" --glob "*.rs"
```
Enhance is_cargo_test_command to handle cargo flags before the subcommand.
The function currently assumes test is always the second token, which rejects valid Cargo invocations like cargo +nightly test or cargo --locked test. However, no actual commands in .ci/gate-policy.yaml use these patterns, and the function's documented scope correctly describes what it currently handles. If broader support for cargo invocation variants becomes necessary, scan forward to find the first test token after detecting a cargo command, rather than checking a fixed token position.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@xtask/src/tasks/gates.rs` around lines 1460 - 1465, The current
is_cargo_test_command assumes "test" is the immediate next token after the cargo
executable; update is_cargo_test_command to, after detecting the cargo
executable (first token matching "cargo" or ending with "/cargo" or "\\cargo"),
advance through subsequent tokens and return true if any token equals "test"
(skip flags like "+nightly" or "--locked"), otherwise false; keep the existing
path-prefixed cargo detection and use the tokens iterator from
command.split_whitespace() to scan forward for the first "test" token instead of
checking a fixed position.
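The forward scan described above can be sketched as follows. This is a variant, not the repository's current implementation, and flags that take a separate value argument (e.g. `--config <path>`) would still confuse the simple token scan:

```rust
/// Forward-scanning variant: detect the cargo executable, then skip
/// `+toolchain` and `--flag` tokens to find the subcommand.
fn is_cargo_test_command(command: &str) -> bool {
    let mut tokens = command.split_whitespace();
    let Some(first) = tokens.next() else { return false };
    let is_cargo = first == "cargo" || first.ends_with("/cargo") || first.ends_with("\\cargo");
    if !is_cargo {
        return false;
    }
    // The subcommand is the first token that is neither a toolchain nor a flag.
    tokens
        .find(|t| !t.starts_with('+') && !t.starts_with('-'))
        .is_some_and(|t| t == "test")
}
```

This accepts `cargo +nightly test` and `cargo --locked test` while still rejecting `cargo build` and non-cargo commands, preserving the path-prefixed cargo detection.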
```diff
 /// Resolve a git ref, using fallbacks when the primary ref is not valid.
 fn resolve_base_ref(base: &str) -> Result<String> {
-    let verify = cmd("git", &["rev-parse", "--verify", base])
-        .stdout_null()
-        .stderr_null()
-        .unchecked()
-        .run()
-        .context("Failed to run git rev-parse")?;
-
-    if verify.status.success() {
-        return Ok(base.to_string());
-    }
-
-    eprintln!("Warning: Base ref '{}' not found; falling back to HEAD~1", base);
-
-    let fallback = "HEAD~1";
-    let verify_fallback = cmd("git", &["rev-parse", "--verify", fallback])
-        .stdout_null()
-        .stderr_null()
-        .unchecked()
-        .run()
-        .context("Failed to run git rev-parse for fallback")?;
-
-    if verify_fallback.status.success() {
-        return Ok(fallback.to_string());
-    }
-
-    Err(eyre!("Could not resolve a valid base ref (tried '{}' and '{}')", base, fallback,))
+    let mut candidates = Vec::new();
+    if base != "auto" {
+        candidates.push(base.to_string());
+    }
+    // NOTE: HEAD is intentionally excluded from the fallback chain.
+    // Using HEAD as a base ref causes `git diff HEAD...HEAD` to return an
+    // empty file list, which silently reports zero changed files and causes
+    // all targeted checks to be skipped — a false-negative worse than an error.
+    candidates.extend(
+        ["origin/main", "origin/master", "main", "master", "HEAD~1"]
+            .into_iter()
+            .map(str::to_string),
+    );
+
+    for candidate in candidates {
+        let verify = cmd("git", &["rev-parse", "--verify", &candidate])
+            .stdout_null()
+            .stderr_null()
+            .unchecked()
+            .run()
+            .context("Failed to run git rev-parse")?;
+        if verify.status.success() {
+            if base != "auto" && candidate != base {
+                eprintln!("Warning: Base ref '{}' not found; using fallback '{}'", base, candidate);
+            }
+            return Ok(candidate);
+        }
+    }
+
+    Err(eyre!(
+        "Could not resolve a valid base ref from '{}', origin/main, origin/master, main, master, or HEAD~1. \
+         Ensure the repository has at least one commit and the remote is reachable.",
+        base
+    ))
 }
```
--base auto can diff against a different branch than CI.
This resolver now prefers origin/main over origin/master, but this repo’s CI still scopes against origin/master (see .github/workflows/ci.yml, Line 127). On clones that have both refs, targeted-checks can compute a different changed-package set than CI. Resolve the remote default branch first (for example via origin/HEAD) before falling back to hard-coded branch names.
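One way to resolve the remote default branch first is to normalize the output of `git symbolic-ref refs/remotes/origin/HEAD --short` before consulting the hard-coded list. `remote_default_branch` is an illustrative name, and the handling of a full ref path is an assumption about older git output:

```rust
/// Normalize the output of `git symbolic-ref refs/remotes/origin/HEAD --short`
/// (e.g. "origin/main") into a usable base-ref candidate, or None when
/// origin/HEAD is unset in the clone.
fn remote_default_branch(symbolic_ref_output: &str) -> Option<String> {
    let branch = symbolic_ref_output.trim();
    if branch.is_empty() || branch == "origin/HEAD" {
        return None;
    }
    // Some git invocations print the full ref; strip the refs/remotes/ prefix.
    Some(branch.strip_prefix("refs/remotes/").unwrap_or(branch).to_string())
}
```

When this returns `Some("origin/master")`, that candidate can be verified first, keeping local scoping aligned with the branch CI diffs against.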
Force-pushed …le_watcher_debouncer_coalesces_bulk_operations (#7070) from e09e5bc to 9fff188 (Compare)
Motivation

- Previous guidance expected `perllsp` on `PATH` for IDE users, which is incorrect for the OpenVSX extension auto-download flow.
- Kiro IDE and Kiro CLI generate `lsp.json` in different paths.

Description

- Reworked `docs/EDITORS/KIRO_SETUP.md` to split Kiro IDE vs Kiro CLI setup, document the OpenVSX `EffortlessMetrics.perl-lsp-rs` extension for IDE users, and show optional manual `perllsp` configuration.
- Updated `docs/how-to/EDITOR_SETUP.md` to reflect the OpenVSX-first guidance for Kiro and the IDE/CLI distinction.
- Updated `docs/reference/CONFIG.md` to fix the executable naming to `perllsp`, and added an Amazon Kiro editor-specific snippet covering both IDE and CLI models.
- Added a section to `docs/how-to/TROUBLESHOOTING.md` that documents `/code init` usage, how to find the generated `lsp.json` path, restart/status/log commands, and a caveat that client-initiated LSP operations may be limited for custom/non-built-in languages.

Testing

- Ran `git diff --check`, which returned no issues.
- Ran the `make_pr` helper successfully.
- Attempted `just pr-fast`, but it failed in this environment because `just` is not installed (manual verification step noted).

Codex Task