fix: redesign dashboard update workflow to handle API rate limit exhaustion #5368
princerajpoot20 wants to merge 4 commits into
✅ Deploy Preview for asyncapi-website ready! Built without sensitive environment variables.
To edit notification comments on pull requests, go to your Netlify project configuration.
📝 Walkthrough

Adds retry/backoff and adaptive rate-limit throttling to GitHub GraphQL dashboard collection, bounds pagination, separates hot-discussions and good-first-issues collection for partial success, and makes dashboard generation non-fatal in the workflow. Documentation, tests, and dashboard data updated accordingly.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Script as Dashboard Script
    participant Retry as Retry & Throttle Layer
    participant GH as GitHub GraphQL API
    participant RateLimit as Rate Limit Monitor
    participant FS as File System

    Script->>Retry: Fetch hot discussions (with cutoff)
    Retry->>RateLimit: Check remaining points
    RateLimit-->>Retry: remaining
    Retry->>Retry: Compute adaptiveDelay
    Retry->>GH: GraphQL query (paginated, updatedSince)
    alt Page returned
        GH-->>Retry: Data page (hasNextPage?)
        Retry-->>Script: Page results
    else Retryable error
        GH-->>Retry: 502 / secondary limit / network error
        Retry->>Retry: Exponential backoff & retry
    else Non-retryable error
        GH-->>Retry: Error
        Retry-->>Script: Propagate dataset failure
    end

    Script->>Retry: Fetch good-first-issues (independent)
    Retry->>GH: GraphQL query (paginated)
    Retry->>Retry: backoff/adaptiveDelay as needed
    GH-->>Retry: Data or error

    alt Both datasets succeed
        Script->>FS: Write full dashboard.json
    else One succeeds
        Script->>FS: Write partial dashboard.json
        Script-->>Script: Emit warning
    else Both fail
        Script-->>Script: Throw error
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
⚡️ Lighthouse report for the changes in this PR:
Lighthouse ran on https://deploy-preview-5368--asyncapi-website.netlify.app/
Codecov Report: ✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```text
@@           Coverage Diff            @@
##           master    #5368    +/-   ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           22        22
  Lines          830       883     +53
  Branches       159       169     +10
=========================================
+ Hits           830       883     +53
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/dashboard/build-dashboard.test.ts (1)
22-30: ⚠️ Potential issue | 🟡 Minor: Remove unused fixture imports.

ESLint and SonarCloud both flag `mockMediumRateLimitResponse` (line 28) and `mockRateLimitResponse` (line 29) as unused. The new `adaptiveDelay` test calls `adaptiveDelay({...})` with inline rate-limit objects rather than the fixtures, so these imports (and arguably the medium-rate fixture itself in `tests/fixtures/dashboardData.ts`) are dead.

🧹 Proposed cleanup

```diff
 import {
   discussionWithMoreComments,
   fullDiscussionDetails,
   issues,
   mockDiscussion,
-  mockHealthyRateLimitResponse,
-  mockMediumRateLimitResponse,
-  mockRateLimitResponse
+  mockDiscussion,
+  mockHealthyRateLimitResponse
 } from '../fixtures/dashboardData';
```

You can either delete `mockMediumRateLimitResponse` from the fixtures file or wire it into the `should apply adaptive delay based on rate limit remaining` test instead of the inline objects.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/dashboard/build-dashboard.test.ts` around lines 22 - 30, The imports mockMediumRateLimitResponse and mockRateLimitResponse are unused in build-dashboard.test.ts; either remove these two imports from the import list or update the adaptiveDelay test to use the fixtures instead of inline objects (replace the inline rate-limit objects with mockMediumRateLimitResponse and/or mockRateLimitResponse). If you choose removal, also delete the dead fixture mockMediumRateLimitResponse from the fixtures source to avoid lingering unused data; if you choose reuse, update the test that calls adaptiveDelay(...) to pass the fixture variables instead of the inline literals.
🧹 Nitpick comments (11)
scripts/dashboard/issue-queries.ts (2)
286-292: `Object.freeze(...)` was removed — runtime immutability is now lost.

The previous `Queries` was wrapped in `Object.freeze`, which prevented accidental mutation at runtime. The new plain object literal allows any importer to reassign, e.g., `Queries.goodFirstIssues = '...'`, with no runtime guard. There's no behavioral reason to drop it; restore it to keep the same safety guarantee.

🛡️ Proposed fix

```diff
-export const Queries = {
+export const Queries = Object.freeze({
   pullRequestById,
   issueById,
   goodFirstIssues,
   hotDiscussionsIssues,
   hotDiscussionsPullRequests
-};
+});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/dashboard/issue-queries.ts` around lines 286 - 292, The Queries export lost runtime immutability by removing Object.freeze; restore it by exporting a frozen object so callers cannot mutate properties. Locate the Queries object (export const Queries = { pullRequestById, issueById, goodFirstIssues, hotDiscussionsIssues, hotDiscussionsPullRequests }) and wrap the object in Object.freeze(...) before exporting (so Queries.goodFirstIssues, etc. cannot be reassigned at runtime).
149-156: `updatedSince` is interpolated unsanitized into the GraphQL search string.

Right now the only caller is `getHotDiscussionsCutoffDate()`, which always returns a `YYYY-MM-DD` string, so this is safe in practice. But the function signature accepts any `string`, and a future caller passing something containing `"` or whitespace could break out of the search expression (or in the worst case craft a different filter than intended). Two cheap mitigations:

- Add a guard at the top of each function: `if (!/^\d{4}-\d{2}-\d{2}$/.test(updatedSince)) throw new Error(...)`.
- Add an explicit return type `: string` so the surface is unambiguous.

Same applies to `hotDiscussionsPullRequests` at line 213.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/dashboard/issue-queries.ts` around lines 149 - 156, The GraphQL search string in hotDiscussionsIssues (and similarly hotDiscussionsPullRequests) interpolates updatedSince unsanitized; add a validation guard at the top of each function (e.g., test updatedSince against /^\d{4}-\d{2}-\d{2}$/ and throw a clear Error on mismatch) to prevent injection/formatting issues, and annotate the function signatures with an explicit return type of string to make the API surface unambiguous; note that callers like getHotDiscussionsCutoffDate() will continue to work but invalid inputs will now be rejected early.

tests/dashboard/build-dashboard.test.ts (3)
66-69: Use the static import instead of `require()` (ESLint `global-require`).

ESLint flags lines 67, 102, and 459 with `global-require`. You already need a handle on `pause` to clear/inspect it — import it once at the top and reuse.

♻️ Proposed change

```diff
+import { pause } from '../../scripts/helpers/utils';
 ...
 beforeEach(() => {
   jest.resetAllMocks();
   consoleErrorSpy = jest.spyOn(console, 'error').mockImplementation(() => {});
   consoleLogSpy = jest.spyOn(console, 'log').mockImplementation(() => {});
-  // Re-apply pause mock after resetAllMocks
-  const { pause } = require('../../scripts/helpers/utils');
-  (pause as jest.Mock).mockResolvedValue(undefined);
 });
 ...
 it('should apply adaptive delay based on rate limit remaining', async () => {
-  const { pause } = require('../../scripts/helpers/utils');
   await adaptiveDelay(...);
```

Apply the same in the `retryWithBackoff` `beforeEach` at line 459.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/dashboard/build-dashboard.test.ts` around lines 66 - 69, Replace the dynamic require for the pause helper with a static import at the top of the test file and reuse that binding where needed (instead of calling require('../../scripts/helpers/utils') inside the test); specifically import the named export pause once and then call (pause as jest.Mock).mockResolvedValue(undefined) in the reset/re-apply sections (including the retryWithBackoff beforeEach), so ESLint global-require violations for pause are removed and the same mocked instance is reused across tests.
144-182: Duplicate `makePageResponse` helper.

The two `makePageResponse` definitions at lines 145–151 and 165–171 are byte-identical (SonarCloud also flags this). Hoist a single helper to the suite scope.

♻️ Proposed dedup

```diff
+  const makePageResponse = (page: number, hasNext: boolean) => ({
+    search: {
+      nodes: [{ ...mockDiscussion, id: `test-id-${page}` }],
+      pageInfo: { hasNextPage: hasNext, endCursor: `cursor${page}` }
+    },
+    rateLimit: { remaining: 1000, limit: 5000, cost: 1, resetAt: new Date().toISOString() }
+  });
+
   it('should respect maxPages parameter', async () => {
-    const makePageResponse = (page: number, hasNext: boolean) => ({ ... });
     mockedGraphql
       .mockResolvedValueOnce(makePageResponse(1, true))
       ...
   });

   it('should not limit pages when maxPages is 0', async () => {
-    const makePageResponse = (page: number, hasNext: boolean) => ({ ... });
     mockedGraphql
       .mockResolvedValueOnce(makePageResponse(1, true))
       ...
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/dashboard/build-dashboard.test.ts` around lines 144 - 182, Duplicate makePageResponse helper exists in two tests; hoist a single shared helper to the suite scope so both test cases reuse it: remove the inner definitions in the two it blocks and define one const makePageResponse = (page: number, hasNext: boolean) => {...} at the top of the describe/test file (above the it cases) so mockedGraphql, getDiscussions and the tests refer to the same function; ensure the helper shape remains identical to what the tests expect (search.nodes, pageInfo, rateLimit).
184-192: Reset env state in case a prior test deleted `GITHUB_TOKEN`.

`should throw error when GITHUB_TOKEN is not set` (line 387) deletes `process.env.GITHUB_TOKEN` and re-sets it at the end, but if any test added between it and the new `should throw when both hot discussions and good first issues fail` ever fails before the restore, this test (and others) would throw a "GitHub token is not set" error rather than the expected "Dashboard generation failed" error — masking the real assertion. A defensive `beforeEach`/`afterEach` restoring `process.env.GITHUB_TOKEN = 'test-token'` would make these tests order-independent.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/dashboard/build-dashboard.test.ts` around lines 184 - 192, The test suite leaves process.env.GITHUB_TOKEN deleted by another test which can cause this test ('should throw when both hot discussions and good first issues fail') to fail with the wrong error; add a beforeEach (or beforeAll) that sets process.env.GITHUB_TOKEN = 'test-token' and an afterEach that restores any original value (or sets it back to 'test-token') so tests are order-independent; update tests in build-dashboard.test.ts (referencing the tests named "should throw when both hot discussions and good first issues fail" and "should throw error when GITHUB_TOKEN is not set") to ensure the GH token is always present unless a specific test explicitly deletes it.
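A minimal sketch of that defensive reset (names mirror the test file; the `originalToken` variable is illustrative, not a drop-in):

```ts
// Sketch only: pin GITHUB_TOKEN around every test so suite ordering cannot leak state.
const originalToken = process.env.GITHUB_TOKEN;

beforeEach(() => {
  process.env.GITHUB_TOKEN = 'test-token'; // the value the suite already assumes
});

afterEach(() => {
  // Restore whatever was set before the suite ran.
  if (originalToken === undefined) {
    delete process.env.GITHUB_TOKEN;
  } else {
    process.env.GITHUB_TOKEN = originalToken;
  }
});
```

A test that needs the token absent can still `delete process.env.GITHUB_TOKEN` in its own body; the `afterEach` cleans up after it either way.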
docs/DASHBOARD_GITHUB_DATA_COLLECTION.md (2)

93-145: Add languages to fenced code blocks (markdownlint MD040).

The illustrative request/response and tree-style blocks at lines 93, 104, 115, and 135 lack a language hint. Use `text` (or `graphql` where applicable) to suppress the lint warning and improve syntax highlighting on rendered docs.

```diff
-```
+```text
 "Give me the first 30 open issues in the asyncapi org"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/DASHBOARD_GITHUB_DATA_COLLECTION.md` around lines 93 - 145, The fenced code blocks showing example requests/responses (e.g., the blocks containing "Give me the first 30 open issues in the asyncapi org", "Give me the next 30 issues, starting after cursor Y3Vyc29yOjMw", and "Give me the next 30, starting after cursor Y3Vyc29yOjYw") lack a language hint and trigger markdownlint MD040; update each of those triple-backtick blocks to use a language tag (use text for plain examples and graphql where appropriate) so they read like ```text or ```graphql to suppress the lint warning and enable proper syntax highlighting.
209-212: Backoff schedule wording is slightly off.

The doc says retries wait "60 seconds, then 120 seconds, then 240 seconds", which matches `RETRY_BASE_DELAY_MS * 2 ** attempt` for `attempt = 0, 1, 2`. That's correct, but consider clarifying that these are the delays before retries 1, 2, 3 (not after the final retry) — readers occasionally interpret the third wait as a wait after failure. Same nit applies to the "Retry schedule" list at lines 264–268.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/DASHBOARD_GITHUB_DATA_COLLECTION.md` around lines 209 - 212, The backoff wording is ambiguous: clarify that the listed waits (60s, 120s, 240s) are the delays before retry attempts 1, 2, and 3 (i.e., the pause before each retry), not pauses after the final retry; update the "3. Retry with exponential backoff" paragraph and the "Retry schedule" list (the section showing RETRY_BASE_DELAY_MS * 2 ** attempt) to explicitly state "delay before retry 1/2/3" or "wait before each retry" and optionally show the mapping attempt -> delay (attempt=0 -> 60s, attempt=1 -> 120s, attempt=2 -> 240s) so readers cannot misinterpret the third wait as occurring after the final retry.
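For readers cross-checking the schedule, a small sketch of the mapping (assuming `RETRY_BASE_DELAY_MS` is 60 000 ms, as the documented 60 s base implies):

```ts
const RETRY_BASE_DELAY_MS = 60_000; // assumed from the documented 60s base delay

// Delay that precedes a given retry; `attempt` is zero-based.
const delayBeforeRetry = (attempt: number): number => RETRY_BASE_DELAY_MS * 2 ** attempt;

delayBeforeRetry(0); // 60_000 ms  -> wait before retry 1
delayBeforeRetry(1); // 120_000 ms -> wait before retry 2
delayBeforeRetry(2); // 240_000 ms -> wait before retry 3 (no wait after the final failure)
```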
.github/workflows/regenerate-meetings-and-videos.yml (2)
45-53: Stale PR title — does not mention `dashboard.json`.

The commit message at line 49 was updated to include `dashboard.json`, but the PR title at line 52 ('chore: update meetings.json and newsrooom_videos.json') was not. Worth aligning while you're touching this workflow (the existing typo `newsrooom` is also still here).

📝 Proposed tweak

```diff
-          commit-message: 'chore: update meetings.json, newsrooom_videos.json and dashboard.json'
+          commit-message: 'chore: update meetings.json, newsroom-videos.json and dashboard.json'
           committer: asyncapi-bot <info@asyncapi.io>
           author: asyncapi-bot <info@asyncapi.io>
-          title: 'chore: update meetings.json and newsrooom_videos.json'
+          title: 'chore: update meetings.json, newsroom-videos.json and dashboard.json'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/regenerate-meetings-and-videos.yml around lines 45 - 53, Update the workflow's create-pull-request inputs so the PR title and commit-message match and correct the typo: change the title value (currently 'chore: update meetings.json and newsrooom_videos.json') to include dashboard.json and fix "newsrooom" to "newsroom" and ensure the commit-message similarly reads "chore: update meetings.json, newsroom-videos.json and dashboard.json" so both fields are consistent.
42-43: Dashboard step swallows all failure modes, not just transient API errors.

`npm run generate:dashboard || echo "::warning::..."` masks every non-zero exit (script crash, missing npm script, syntax error, OOM, etc.) as a soft warning, not just the partial-success / `Dashboard generation failed: unable to fetch any data from GitHub` case the script intentionally throws. Since the script already writes `dashboard.json` on partial success and only throws when both fetches fail, that "both failed" outcome will now silently be committed as well (no PR reviewer signal beyond a build annotation).

If the intent is "don't block meetings/videos on API exhaustion", consider:

- Letting the script exit `0` itself in the partial/exhaustion case (it already writes a file and logs warnings) and not wrapping the step at all, so genuine bugs still fail the workflow loudly; see the sketch after the prompt below.
- Or, in addition to the warning, surface the failure in the PR title/body so maintainers notice it instead of silently merging stale data.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/regenerate-meetings-and-videos.yml around lines 42 - 43, The workflow step currently hides all failures by appending "|| echo \"::warning::Dashboard generation failed, continuing with available data\"" to the command that runs the npm script "generate:dashboard"; remove that trailing "|| echo ..." so the step fails loudly on real errors, and instead modify the "generate:dashboard" script (the Node script invoked by npm run generate:dashboard) to explicitly exit 0 when it detects the intended partial/exhaustion case (the existing logic that writes dashboard.json and logs warnings), while still exiting non‑zero for genuine crashes (missing script, uncaught exceptions, syntax errors, OOM); ensure the warning string and the npm script name are the touchpoints you change.
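A sketch of the first option, assuming `start()` resolves on partial success (it already writes `dashboard.json` and warns) and rejects only when both datasets fail; the entrypoint shape here is hypothetical:

```ts
// Let the script own its exit semantics so the shell "|| echo" fallback becomes unnecessary.
start('dashboard.json')
  .then(() => {
    // Full or partial data was written; exit 0 and let the workflow step run unwrapped.
  })
  .catch((error) => {
    console.error(error);
    process.exitCode = 1; // genuine failure (both fetches failed, crash, etc.): fail the step loudly
  });
```

With this in place, the workflow step can simply run `npm run generate:dashboard`, and any non-zero exit is a real bug rather than expected rate-limit noise.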
scripts/dashboard/build-dashboard.ts (2)
139-175: `getDiscussions` recursion is unbounded for `goodFirstIssues`.

`start()` calls `getDiscussions(Queries.goodFirstIssues, PAGE_SIZE)` without a `maxPages` argument, so `maxPages` defaults to `0` (unlimited). Today that's fine (good-first-issue count is small), but combined with the recursive implementation a future spike could cause deep promise chains and memory pressure (each frame retains the previous `result.search.nodes` array via `.concat`). Converting this to an iterative `while (hasNextPage)` loop is a cheap, future-proof change that also makes the page-cap logic clearer; a rough shape follows the prompt below.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/dashboard/build-dashboard.ts` around lines 139 - 175, The getDiscussions function uses unbounded recursion (called by start() with getDiscussions(Queries.goodFirstIssues, PAGE_SIZE) where maxPages defaults to 0), which can cause deep promise chains and memory pressure due to repeated .concat; refactor getDiscussions to an iterative loop: initialize an accumulator array, loop while hasNextPage is true and currentPage < maxPages (treat maxPages===0 as unlimited), call retryWithBackoff/graphql and adaptiveDelay inside each iteration, push result.search.nodes into the accumulator (avoid creating new arrays with .concat), update endCursor and currentPage each loop, and finally return the accumulated nodes; keep the same signature and behavior for getDiscussions so start()/Queries.goodFirstIssues/PAGE_SIZE callers are unchanged.
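A sketch of the iterative version (signature simplified for the sketch; the `retryWithBackoff`, `graphql`, and `adaptiveDelay` names mirror this file, everything else is illustrative):

```ts
// Sketch: iterative pagination with an explicit page cap (maxPages === 0 means unlimited).
async function getDiscussionsIteratively(query: string, pageSize: number, maxPages = 0) {
  const nodes: any[] = [];
  let endCursor: string | null = null;
  let hasNextPage = true;
  let currentPage = 0;

  while (hasNextPage && (maxPages === 0 || currentPage < maxPages)) {
    const result: any = await retryWithBackoff(
      () =>
        graphql(query, {
          first: pageSize,
          after: endCursor,
          headers: { authorization: `token ${process.env.GITHUB_TOKEN}` }
        }),
      `getDiscussions page ${currentPage + 1}`
    );

    nodes.push(...result.search.nodes); // accumulate in place instead of chained .concat
    hasNextPage = result.search.pageInfo.hasNextPage;
    endCursor = result.search.pageInfo.endCursor;
    currentPage += 1;

    await adaptiveDelay(result.rateLimit); // keep the same pacing between pages
  }

  return nodes;
}
```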
381-396: Consolidate error logging calls: pass error as second argument to `logger.error`.

Two back-to-back `logger.error('...')` calls break correlation in structured logging backends. The codebase consistently uses the pattern `logger.error('message', error)` elsewhere (e.g., `check-locales.ts`, `check-markdown.ts`, `build-tools.ts`). Update both error handlers to:

```diff
-      hotDiscussionsFailed = true;
-      logger.error('Failed to fetch hot discussions:');
-      logger.error(error);
+      hotDiscussionsFailed = true;
+      logger.error('Failed to fetch hot discussions:', error);
 ...
-      goodFirstIssuesFailed = true;
-      logger.error('Failed to fetch good first issues:');
-      logger.error(error);
+      goodFirstIssuesFailed = true;
+      logger.error('Failed to fetch good first issues:', error);
```

Update the test assertions at lines 190–191 to match the established pattern in other test files:

```diff
-    expect(logger.error).toHaveBeenCalledWith('Failed to fetch hot discussions:');
+    expect(logger.error).toHaveBeenCalledWith('Failed to fetch hot discussions:', expect.any(Error));
-    expect(logger.error).toHaveBeenCalledWith('Failed to fetch good first issues:');
+    expect(logger.error).toHaveBeenCalledWith('Failed to fetch good first issues:', expect.any(Error));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/dashboard/build-dashboard.ts` around lines 381 - 396, The two catch blocks currently log errors with two separate logger.error calls; change both to consolidate into a single call passing the error as the second argument (e.g., replace the pair of logger.error('Failed to fetch hot discussions:'); logger.error(error); with logger.error('Failed to fetch hot discussions:', error) and do the same for 'Failed to fetch good first issues:') so structured logging preserves correlation; update the corresponding test assertions (the ones asserting logger.error was called) to expect the message and the error object as the two arguments (matching the project's existing pattern used elsewhere).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d2725f65-25e1-4181-a221-e2ee09d921ca
📒 Files selected for processing (7)
- .github/workflows/regenerate-meetings-and-videos.yml
- dashboard.json
- docs/DASHBOARD_GITHUB_DATA_COLLECTION.md
- scripts/dashboard/build-dashboard.ts
- scripts/dashboard/issue-queries.ts
- tests/dashboard/build-dashboard.test.ts
- tests/fixtures/dashboardData.ts
### Layer 4: Non-blocking workflow step (handles complete failure)

If BOTH sections fail (no data at all), the script throws an error. But in the GitHub Actions workflow, the dashboard step uses a fallback:

```yaml
npm run generate:dashboard || echo "Dashboard generation failed"
```

This means meetings and videos are still updated and committed even if the dashboard generation fails completely. The failure is logged as a warning in the workflow output.
Doc-vs-workflow drift on the exact echo command.
The doc shows `npm run generate:dashboard || echo "Dashboard generation failed"`, but the workflow at .github/workflows/regenerate-meetings-and-videos.yml:43 actually emits a GitHub Actions annotation:

```yaml
npm run generate:dashboard || echo "::warning::Dashboard generation failed, continuing with available data"
```

Either is fine, but the doc snippet should match the real workflow so future readers don't try to "fix" the warning syntax.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/DASHBOARD_GITHUB_DATA_COLLECTION.md` around lines 286 - 293, Update the
documentation snippet to match the actual GitHub Actions workflow fallback:
replace the current example `npm run generate:dashboard || echo "Dashboard
generation failed"` with the real command used in the workflow (`npm run
generate:dashboard || echo "::warning::Dashboard generation failed, continuing
with available data"`), so the doc aligns with the `npm run generate:dashboard`
fallback behavior and the workflow’s GitHub Actions annotation message.
```ts
async function adaptiveDelay(rateLimit: Discussion['rateLimit']): Promise<void> {
  if (rateLimit.remaining <= 100) {
    const resetTime = new Date(rateLimit.resetAt).getTime();
    const waitMs = Math.max(resetTime - Date.now(), 0) + 1000;

    logger.warn(
      `Rate limit critically low (${rateLimit.remaining} remaining). Waiting ${Math.round(waitMs / 1000)}s until reset.`
    );
    await pause(waitMs);
  } else if (rateLimit.remaining <= 500) {
    await pause(5000);
  } else {
    await pause(BASE_DELAY_MS);
  }
}
```
Adaptive delay can block the workflow for up to an hour.
When rateLimit.remaining <= 100, the script waits until resetAt, which can be close to a full hour. Combined with the recursive paginated fetch and the per-request retry budget (60s + 120s + 240s on transient failures), a single workflow run can sit idle for >70 minutes. Two suggestions:
- Cap `waitMs` (e.g., `Math.min(waitMs, 15 * 60_000)`) and let the next scheduled run pick up where this one left off, rather than holding the runner — partial output is already supported.
- Guard against an invalid/`NaN` `resetAt` (`new Date('invalid').getTime()` → `NaN`, and `Math.max(NaN, 0) === NaN`, which would `pause(NaN + 1000)` and hang/no-op depending on the `pause` impl). Both guards are sketched after the prompt below.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/dashboard/build-dashboard.ts` around lines 80 - 94, adaptiveDelay
currently can wait until resetAt (up to ~1 hour) and can pass NaN to pause;
update adaptiveDelay to (1) validate resetAt/resetTime using isFinite and fall
back to a short safe delay (e.g., 5s) if invalid, and (2) cap the computed
waitMs when remaining <= 100 with a maximum (e.g., Math.min(waitMs, 15 *
60_000)) so the runner isn’t blocked for an hour; keep using pause(waitMs) and
update the logger message to include that the wait was capped when applicable.
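A sketch combining both guards (the 15-minute cap and the short fallback are the values suggested above, not settled constants):

```ts
const MAX_RESET_WAIT_MS = 15 * 60_000; // cap suggested above
const FALLBACK_DELAY_MS = 5_000; // safe delay when resetAt cannot be parsed

async function adaptiveDelay(rateLimit: { remaining: number; resetAt: string }): Promise<void> {
  if (rateLimit.remaining <= 100) {
    const resetTime = new Date(rateLimit.resetAt).getTime();
    // new Date('invalid').getTime() is NaN, and Math.max(NaN, 0) stays NaN; filter it out.
    const rawWait = Number.isFinite(resetTime)
      ? Math.max(resetTime - Date.now(), 0) + 1000
      : FALLBACK_DELAY_MS;
    const waitMs = Math.min(rawWait, MAX_RESET_WAIT_MS);

    logger.warn(
      `Rate limit critically low (${rateLimit.remaining} remaining). ` +
        `Waiting ${Math.round(waitMs / 1000)}s${waitMs < rawWait ? ' (capped)' : ''}.`
    );
    await pause(waitMs);
  }
  // The <= 500 and default branches would stay as in the current implementation.
}
```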
```diff
  try {
-   const issues = (await getDiscussions(Queries.hotDiscussionsIssues, 20)) as HotDiscussionsIssuesNode[];
-   const PRs = (await getDiscussions(Queries.hotDiscussionsPullRequests, 20)) as HotDiscussionsPullRequestsNode[];
-   const rawGoodFirstIssues: GoodFirstIssues[] = await getDiscussions(Queries.goodFirstIssues, 20);
+   const issues = (await getDiscussions(
+     Queries.hotDiscussionsIssues(cutoffDate),
+     PAGE_SIZE,
+     null,
+     MAX_PAGES_HOT_DISCUSSIONS
+   )) as HotDiscussionsIssuesNode[];
+   const PRs = (await getDiscussions(
+     Queries.hotDiscussionsPullRequests(cutoffDate),
+     PAGE_SIZE,
+     null,
+     MAX_PAGES_HOT_DISCUSSIONS
+   )) as HotDiscussionsPullRequestsNode[];
    const discussions = issues.concat(PRs);
-   const [hotDiscussions, goodFirstIssues] = await Promise.all([
-     getHotDiscussions(discussions),
-     mapGoodFirstIssues(rawGoodFirstIssues)
-   ]);
-
-   await writeToFile({ hotDiscussions, goodFirstIssues }, writePath);
+   hotDiscussions = await getHotDiscussions(discussions);
+   logger.info(`Collected ${hotDiscussions.length} hot discussions`);
  } catch (error) {
-   logger.error('There were some issues parsing data from github.');
+   hotDiscussionsFailed = true;
+   logger.error('Failed to fetch hot discussions:');
+   logger.error(error);
  }
```
Hot-discussions try/catch coupling discards partial success.
Issues and PRs are fetched within the same try block. If Queries.hotDiscussionsIssues(...) succeeds but Queries.hotDiscussionsPullRequests(...) later fails, the already-fetched issues data is thrown away and hotDiscussions remains []. Given that the whole point of this PR is to keep partial fresh data, it's worth fetching the two halves with independent error handling and merging whatever succeeded before scoring/sorting.
♻️ Suggested split
```diff
- try {
- const issues = (await getDiscussions(
- Queries.hotDiscussionsIssues(cutoffDate),
- PAGE_SIZE,
- null,
- MAX_PAGES_HOT_DISCUSSIONS
- )) as HotDiscussionsIssuesNode[];
- const PRs = (await getDiscussions(
- Queries.hotDiscussionsPullRequests(cutoffDate),
- PAGE_SIZE,
- null,
- MAX_PAGES_HOT_DISCUSSIONS
- )) as HotDiscussionsPullRequestsNode[];
- const discussions = issues.concat(PRs);
-
- hotDiscussions = await getHotDiscussions(discussions);
- logger.info(`Collected ${hotDiscussions.length} hot discussions`);
- } catch (error) {
- hotDiscussionsFailed = true;
- logger.error('Failed to fetch hot discussions:');
- logger.error(error);
- }
+ let issues: HotDiscussionsIssuesNode[] = [];
+ let PRs: HotDiscussionsPullRequestsNode[] = [];
+ let issuesOk = false;
+ let prsOk = false;
+
+ try {
+ issues = (await getDiscussions(
+ Queries.hotDiscussionsIssues(cutoffDate),
+ PAGE_SIZE, null, MAX_PAGES_HOT_DISCUSSIONS
+ )) as HotDiscussionsIssuesNode[];
+ issuesOk = true;
+ } catch (error) {
+ logger.error('Failed to fetch hot-discussion issues:', error);
+ }
+
+ try {
+ PRs = (await getDiscussions(
+ Queries.hotDiscussionsPullRequests(cutoffDate),
+ PAGE_SIZE, null, MAX_PAGES_HOT_DISCUSSIONS
+ )) as HotDiscussionsPullRequestsNode[];
+ prsOk = true;
+ } catch (error) {
+ logger.error('Failed to fetch hot-discussion PRs:', error);
+ }
+
+ if (!issuesOk && !prsOk) {
+ hotDiscussionsFailed = true;
+ } else {
+ hotDiscussions = await getHotDiscussions(issues.concat(PRs));
+ logger.info(`Collected ${hotDiscussions.length} hot discussions`);
+  }
```

🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis
[warning] 371-376: This assertion is unnecessary since it does not change the type of the expression.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/dashboard/build-dashboard.ts` around lines 364 - 385, Split the
single try/catch into two independent fetch blocks so partial results aren't
discarded: call getDiscussions with Queries.hotDiscussionsIssues(...) inside its
own try/catch and similarly call getDiscussions with
Queries.hotDiscussionsPullRequests(...) in a separate try/catch, storing results
into local arrays (e.g., issues and PRs) and setting hotDiscussionsFailed (or
separate flags like hotIssuesFailed/hotPRsFailed) when each fetch fails; then
concat the successful arrays into discussions and pass that combined list to
getHotDiscussions(discussions), assign to hotDiscussions, and update logging
with counts and detailed error logs via logger.error(error) in each catch.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/dashboard/build-dashboard.test.ts (1)
22-30: ⚠️ Potential issue | 🟡 Minor: Remove unused imports.

`mockMediumRateLimitResponse` and `mockRateLimitResponse` are imported but never referenced — only `mockHealthyRateLimitResponse` is used in this file. ESLint and SonarCloud both flag these.

🧹 Proposed fix

```diff
 import {
   discussionWithMoreComments,
   fullDiscussionDetails,
   issues,
   mockDiscussion,
-  mockHealthyRateLimitResponse,
-  mockMediumRateLimitResponse,
-  mockRateLimitResponse
+  mockHealthyRateLimitResponse
 } from '../fixtures/dashboardData';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/dashboard/build-dashboard.test.ts` around lines 22 - 30, Remove the unused imports mockMediumRateLimitResponse and mockRateLimitResponse from the import list (leave discussionWithMoreComments, fullDiscussionDetails, issues, mockDiscussion, mockHealthyRateLimitResponse) in build-dashboard.test.ts so ESLint and SonarCloud warnings go away; update the import statement that currently references mockMediumRateLimitResponse and mockRateLimitResponse to only import the actually used symbols.
🧹 Nitpick comments (2)
tests/dashboard/build-dashboard.test.ts (1)
40-69: Optional: avoid `require()` calls inside tests.

Lines 67, 110, and 437 use CommonJS `require('../../scripts/helpers/utils')` to re-grab the mocked `pause` after `jest.resetAllMocks()` (which clears implementations). It works but trips `global-require` and duplicates the import-shape across test bodies. A cleaner pattern is to import `pause` once at the top alongside the existing imports and reset it on the same reference:

♻️ Suggested pattern

```diff
 import { logger } from '../../scripts/helpers/logger';
+import { pause } from '../../scripts/helpers/utils';

 beforeEach(() => {
   jest.resetAllMocks();
   consoleErrorSpy = jest.spyOn(console, 'error').mockImplementation(() => {});
   consoleLogSpy = jest.spyOn(console, 'log').mockImplementation(() => {});
-  // Re-apply pause mock after resetAllMocks
-  const { pause } = require('../../scripts/helpers/utils');
-  (pause as jest.Mock).mockResolvedValue(undefined);
 });
```

The `jest.mock('../../scripts/helpers/utils', ...)` call is hoisted, so the imported `pause` already points to the mock; `resetAllMocks` only resets behavior, not the binding.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/dashboard/build-dashboard.test.ts` around lines 40 - 69, The tests re-require the module to get the mocked pause after calling jest.resetAllMocks(), which triggers global-require linting and duplicates import shape; instead import pause once at the top alongside other imports so the hoisted jest.mock provides the mocked binding, then after each jest.resetAllMocks() call cast that imported pause to jest.Mock and call mockResolvedValue(undefined) (i.e., replace the inline require('../../scripts/helpers/utils') uses with the top-level imported pause and reset its mock implementation after resetAllMocks()).scripts/dashboard/build-dashboard.ts (1)
391-395: Consider bounding `goodFirstIssues` pagination too.

`getDiscussions(Queries.goodFirstIssues, PAGE_SIZE)` defaults `maxPages = 0` (unlimited), while hot discussions are capped at `MAX_PAGES_HOT_DISCUSSIONS`. In normal conditions this is fine, but combined with `adaptiveDelay` waiting up to ~1 h on low remaining quota, an unbounded recursion here could dominate workflow runtime exactly when API capacity is already constrained. Adding a similar cap (e.g., `MAX_PAGES_GOOD_FIRST_ISSUES`) keeps both sections symmetric and bounds worst-case runtime.

♻️ Suggested change

```diff
-const MAX_PAGES_HOT_DISCUSSIONS = 5;
+const MAX_PAGES_HOT_DISCUSSIONS = 5;
+
+const MAX_PAGES_GOOD_FIRST_ISSUES = 5;

-  const rawGoodFirstIssues: GoodFirstIssues[] = await getDiscussions(Queries.goodFirstIssues, PAGE_SIZE);
+  const rawGoodFirstIssues: GoodFirstIssues[] = await getDiscussions(
+    Queries.goodFirstIssues,
+    PAGE_SIZE,
+    null,
+    MAX_PAGES_GOOD_FIRST_ISSUES
+  );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/dashboard/build-dashboard.ts` around lines 391 - 395, The call to getDiscussions(Queries.goodFirstIssues, PAGE_SIZE) in build-dashboard.ts can loop unbounded because getDiscussions defaults maxPages=0 (unlimited); change the call to pass a bounded maxPages (e.g., MAX_PAGES_GOOD_FIRST_ISSUES) similar to hot discussions so pagination is capped and worst-case runtime is bounded; create/declare the MAX_PAGES_GOOD_FIRST_ISSUES constant if missing and update the invocation near mapGoodFirstIssues/goodFirstIssues to getDiscussions(Queries.goodFirstIssues, PAGE_SIZE, MAX_PAGES_GOOD_FIRST_ISSUES).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f5123863-c955-4eba-8913-1812c91728c1
📒 Files selected for processing (2)
- scripts/dashboard/build-dashboard.ts
- tests/dashboard/build-dashboard.test.ts
Actionable comments posted: 1
♻️ Duplicate comments (1)
scripts/dashboard/build-dashboard.ts (1)
375-396: ⚠️ Potential issue | 🟠 Major: Hot-discussion partial success is still all-or-nothing.

The outer `try` still wraps both `Queries.hotDiscussionsIssues(...)` and `Queries.hotDiscussionsPullRequests(...)`. If one succeeds and the other gets rate-limited, the successful half is discarded and `hotDiscussions` remains empty, which undercuts the new partial-data behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/dashboard/build-dashboard.ts` around lines 375 - 396, The current try/catch around both getDiscussions calls makes hotDiscussions all-or-nothing; split the calls so each (Queries.hotDiscussionsIssues and Queries.hotDiscussionsPullRequests) is wrapped in its own try/catch, append successful results to a local discussions array, set hotDiscussionsFailed if either fails, and only call getHotDiscussions(discussions) once with whatever partial data was collected; preserve logging via logger.error for each individual failure and logger.info for the final collected count using hotDiscussions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 233f476e-af4b-4bc9-bfe6-3174014ff85b
📒 Files selected for processing (5)
- dashboard.json
- scripts/dashboard/build-dashboard.ts
- scripts/dashboard/issue-queries.ts
- tests/dashboard/build-dashboard.test.ts
- tests/fixtures/dashboardData.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- dashboard.json
- scripts/dashboard/issue-queries.ts
```ts
  return retryWithBackoff(
    () =>
      graphql(isPR ? Queries.pullRequestById : Queries.issueById, {
        id,
        headers: {
          authorization: `token ${token}`
        }
      }),
    `getDiscussionByID ${id}`
  );
```
getDiscussionByID() still bypasses the new throttling path.
This call is retryable now, but it is still unpaced. In the same file, processHotDiscussions() can fan out several paginated detail lookups with Promise.all(...), so follow-up requests can still hit GitHub in bursts and trip secondary limits before backoff helps. Please throttle these by-id fetches too, or stop issuing them concurrently when extra comment pages are needed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/dashboard/build-dashboard.ts` around lines 206 - 215, The
getDiscussionByID call currently uses retryWithBackoff but bypasses the module's
throttling/pacing and can be called concurrently from processHotDiscussions;
change getDiscussionByID to enqueue requests through the same throttler used
elsewhere (or reuse the existing pacedRequest / requestQueue helper) instead of
calling graphql directly so per-id fetches are rate-limited, and update callers
(notably processHotDiscussions) to await the throttled promise rather than
spawning Promise.all of raw fetches; keep the retryWithBackoff wrapper around
the throttled call and preserve the same arguments (Queries.pullRequestById /
Queries.issueById and headers) so behavior stays identical except for pacing.
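One cheap way to pace these without restructuring `processHotDiscussions` is a tiny promise-chain queue; `pacedRequest` below is a hypothetical helper, not something that exists in the file yet:

```ts
// Sketch: serialize by-id fetches through one promise chain so Promise.all callers
// still receive per-item promises, but requests leave the process one at a time.
let queue: Promise<unknown> = Promise.resolve();

function pacedRequest<T>(task: () => Promise<T>, gapMs = 500): Promise<T> {
  const run = queue.then(task);
  // Keep the chain alive even if this task rejects, and add a gap before the next one.
  queue = run.catch(() => undefined).then(() => pause(gapMs));
  return run;
}

// getDiscussionByID would then wrap its GraphQL call, keeping the retry layer on the outside:
// return retryWithBackoff(
//   () => pacedRequest(() => graphql(isPR ? Queries.pullRequestById : Queries.issueById, { id, headers })),
//   `getDiscussionByID ${id}`
// );
```

Callers can keep their `Promise.all(...)` fan-out; the queue turns the burst into a paced sequence.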



Related #5333
Summary by CodeRabbit

- New Features
- Documentation
- Chores
- Tests