Add 100-item limit to getSandboxesMetrics and batch client requests (#290)
Changes from 2 commits: 11c80b6, 68b7f71, 64cd74d
```diff
@@ -1,14 +1,17 @@
 'use client'
 
-import { useQuery } from '@tanstack/react-query'
+import { useQueries } from '@tanstack/react-query'
 import { useEffect, useMemo, useRef } from 'react'
 import { SANDBOXES_METRICS_POLLING_MS } from '@/configs/intervals'
 import type { Sandboxes } from '@/core/modules/sandboxes/models'
+import type { ClientSandboxesMetrics } from '@/core/modules/sandboxes/models.client'
 import { areStringArraysEqual } from '@/lib/utils/array'
 import { useTRPC } from '@/trpc/client'
 import { useDashboard } from '../../../context'
 import { useSandboxMetricsStore } from '../stores/metrics-store'
 
+const MAX_SANDBOX_IDS_PER_REQUEST = 100
+
 interface UseSandboxesMetricsProps {
   sandboxes: Sandboxes
   pollingIntervalMs?: number
@@ -31,6 +34,14 @@ function useStableSandboxIdsWhileScrolling(
   return activeSandboxIdsRef.current
 }
 
+function chunkArray<T>(array: T[], size: number): T[][] {
+  const chunks: T[][] = []
+  for (let i = 0; i < array.length; i += size) {
+    chunks.push(array.slice(i, i + size))
+  }
+  return chunks
+}
+
 export function useSandboxesMetrics({
   sandboxes,
   pollingIntervalMs = SANDBOXES_METRICS_POLLING_MS,
@@ -54,28 +65,49 @@ export function useSandboxesMetrics({
   const metricsRefetchInterval =
     pollingIntervalMs > 0 ? pollingIntervalMs : false
 
-  const metricsQueryInput = useMemo(
-    () => ({
-      teamSlug: team.slug,
-      sandboxIds: activeSandboxIds,
-    }),
-    [activeSandboxIds, team.slug]
+  const sandboxIdChunks = useMemo(
+    () => chunkArray(activeSandboxIds, MAX_SANDBOX_IDS_PER_REQUEST),
+    [activeSandboxIds]
   )
 
-  const { data } = useQuery(
-    trpc.sandboxes.getSandboxesMetrics.queryOptions(metricsQueryInput, {
-      enabled: shouldEnableMetricsQuery,
-      refetchInterval: metricsRefetchInterval,
-      refetchOnWindowFocus: false,
-      refetchOnMount: false,
-      refetchOnReconnect: true,
-      refetchIntervalInBackground: false,
-    })
-  )
+  const { metrics: mergedMetrics } = useQueries({
+    queries: sandboxIdChunks.map((chunk) =>
+      trpc.sandboxes.getSandboxesMetrics.queryOptions(
+        {
+          teamSlug: team.slug,
+          sandboxIds: chunk,
+        },
+        {
+          enabled: shouldEnableMetricsQuery,
+          refetchInterval: metricsRefetchInterval,
+          refetchOnWindowFocus: false,
+          refetchOnMount: false,
+          refetchOnReconnect: true,
+          refetchIntervalInBackground: false,
+        }
+      )
+    ),
+    combine: (results) => {
+      const hasError = results.some((r) => r.isError)
+      if (hasError) {
+        return { metrics: undefined }
+      }
+
+      const merged: ClientSandboxesMetrics = {}
+      for (const result of results) {
+        if (result.data?.metrics) {
+          Object.assign(merged, result.data.metrics)
+        }
+      }
+      return {
+        metrics: Object.keys(merged).length > 0 ? merged : undefined,
+      }
+    },
+  })
 
   useEffect(() => {
-    if (data?.metrics) {
-      setMetrics(data.metrics)
+    if (mergedMetrics) {
+      setMetrics(mergedMetrics)
     }
-  }, [data, setMetrics])
+  }, [mergedMetrics, setMetrics])
 }
```
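The chunking and all-or-nothing merge behavior from the diff can be sketched as standalone TypeScript. The metrics value type below is a simplified stand-in for `ClientSandboxesMetrics` (the `cpuPct` field is illustrative), and `combineChunks` models the `combine` callback outside of React:

```typescript
// Simplified stand-in for ClientSandboxesMetrics.
type MetricsRecord = Record<string, { cpuPct: number }>

// Split sandbox IDs into request-sized chunks (same shape as the PR's helper).
function chunkArray<T>(array: T[], size: number): T[][] {
  const chunks: T[][] = []
  for (let i = 0; i < array.length; i += size) {
    chunks.push(array.slice(i, i + size))
  }
  return chunks
}

// Mirrors the combine callback: if any chunk errored, report no metrics
// at all (all-or-nothing); otherwise merge chunk results into one record,
// returning undefined when nothing was fetched.
function combineChunks(
  results: { isError: boolean; metrics?: MetricsRecord }[]
): MetricsRecord | undefined {
  if (results.some((r) => r.isError)) return undefined
  const merged: MetricsRecord = {}
  for (const r of results) {
    if (r.metrics) Object.assign(merged, r.metrics)
  }
  return Object.keys(merged).length > 0 ? merged : undefined
}
```

With 250 active sandboxes and a limit of 100, `chunkArray` yields chunks of 100, 100, and 50, and a single failed chunk makes `combineChunks` return `undefined` rather than a partial record.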
|
Comment on lines +94 to 112

🔴 When one or more batch chunks fail (e.g., a transient network error), only the successful chunks contribute to the merged metrics.

What the bug is and how it manifests: the new batching logic issues N parallel chunk queries via `useQueries`. The specific code path is the store update:

```ts
const mergedMetrics = { ...state.metrics, ...metrics }
```

This is a merge, not a replace. Sandbox IDs from the failed chunk still exist as keys in the store, so their rows keep values from an earlier fetch.

Impact: users with >100 active sandboxes (the only ones affected by this PR) will see a metrics table where some rows show up-to-date CPU/memory data and others silently show outdated data from an earlier fetch. There is no staleness badge, spinner, or error message. This is worse than the pre-PR behavior, where a single query failure left ALL rows consistently stale—at least then the polling retry would uniformly correct the situation.

How to fix: Option A (safest): before calling `setMetrics`, bail out if any chunk errored:

```ts
const hasError = queryResults.some((r) => r.isError)
if (mergedMetrics && !hasError) {
  setMetrics(mergedMetrics)
}
```

Option B (better UX): surface an error/staleness indicator when a chunk query fails.
Contributor (Author): Addressed — moved the merge logic into `combine`.
Comment on lines +77 to 112

🔴 The bug: the new merge logic triggers a cascade (lines 78–107) with polling enabled. The previous code didn't have this problem because it issued a single query. How to fix it: use the `combine` option of `useQueries`:

```ts
const mergedMetrics = useQueries({
  queries: sandboxIdChunks.map((chunk) =>
    trpc.sandboxes.getSandboxesMetrics.queryOptions(
      { teamSlug: team.slug, sandboxIds: chunk },
      { ... }
    )
  ),
  combine: (results) => {
    const merged: ClientSandboxesMetrics = {}
    for (const result of results) {
      if (result.data?.metrics) Object.assign(merged, result.data.metrics)
    }
    return Object.keys(merged).length > 0 ? merged : undefined
  },
})
```

With this change, the merge happens inside `combine`, so the hook consumes a single combined result.
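Much of this review comment's reasoning was collapsed in the page capture, but the proposed `combine` fix points at result identity: merging chunk results into a fresh object during render gives the merged value a new identity on every render, which `Object.is`-based dependency checks never treat as equal. An illustrative sketch (this framing is an assumption; the names below are not from the PR):

```typescript
// Merging chunks into a fresh object on every call models a per-render
// merge: each result has a new identity even when the contents match.
const chunkA: Record<string, number> = { 'sb-1': 10 }
const chunkB: Record<string, number> = { 'sb-2': 20 }

function mergePerRender(): Record<string, number> {
  return { ...chunkA, ...chunkB } // new object identity every call
}

const first = mergePerRender()
const second = mergePerRender()
// first and second hold equal contents but distinct identities, so an
// effect depending on the merged object would re-run after every render.
```

Moving the merge into `combine` avoids this, since TanStack Query memoizes the combined result between renders when the underlying query results are unchanged.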
Member: devin, is this still valid?

Contributor (Author): No, this is no longer valid — the comment was on the first commit; the current revision performs the merge inside `combine`.
can we put this in a shared constant inside the sandboxes core/module?
Done — moved `MAX_SANDBOX_IDS_PER_REQUEST` to `src/core/modules/sandboxes/schemas.ts` and imported it in both the tRPC router and the client hook.
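The server-side half of the 100-item limit is not shown in this diff. A hedged sketch of the guard the shared constant enables (the function name and plain-check style are assumptions; the real router presumably enforces this through its input schema in `schemas.ts`):

```typescript
// Shared constant, as moved to the sandboxes core module in this PR.
const MAX_SANDBOX_IDS_PER_REQUEST = 100

// Hypothetical guard mirroring what a max-length check on the
// getSandboxesMetrics input would enforce server-side.
function assertSandboxIdLimit(sandboxIds: string[]): void {
  if (sandboxIds.length > MAX_SANDBOX_IDS_PER_REQUEST) {
    throw new Error(
      `sandboxIds may contain at most ${MAX_SANDBOX_IDS_PER_REQUEST} items`
    )
  }
}
```

Because the client chunks its requests at the same constant, every batch the hook sends passes this check by construction.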