added scale tests for scaleUp and scaleDown #606
What type of PR is this?
/kind feature
What this PR does / why we need it:
Adds an e2e scale benchmark suite under `operator/e2e/tests/scale/` that measures the marginal cost of growing and shrinking a running `PodCliqueSet` by patching `spec.replicas`. Each scenario isolates a single resize event on a steady-state cluster so the timeline captures only the controller's incremental work, not cold-start setup or teardown.
Grove's existing scale benchmarks (`Test_ScaleTest_1000`, `Test_ScaleTest_5000_Deletion`) exercise the full lifecycle (deploy → ready → delete) — they're a good guard against regressions in cold-start and cascade-delete, but they miss the day-2 path that production users hit most often: changing `spec.replicas` on an already-running PCS.

**Scenarios (six benchmarks + two sanity variants)**
Scale-up:
- `ScaleUp_Tiny`
- `ScaleUp_FromZero`
- `ScaleUp_SmallDelta`
- `ScaleUp_LargeDelta`

Scale-down:
- `ScaleDown_Tiny`: exercises the new `PodsAtCountCondition` end-to-end on a small dev cluster.
- `ScaleDown_ToZero`: covers the same path as `Test_ScaleTest_5000_Deletion` at a smaller, finer-grained scale.
- `ScaleDown_SmallDelta`
- `ScaleDown_LargeDelta`

**What's in the PR**
- `operator/e2e/tests/scale/scale_up_test.go` — `Test_ScaleUp_Tiny`, `Test_ScaleUp_FromZero`, `Test_ScaleUp_SmallDelta`, `Test_ScaleUp_LargeDelta`, plus a shared `runScaleUpTest` helper and `scaleUpVariant` struct.
- `operator/e2e/tests/scale/scale_down_test.go` — `Test_ScaleDown_Tiny`, `Test_ScaleDown_ToZero`, `Test_ScaleDown_SmallDelta`, `Test_ScaleDown_LargeDelta`, plus a shared `runScaleDownTest` helper and `scaleDownVariant` struct.
- `operator/e2e/yaml/`: `scale-up-{tiny,from-zero,small-delta,large-delta}.yaml`, `scale-down-{tiny,to-zero,small-delta,large-delta}.yaml`. Each encodes the initial replica count; the test patches `spec.replicas` to the target.
- `PodsAtCountCondition` in `operator/e2e/measurement/condition/pod.go` — fires when the live pod count drops to ≤ target. Required for scale-down because the existing `PodsCreatedCondition` is ≥-only and would fire immediately when the starting count already exceeds the target (see the sketch after this list).
- `workerNodes` override on `scaleUpVariant`/`scaleDownVariant` so the tiny sanity tests can run on smaller dev clusters (5 nodes) while the real benchmarks keep the 30-node default.
- `make run-scale-test TEST_PATTERN=<pattern>` support — mirrors the existing `run-e2e` convention, no new targets.
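
For context, a minimal sketch of what a count-based readiness check like this can look like; the helper name, signature, and use of a controller-runtime client are assumptions for illustration, not the actual API added in `operator/e2e/measurement/condition/pod.go`:

```go
// Illustrative sketch only: a "pods at or below target" check in the spirit of
// PodsAtCountCondition. Names and signatures are assumptions, not Grove's API.
package condition

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// podsAtOrBelowTarget reports whether the number of live pods matching the
// given labels in namespace has dropped to at most target. Comparing with <=
// (rather than ==) means a transient overshoot between two consecutive polls
// during cascade-delete cannot make the milestone miss its trigger.
func podsAtOrBelowTarget(ctx context.Context, c client.Client, namespace string, labels map[string]string, target int) (bool, error) {
	var pods corev1.PodList
	if err := c.List(ctx, &pods, client.InNamespace(namespace), client.MatchingLabels(labels)); err != nil {
		return false, err
	}
	live := 0
	for _, p := range pods.Items {
		// Count only pods that are neither terminating nor already finished.
		if p.DeletionTimestamp == nil && p.Status.Phase != corev1.PodSucceeded && p.Status.Phase != corev1.PodFailed {
			live++
		}
	}
	return live <= target, nil
}
```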

**Test shape (every scenario)**

1. Setup: deploy at the initial replica count and wait for `initial-pods-created` + `initial-pods-ready` (skipped when initial = 0).
2. Resize: patch `spec.replicas` to the target; this is the phase the pprof/metrics windows isolate. Up-side milestones: `all-pods-created` + `all-pods-ready`. Down-side milestone: `pods-at-target` (the new condition). A rough sketch of the resize patch itself follows this list.
3. Teardown: `pcs-deleted`.
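
The resize phase boils down to a single patch of the replica count. A rough sketch, assuming a controller-runtime client; the group/version/kind and function name are illustrative and not taken from this PR:

```go
// Illustrative sketch only: merge-patch spec.replicas on a running
// PodCliqueSet-like object. The GVK below is an assumption; the real tests use
// Grove's own types and helpers.
package scale

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// patchReplicas merge-patches spec.replicas to target on the named object.
func patchReplicas(ctx context.Context, c client.Client, namespace, name string, target int) error {
	obj := &unstructured.Unstructured{}
	// Assumed GVK, for illustration only.
	obj.SetGroupVersionKind(schema.GroupVersionKind{Group: "grove.io", Version: "v1alpha1", Kind: "PodCliqueSet"})
	obj.SetNamespace(namespace)
	obj.SetName(name)
	patch := []byte(fmt.Sprintf(`{"spec":{"replicas":%d}}`, target))
	return c.Patch(ctx, obj, client.RawPatch(types.MergePatchType, patch))
}
```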
All scale tests reuse the existing `runScaleTest` scaffolding in `scale_test.go`, so the output format (stdout summary + `scale-test-results.json`), pprof capture, and Grove metadata export match the rest of the scale suite.

**How to run**
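
Via the existing make target with the new pattern filter (exact invocation per the `TEST_PATTERN` convention described above), e.g. `make run-scale-test TEST_PATTERN=Test_ScaleUp_SmallDelta` for a single scenario, or plain `make run-scale-test` to run everything under `tests/scale/` as before.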
Which issue(s) this PR fixes:
Fixes #604
Special notes for your reviewer:
- `Test_*_Tiny` variants are intentionally part of the suite, not debug-only — they're sanity checks for the test plumbing itself (cluster, KWOK stages, the new `PodsAtCountCondition`) and complete in seconds. Useful when iterating on the scale fixtures or after cluster changes.
- `PodsAtCountCondition` uses ≤ rather than == so a transient overshoot during cascade-delete between two consecutive polls doesn't make the milestone miss its trigger.
- `scaleDownWorkerNodes` and `scaleUpWorkerNodes` are intentionally set to 30 (vs. `defaultScaleWorkerNodes = 100`) — these tests are about controller throughput on the day-2 path, not scheduler capacity. ~1100 KWOK pods on 30 nodes is ~37 pods/node, well under the 110-pod kubelet default.
- `TEST_PATTERN` is added to `run-scale-test` only; the previous default behavior (run everything in `tests/scale/`) is unchanged.

Does this PR introduce an API change?
Additional documentation e.g., enhancement proposals, usage docs, etc.: