
chore: various fixes, improvements for optimization package #135

Open
andrewklatzke wants to merge 4 commits into aklatzke/AIC-1793/output-name-option from aklatze/AIC-2178/verify-runs-endpoint

Conversation

@andrewklatzke
Contributor

@andrewklatzke commented Apr 15, 2026

Requirements

  • I have added test coverage for new or changed functionality
  • I have followed the repository's pull request submission guidelines
  • I have validated my changes against all supported platform versions

Describe the solution you've provided

Provides fixes and improvements to the optimization method after some QA on my end.

  • ensures tools are captured and passed through in the final payload so they don't get lost when posting back to LaunchDarkly
  • better matching for model configs, defaulting to global models since those will be by far the most used (custom keys still work for custom models)
  • fixes state updating; previously "Ground truth" iterations returned fewer results than expected (only the last result, not the intermediaries)
  • fixes a bug where the validation phase was advancing the iteration counter and inflating the iteration number unnecessarily
  • late addition: adds a shared dataclass for the LLM call so that agent and judge calls share the same call signature (the same fn can be provided to both)
  • late addition: makes context_choices optional, defaulting to an anonymous context
  • late addition: creates a protocol class for the config and LLMCallConfig so that the same handler can be used for both if the user wants
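The shared-call-signature and anonymous-context changes above can be sketched roughly as follows. This is a minimal illustration, not the package's actual API: the names `LLMCall`, `make_anonymous_context`, and `run_call` are hypothetical stand-ins.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Hypothetical sketch: LLMCall stands in for the shared dataclass the PR
# describes; the real field names in ldai_optimization may differ.

@dataclass
class LLMCall:
    """One call shape shared by agent and judge LLM invocations."""
    messages: list
    model_config_key: str
    context: dict = field(default_factory=dict)

def make_anonymous_context() -> dict:
    # Default used when context_choices is not supplied (assumed shape).
    return {"kind": "user", "key": "anonymous", "anonymous": True}

def run_call(call: LLMCall, handler: Callable[[LLMCall], Any]) -> Any:
    # Because agent and judge calls share the LLMCall shape,
    # the same handler function can serve both.
    if not call.context:
        call.context = make_anonymous_context()
    return handler(call)
```

With one dataclass for both call sites, callers can pass a single handler function regardless of whether the call is for the agent or the judge.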

Describe alternatives you've considered

These are fixes for inconsistencies and bugs in the package; no alternatives were considered.


Note

Medium Risk
Touches core optimization run lifecycle and LaunchDarkly API persistence semantics (new PATCH flow, iteration/validation folding), so regressions could affect UI progress reporting and stored run data; changes are well-covered by expanded tests.

Overview
Fixes result persistence for optimize_from_config runs by creating one result record per logical iteration (POST) and incrementally updating it (PATCH) with status, telemetry (latency/tokens), scores (including judge thresholds), and variation metadata, while also auto-closing prior iterations so ground-truth sample records aren’t left stale.

Corrects validation-phase behavior by folding validation sub-iterations back into the parent iteration’s persisted record and ensuring terminal events (success/failure/turn completed) use the main iteration context (avoids mismatched userInput vs completionResponse).
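The folding behavior might look something like this sketch, where validation sub-iteration scores are merged onto the parent iteration's record rather than advancing the counter. The function name and record shape are illustrative assumptions.

```python
# Hypothetical sketch: fold validation sub-iterations back into the
# parent iteration's persisted record, so terminal events report the
# main iteration's userInput/completionResponse pair, not a sub-step's.

def fold_validation(parent_record: dict, sub_results: list) -> dict:
    folded = dict(parent_record)  # parent is left untouched
    # Sub-iterations contribute their scores but do not advance
    # the iteration counter.
    folded["validation_scores"] = [r["score"] for r in sub_results]
    folded["status"] = "completed"
    return folded
```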

Improves auto-commit payloads by retaining the original tool keys from the fetched agent variation and by resolving modelConfigKey via a new _find_model_config helper that prefers global model configs; also prefetches model configs once and patches the final result with createdVariationKey after commit.
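A resolver with the prefer-global behavior described for `_find_model_config` could look like the following. This is a simplified guess at the helper's behavior; the real signature and config record shape in the package may differ.

```python
from typing import Optional

def find_model_config(configs: list, model_name: str) -> Optional[str]:
    """Resolve a modelConfigKey, preferring global model configs."""
    # Global model configs are by far the most common, so match them first.
    for cfg in configs:
        if cfg.get("global") and cfg["modelName"] == model_name:
            return cfg["key"]
    # Fall back to custom configs for custom models.
    for cfg in configs:
        if cfg["modelName"] == model_name:
            return cfg["key"]
    return None
```

Prefetching the config list once and passing it into a pure resolver like this also avoids re-fetching model configs on every iteration, which matches the "prefetches model configs once" note above.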

Adds LLMCallConfig/LLMCallContext Protocols (exported in __init__) so one handler can be used for both agent and judge calls, renames judge context variables to current_variables, and makes context_choices optional (defaults to an anonymous context).
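The Protocol-based approach can be illustrated with `typing.Protocol`: any object exposing the right attributes satisfies the protocol structurally, so one handler accepts either config shape. The attribute names and the `AgentConfig`/`JudgeConfig` classes below are assumptions, not the package's exported types.

```python
from typing import Protocol

class LLMCallConfig(Protocol):
    # Structural interface: anything with these attributes qualifies.
    model_config_key: str
    messages: list

class AgentConfig:
    def __init__(self, model_config_key: str, messages: list):
        self.model_config_key = model_config_key
        self.messages = messages

class JudgeConfig:
    def __init__(self, model_config_key: str, messages: list, current_variables=None):
        self.model_config_key = model_config_key
        self.messages = messages
        self.current_variables = current_variables or {}

def handler(config: LLMCallConfig) -> str:
    # Works for both agent and judge configs via structural typing;
    # no inheritance relationship is required between the two classes.
    return config.model_config_key
```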

Reviewed by Cursor Bugbot for commit 8f3468f.

@andrewklatzke andrewklatzke requested a review from a team as a code owner April 15, 2026 17:54
…ype, remove required context_choices argument and default to anon

@cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


Reviewed by Cursor Bugbot for commit 55674ae.

Comment thread: packages/optimization/src/ldai_optimization/client.py

2 participants