
ci(conf): update QuestDB client dependency references in Maven options#5

Open
RaphDal wants to merge 5 commits into main from rd_ci_client

Conversation


RaphDal (Collaborator) commented Feb 3, 2026

No description provided.

RaphDal changed the title from "ci: update QuestDB client dependency references in Maven options" to "ci(conf): update QuestDB client dependency references in Maven options" on Feb 3, 2026
jerrinot added a commit that referenced this pull request Mar 4, 2026
bluestreak01 added a commit that referenced this pull request Apr 27, 2026
Re-adds the volatile generation counter (and its companion retry loop in
flushPendingRows) that the cursor-strip change had removed. This is the
foundation the reconnect work (#20/#21) builds on — the producer needs a
way to detect that the wire-side actor has rotated state mid-encode so
it can discard now-poisoned schema-ID refs and re-encode with full
schema definitions.

What lands here:

  * QwpWebSocketSender: volatile connectionGeneration + lastSeenGeneration
    pair. Bumped on initial recovery from disk (the recovered FSNs were
    never seen by *this* server connection, so the first batch must
    re-publish full schemas). Reconnect path will bump in subsequent
    work.

  * flushPendingRows: encode-mid-reconnect retry loop. Sample gen before
    encode + after finishMessage; if it changed, discard the encoded
    bytes (table buffers haven't been reset yet — source rows are
    intact) and retry with reset schema state. Bounded at
    MAX_SCHEMA_RACE_RETRIES = 10 so reconnect-faster-than-encode
    surfaces a hard error instead of spinning.

  * CursorSendEngine.wasRecoveredFromDisk(): single-bit accessor the
    sender reads during ensureConnected to decide whether to bump.

  * SegmentRing.openExisting: filter out empty hot-spare leftovers
    (frameCount=0) from prior sessions. Those carry the provisional
    baseSeq=0 and would otherwise collide with the real baseSeq=0
    segment and trip the contiguity check. Surfaced by the new
    recovery test — caught a real bug in the recovery scan.

  * Test hooks bumpConnectionGenerationForTest / accessors for gen and
    maxSent*Id so reconnect-effect tests can run without spinning up
    the (still-not-implemented) reconnect path.
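The generation pair and the bounded retry loop described above can be sketched roughly as follows. This is illustrative only: the class and field names echo the commit text, but the bodies (`encode`, the `"SCHEMA+"` marker, the row type) are invented stand-ins, not the real QwpWebSocketSender implementation.

```java
// Sketch only: real encoding/transport details are elided.
final class GenerationRetrySketch {
    private static final int MAX_SCHEMA_RACE_RETRIES = 10;

    // Bumped by the wire-side actor when connection state rotates
    // (disk recovery now, reconnect in follow-up work).
    private volatile long connectionGeneration;
    private long lastSeenGeneration;
    private boolean schemaStateNeedsReset;

    void bumpConnectionGeneration() {
        connectionGeneration++;
    }

    byte[] flushPendingRows(java.util.List<String> rows) {
        for (int attempt = 0; attempt < MAX_SCHEMA_RACE_RETRIES; attempt++) {
            long genBefore = connectionGeneration;       // sample before encode
            if (genBefore != lastSeenGeneration) {
                // Connection rotated: schema-ID refs are poisoned, so the
                // next encode must re-publish full schema definitions.
                schemaStateNeedsReset = true;
                lastSeenGeneration = genBefore;
            }
            byte[] encoded = encode(rows, schemaStateNeedsReset);
            schemaStateNeedsReset = false;               // sticky: reset once per bump
            if (connectionGeneration == genBefore) {     // re-sample after encode
                return encoded;
            }
            // Generation moved mid-encode: discard the bytes (source rows
            // are intact) and loop to re-encode with reset schema state.
        }
        throw new IllegalStateException("reconnect faster than encode; giving up");
    }

    private byte[] encode(java.util.List<String> rows, boolean fullSchemas) {
        String prefix = fullSchemas ? "SCHEMA+" : "";
        return (prefix + String.join(",", rows))
                .getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }
}
```

Note how the "sticky" behavior falls out naturally: `lastSeenGeneration` is updated on the first flush after a bump, so subsequent flushes without a bump skip the schema reset.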

Tests cover: gen=0 for fresh connect, gen=1 after disk recovery, gen
bump triggers schema-state reset on the next encode and is sticky
(further flushes without bump don't re-reset).

Spec decisions #4 and #5 land here.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
bluestreak01 added a commit that referenced this pull request Apr 27, 2026
The cursor I/O loop previously treated any wire failure as terminal —
first disconnect = sender broken, every subsequent batch threw. Now,
when the sender wires a ReconnectFactory + ReconnectListener, a wire
failure triggers:

  1. WARN log
  2. Build a fresh WebSocketClient via the factory (same auth/TLS/host)
  3. Reset wire state: nextWireSeq=0, fsnAtZero = engine.ackedFsn() + 1
  4. Reposition the cursor at the first unacked FSN (replay)
  5. Notify the listener → producer's connectionGeneration bumps so
     the next encode emits full schema definitions, not refs the new
     server has never seen
  6. Outer ioLoop continues — nextWireSeq=0 starts on the new wire,
     trySendOne picks up at the repositioned cursor and replays every
     unacked frame, then continues with whatever the producer publishes
     next
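The six steps above can be sketched as a single failure handler. The functional interfaces mirror the `ReconnectFactory`/`ReconnectListener` names from the commit; everything else here (`ackedFsn` as a plain field, `Object` as the client type) is a hypothetical simplification, not the real CursorWebSocketSendLoop code.

```java
// Illustrative sketch of the one-attempt reconnect mechanics.
final class ReconnectSketch {
    @FunctionalInterface interface ReconnectFactory { Object newClient(); }
    @FunctionalInterface interface ReconnectListener { void onWireReconnect(); }

    long nextWireSeq;
    long fsnAtZero;
    long cursorFsn;
    Object wire;
    long ackedFsn = 41; // pretend the server has acked up to FSN 41

    boolean handleWireFailure(ReconnectFactory factory, ReconnectListener listener) {
        if (factory == null) {
            return false;           // legacy fail-fast semantics: no factory wired
        }
        System.err.println("WARN: wire failure, attempting reconnect"); // 1. WARN log
        wire = factory.newClient(); // 2. fresh client (same auth/TLS/host)
        nextWireSeq = 0;            // 3. reset wire state
        fsnAtZero = ackedFsn + 1;
        cursorFsn = ackedFsn + 1;   // 4. reposition at first unacked FSN (replay)
        listener.onWireReconnect(); // 5. producer bumps connectionGeneration
        return true;                // 6. outer ioLoop continues on the new wire
    }
}
```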

Added in main:
  * CursorWebSocketSendLoop.ReconnectFactory + .ReconnectListener
    interfaces (both functional, both null-able for legacy "fail-fast"
    semantics)
  * positionCursorAt(fsn) — walks frames inside the segment containing
    fsn to find the byte offset
  * SegmentRing.findSegmentContaining(fsn) + CursorSendEngine
    pass-through — used by the cursor reposition
  * QwpWebSocketSender extracts buildAndConnect() to use both for the
    initial connect and as the reconnect factory; onWireReconnect()
    is the listener that bumps connectionGeneration
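The cursor reposition pair can be sketched as below. The `Segment` shape (a `baseSeq` plus per-frame lengths) is an assumption made for illustration; the real SegmentRing layout is not shown in this PR.

```java
// Hypothetical sketch: find the segment whose FSN range contains the
// target, then walk frame lengths to the byte offset of that frame.
final class CursorRepositionSketch {
    static final class Segment {
        final long baseSeq;        // FSN of the first frame in this segment
        final int[] frameLengths;  // encoded length of each frame, in order
        Segment(long baseSeq, int[] frameLengths) {
            this.baseSeq = baseSeq;
            this.frameLengths = frameLengths;
        }
    }

    static Segment findSegmentContaining(java.util.List<Segment> ring, long fsn) {
        for (Segment s : ring) {
            if (fsn >= s.baseSeq && fsn < s.baseSeq + s.frameLengths.length) {
                return s;
            }
        }
        return null; // fsn not held by any segment in the ring
    }

    static long positionCursorAt(java.util.List<Segment> ring, long fsn) {
        Segment s = findSegmentContaining(ring, fsn);
        if (s == null) {
            throw new IllegalArgumentException("fsn not in ring: " + fsn);
        }
        long offset = 0;
        for (long i = s.baseSeq; i < fsn; i++) {   // walk frames inside the segment
            offset += s.frameLengths[(int) (i - s.baseSeq)];
        }
        return offset;
    }
}
```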

This commit covers the *mechanics* (one attempt, succeed-or-fail).
The follow-up commit adds policy: exponential backoff with jitter,
per-outage time cap (reconnect_max_duration_millis, default 300s
per spec decision #2), and auth-failure detection (401/403/non-101
treated as terminal so the retry budget isn't wasted on errors that
won't fix themselves).
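The follow-up policy could look something like the sketch below: full-jitter exponential backoff capped by a per-outage deadline, with non-101 upgrade responses treated as terminal. The 300s default comes from the commit text; the base/max delay constants are assumed values for illustration.

```java
// Sketch of the reconnect policy described above (not the shipped code).
final class BackoffPolicySketch {
    static final long RECONNECT_MAX_DURATION_MILLIS = 300_000; // spec decision #2 default
    static final long BASE_DELAY_MILLIS = 100;    // assumed, not from the PR
    static final long MAX_DELAY_MILLIS = 10_000;  // assumed, not from the PR

    /** Full-jitter delay for this attempt, or -1 once the outage budget is spent. */
    static long nextDelayMillis(int attempt, long outageStartMillis, long nowMillis) {
        if (nowMillis - outageStartMillis >= RECONNECT_MAX_DURATION_MILLIS) {
            return -1; // per-outage time cap exhausted: surface a hard error
        }
        long ceiling = Math.min(MAX_DELAY_MILLIS,
                                BASE_DELAY_MILLIS << Math.min(attempt, 20));
        // Full jitter: uniform in [0, ceiling] to avoid thundering herds.
        return java.util.concurrent.ThreadLocalRandom.current().nextLong(ceiling + 1);
    }

    /** 401/403 (and any non-101 upgrade response) won't fix themselves. */
    static boolean isTerminal(int httpStatus) {
        return httpStatus != 101;
    }
}
```

Treating auth failures as terminal, rather than retryable, keeps the whole 300s budget available for outages that can actually heal (network blips, server restarts).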

Two integration tests:
  * testReconnectAfterServerInducedDisconnect — server ACKs then
    closes; sender reconnects, second batch lands on the new wire
  * testReplayResendsUnackedFramesAcrossReconnect — server receives
    the first frame WITHOUT ACKing then closes; sender reconnects
    and replays the unacked frame on the new connection

Spec decisions #5 (encode-mid-reconnect race) and the core of
#1/#2 (reconnect mechanics) land here.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
