Keep WebSocket reconnects alive #2513
hogeheer499-commits wants to merge 1 commit into pingdotgg:main from
Conversation
Macroscope approvability verdict: Needs human review. The change moves WebSocket reconnection from limited retries (~8 attempts) to infinite retries.
Adding context for the reconnect behavior change flagged by Macroscope. The intent is not to create an aggressive retry loop: the retry cadence still uses the existing exponential backoff and remains capped.

Why I think this is worth human review rather than an automatic approval-only change: T3 Code is often used through LAN, tailnet, SSH-forwarded, or desktop-managed remote environments. In those setups, a server restart or short network interruption should recover without requiring the user to focus the tab or manually retry after the previous 8-attempt budget has been exhausted.

Verification relevant to this risk is listed under Validation below.
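To make the cadence concrete, the capped exponential backoff described above can be sketched as a pure delay function. The constants and the function name here are hypothetical illustrations, not taken from the actual codebase:

```typescript
// Hypothetical constants; the real client keeps its own base delay and cap.
const BASE_DELAY_MS = 250;
const MAX_DELAY_MS = 30_000;

// Delay before reconnect attempt `attempt` (1-based): exponential growth,
// clamped so repeated failures settle at a steady, bounded cadence.
function reconnectDelayMs(attempt: number): number {
  const exponential = BASE_DELAY_MS * 2 ** (attempt - 1);
  return Math.min(exponential, MAX_DELAY_MS);
}
```

Under these example constants, early attempts retry quickly (250 ms, 500 ms, 1 s, ...) while every later attempt waits the same capped interval, which is why removing the attempt limit does not produce a tight retry loop.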
If maintainers prefer a bounded version, I can adjust this to a larger finite retry window instead of unlimited retries.
What changed
The reconnect status label now shows `Attempt N` instead of `Attempt N/max`.

Why this should exist
This is a small reliability fix.
T3 Code is often used through LAN, tailnet, SSH-forwarded, or other remote endpoints. A temporary server restart or private-network drop should not leave an already-connected client permanently stopped after a fixed retry budget. With the previous `Schedule.recurs(7)` policy, the client could exhaust reconnect attempts and require focus/online/manual retry before trying again. Continuous retry is still bounded by the existing backoff cap, so repeated failures do not spin aggressively.
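To contrast the two policies, here is a hedged sketch of the behavioral difference as a pure simulation. The type and function names are hypothetical (the actual client expresses this with Effect's `Schedule` combinators, e.g. the previous `Schedule.recurs(7)`):

```typescript
type RetryPolicy = {
  maxAttempts: number | null; // null = retry forever (this PR's behavior)
  baseDelayMs: number;
  capDelayMs: number;
};

// Simulate a reconnect loop where the connection succeeds only after
// `outageAttempts` failed tries (e.g. the server restarting).
// Returns the attempt number that succeeded, or null if the policy gave up.
function simulateReconnect(
  policy: RetryPolicy,
  outageAttempts: number,
): number | null {
  for (let attempt = 1; ; attempt++) {
    if (policy.maxAttempts !== null && attempt > policy.maxAttempts) {
      return null; // budget exhausted: client stays offline until manual retry
    }
    if (attempt > outageAttempts) {
      return attempt; // server reachable again
    }
    // Delay between attempts stays bounded even with unlimited retries.
    const delayMs = Math.min(
      policy.baseDelayMs * 2 ** (attempt - 1),
      policy.capDelayMs,
    );
    void delayMs; // a real loop would sleep here before the next attempt
  }
}
```

With a `Schedule.recurs(7)`-style budget (one initial attempt plus seven retries), any outage longer than eight attempts strands the client. With `maxAttempts: null`, the client reconnects on the first attempt after the outage ends, while the capped delay keeps the retry pace fixed.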
Important scope clarification from later incident debugging: this PR is not presented as the root-cause fix for one local `:3777` reproducer where a custom hotpatch proxy healthcheck was restarting the proxy process. That local loop was fixed outside this repository. This PR only addresses the upstream web client's behavior after a real WebSocket/server/network drop has already happened.

Scope
Validation
- `bun run --filter @t3tools/web test src/rpc/wsTransport.test.ts` - 25/25 passed
- `bun run --filter @t3tools/web test src/rpc/wsConnectionState.test.ts src/components/WebSocketConnectionSurface.logic.test.ts` - 10/10 passed
- `bun run --filter @t3tools/web test` - 96 files, 995 tests passed
- `bun run fmt` - passed
- `bun run lint` - 0 errors, existing warnings only
- `bun run typecheck` - 12/12 packages passed, existing Effect language-service messages only