perf: raise server MAX_READ_SIZE to SFTP standard 255 KiB (#197)
Merged

inureyes merged 1 commit into lablup:main on May 10, 2026
Conversation
Member: LGTM
inureyes added a commit that referenced this pull request on May 10, 2026:
SFTP transfer performance release. Bumps Cargo workspace version to 2.1.4 and refreshes README.md, CHANGELOG.md, debian/changelog, and the three man pages (bssh.1, bssh-server.8, bssh-keygen.1) to match. Three perf changes ship since v2.1.3:

- Stream SFTP uploads/downloads in 255 KiB chunks instead of buffering whole files (#195) — peak RSS drops ~160x and uploads run ~11x faster on 1 GiB transfers; multi-GB transfers no longer OOM the client.
- Pipeline up to 64 concurrent SFTP requests for upload/download (#196), with server-advertised read/write lengths capped against local maxima and the download reorder queue bounded across both in-flight and pending out-of-order responses.
- Raise bssh-server SFTP MAX_READ_SIZE from 64 KiB to the 255 KiB SFTP standard (#197), cutting per-MiB request count on downloads from 16 to 4 when combined with client pipelining.
Summary
`bssh-server` hard-capped every SFTP `READ` reply at 64 KiB (`MAX_READ_SIZE = 65536`) regardless of what the client requested. Both `bssh-russh-sftp` and OpenSSH's `sftp-server` use the SFTP standard `MAX_READ_LENGTH = 261120` (255 KiB) for request sizing, so a client asking for a 256 KiB chunk only got 64 KiB back, forcing it to issue four requests instead of one for the same byte range.
This PR bumps `MAX_READ_SIZE` to `261120` so server replies match the standard chunk size used by the rest of the stack.
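The change itself is a one-line constant bump. A minimal sketch of the server-side clamp, assuming the server bounds each `READ` reply by taking the minimum of the requested length and the cap (`clamp_read_len` and the exact layout are illustrative, not the actual `bssh-server` source):

```rust
// Sketch only: the shape of the server-side READ cap, not project code.
// 261_120 bytes (255 KiB) matches the chunk size OpenSSH's sftp-server
// and bssh-russh-sftp already use when sizing their own requests.
const MAX_READ_SIZE: u32 = 261_120; // previously 65_536 (64 KiB)

/// Clamp a client's requested READ length to the server-side cap.
fn clamp_read_len(requested: u32) -> u32 {
    requested.min(MAX_READ_SIZE)
}

fn main() {
    // A 256 KiB request used to be truncated to 64 KiB per reply;
    // now a single reply carries all but the last ~1 KiB of it.
    assert_eq!(clamp_read_len(262_144), 261_120);
    // Small requests are unaffected.
    assert_eq!(clamp_read_len(4_096), 4_096);
}
```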
Why
Combined with client-side pipelining (#196), the per-MiB request count on downloads drops from 16 to 4. Each request still pays SFTP framing + russh dispatch + tokio task hop overhead, so cutting the request count is the closest single-line lever to OpenSSH's effective per-byte cost.
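The request-count arithmetic can be checked back-of-envelope; `read_requests` is a hypothetical helper, not project code, and the amortized count at 255 KiB chunks works out to ~4.02 per MiB over a large transfer:

```rust
// Back-of-envelope check of the per-MiB request-count claim.
const OLD_CAP: u64 = 65_536;  // 64 KiB reply cap before this PR
const NEW_CAP: u64 = 261_120; // 255 KiB SFTP standard chunk

/// READ requests needed to fetch `bytes` when each reply carries at most `chunk`.
fn read_requests(bytes: u64, chunk: u64) -> u64 {
    (bytes + chunk - 1) / chunk // ceiling division
}

fn main() {
    let one_gib = 1u64 << 30;
    assert_eq!(read_requests(one_gib, OLD_CAP), 16_384); // exactly 16 per MiB
    assert_eq!(read_requests(one_gib, NEW_CAP), 4_113);  // ~4.02 per MiB amortized
}
```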
Memory
Handles are still capped at `MAX_HANDLES = 1000` per session, and each in-flight read still uses a single per-request buffer of this size (max ~255 KiB × in-flight requests). Worst-case is ~16 MiB per session at 64 in-flight, well below the previous unpatched download path which buffered the full file in RSS.
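The worst-case buffer bound works out as follows (constants copied from the paragraph above; a 64-deep pipeline comes from #196, and the arithmetic is a sketch, not project code):

```rust
// Worst-case read-buffer memory a single session can pin at once.
const MAX_READ_SIZE: usize = 261_120; // 255 KiB per-request buffer
const MAX_IN_FLIGHT: usize = 64;      // client pipelining depth (#196)

fn main() {
    let worst_case = MAX_READ_SIZE * MAX_IN_FLIGHT;
    assert_eq!(worst_case, 16_711_680);     // 16,711,680 bytes ≈ 15.94 MiB
    assert!(worst_case < 17 * 1024 * 1024); // bounded, unlike a whole-file buffer
}
```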
Test plan