perf: pipeline SFTP requests for upload/download (~2-3x speedup) #196

Merged

inureyes merged 4 commits into lablup:main on May 10, 2026
Conversation
The high-level `AsyncWrite`/`AsyncRead` impls on `File` issue exactly one SFTP `WRITE`/`READ` at a time and `await` its `STATUS`/`DATA` reply before sending the next. Sustained throughput is therefore bounded by `chunk_size / RTT` — at 50 ms RTT with the default 256 KiB chunk, that caps a single transfer at ~5 MiB/s no matter how fast the link is.

Add two pipelined helpers on `File` that keep up to N SFTP requests in flight concurrently, mirroring how OpenSSH's `sftp(1)` client behaves (`-R 64` by default):

* `File::write_all_pipelined<R: AsyncRead>(reader, max_inflight)` — reads chunks from `reader` and dispatches `session.write(...)` futures via `FuturesUnordered`, refilling the pipeline as in-flight writes complete. Memory is bounded by `max_inflight * write_len`.
* `File::read_to_writer_pipelined<W: AsyncWrite>(writer, max_inflight)` — symmetric for downloads. Out-of-order responses are buffered in a `BTreeMap` keyed by offset and flushed to `writer` as soon as the next-expected chunk arrives.

Wire `Client::upload_file`/`download_file`/`upload_dir_recursive`/`download_dir_recursive` to use the new helpers with `MAX_INFLIGHT_REQUESTS = 64`.

Measured on macOS arm64 against `bssh-server` v2.1.3 on loopback with a 1 GiB file:

| op       | build                 | real   | RSS     |
|----------|-----------------------|--------|---------|
| upload   | vanilla v2.1.3        | 39.30s | 3.23 GB |
| upload   | streaming-only        | 3.47s  | 20 MB   |
| upload   | streaming + pipelined | 2.27s  | 49 MB   |
| download | vanilla v2.1.3        | 3.93s  | 2.17 GB |
| download | streaming-only        | 3.41s  | 16 MB   |
| download | streaming + pipelined | 1.34s  | 288 MB  |

Pipelining adds ~+53% on upload and ~+155% on download throughput on top of the streaming patch (which already eliminated the whole-file load). Peak RSS stays well below the unpatched levels: download holds at most ~`max_inflight` chunks pending in the reorder map, and upload caps at `max_inflight * chunk_size + reader buffer`.
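The refill behavior of the write pipeline can be sketched with std-only types. This is a minimal sync simulation under stated assumptions: a `VecDeque` of pending offsets stands in for the real `FuturesUnordered` of in-flight `session.write(...)` futures, and all names here are illustrative, not the crate's actual API.

```rust
use std::collections::VecDeque;

// Simulate the bounded write window: dispatch WRITEs until `max_inflight`
// are pending, then retire one completion and top the window back up.
// Returns (total WRITE requests issued, peak in-flight count).
fn pipelined_write(file_len: u64, chunk: u64, max_inflight: usize) -> (u64, usize) {
    let mut inflight: VecDeque<u64> = VecDeque::new();
    let mut next_offset = 0u64;
    let mut peak = 0usize;
    let mut completed = 0u64;
    while next_offset < file_len || !inflight.is_empty() {
        // Refill: dispatch new WRITEs until the window is full or input ends.
        while inflight.len() < max_inflight && next_offset < file_len {
            inflight.push_back(next_offset);
            next_offset += chunk.min(file_len - next_offset);
        }
        peak = peak.max(inflight.len());
        // "Await" one completion (the real code takes whichever finishes first).
        inflight.pop_front();
        completed += 1;
    }
    (completed, peak)
}

fn main() {
    // 1 MiB file in 255 KiB chunks: 5 WRITEs total, window never exceeds 5,
    // so memory stays bounded by min(file chunks, max_inflight) * chunk size.
    let (writes, peak) = pipelined_write(1 << 20, 261_120, 64);
    assert_eq!(writes, 5);
    assert_eq!(peak, 5);
}
```

The memory bound quoted above falls out directly: the window never holds more than `max_inflight` chunks, whatever the file size.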
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…elining

# Conflicts:
#	src/ssh/tokio_client/file_transfer.rs
inureyes pushed a commit that referenced this pull request on May 10, 2026
The bssh-server hard-capped every SFTP `READ` reply at 64 KiB (`MAX_READ_SIZE = 65536`) regardless of what the client requested. `bssh-russh-sftp` and OpenSSH's `sftp-server` both use the SFTP standard `MAX_READ_LENGTH = 261120` (255 KiB) for request sizing, so a client asking for a 256 KiB chunk only ever got 64 KiB back, forcing it to issue four extra requests for the same byte stream.

Bump `MAX_READ_SIZE` to `261120` so server replies match the standard chunk size used by the rest of the stack. Combined with client-side pipelining (#196), this directly cuts the per-MiB request count on downloads from 16 → 4.

Memory exposure stays bounded: handles are still capped at `MAX_HANDLES = 1000` per session and each in-flight read still uses a single per-request buffer of this size (max ~255 KiB × in-flight requests).
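The arithmetic behind the "16 → 4" claim can be checked directly. In this sketch, `reply_len` is an illustrative stand-in for the server-side clamping, not bssh-server's actual function; the two constants come from the commit message above.

```rust
const OLD_MAX_READ_SIZE: u32 = 65_536; // 64 KiB, the old hard cap
const NEW_MAX_READ_SIZE: u32 = 261_120; // 255 KiB, SFTP-standard MAX_READ_LENGTH

// Server truncates each READ reply to its cap (illustrative stand-in).
fn reply_len(requested: u32, cap: u32) -> u32 {
    requested.min(cap)
}

// Average READ round trips needed per MiB of sustained download.
fn requests_per_mib(cap: f64) -> f64 {
    (1024.0 * 1024.0) / cap
}

fn main() {
    // A 256 KiB client request used to be truncated to 64 KiB:
    assert_eq!(reply_len(262_144, OLD_MAX_READ_SIZE), 65_536);
    assert_eq!(reply_len(262_144, NEW_MAX_READ_SIZE), 261_120);
    // Per-MiB request count drops from 16 to ~4:
    assert_eq!(requests_per_mib(OLD_MAX_READ_SIZE as f64), 16.0);
    assert!(requests_per_mib(NEW_MAX_READ_SIZE as f64) < 4.1);
}
```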
inureyes added a commit that referenced this pull request on May 10, 2026
SFTP transfer performance release. Bumps the Cargo workspace version to 2.1.4 and refreshes README.md, CHANGELOG.md, debian/changelog, and the three man pages (bssh.1, bssh-server.8, bssh-keygen.1) to match.

Three perf changes ship since v2.1.3:

- Stream SFTP uploads/downloads in 255 KiB chunks instead of buffering whole files (#195) — peak RSS drops ~160x and uploads run ~11x faster on 1 GiB transfers; multi-GB transfers no longer OOM the client.
- Pipeline up to 64 concurrent SFTP requests for upload/download (#196), with server-advertised read/write lengths capped against local maxima and the download reorder queue bounded across both in-flight and pending out-of-order responses.
- Raise bssh-server SFTP MAX_READ_SIZE from 64 KiB to the 255 KiB SFTP standard (#197), cutting per-MiB request count on downloads from 16 to 4 when combined with client pipelining.
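The download-side reorder queue mentioned in the pipelining change can be sketched with std-only types. This is a minimal sketch under stated assumptions: the real helper is async and flushes to an `AsyncWrite`, while here a `Vec<u8>` stands in for the writer so the flushing rule is visible; all names are illustrative, not the crate's API.

```rust
use std::collections::BTreeMap;

// Out-of-order READ responses are parked in a BTreeMap keyed by offset and
// drained to the writer as soon as the next-expected offset arrives.
struct Reorder {
    next_offset: u64,
    pending: BTreeMap<u64, Vec<u8>>,
    out: Vec<u8>, // stand-in for the destination writer
}

impl Reorder {
    fn on_chunk(&mut self, offset: u64, data: Vec<u8>) {
        self.pending.insert(offset, data);
        // Flush every contiguous chunk starting at next_offset.
        while let Some(data) = self.pending.remove(&self.next_offset) {
            self.next_offset += data.len() as u64;
            self.out.extend_from_slice(&data);
        }
    }
}

fn main() {
    let mut r = Reorder { next_offset: 0, pending: BTreeMap::new(), out: Vec::new() };
    r.on_chunk(4, b"efgh".to_vec()); // arrives early: parked in the map
    assert!(r.out.is_empty());
    r.on_chunk(0, b"abcd".to_vec()); // fills the gap: both chunks flush in order
    assert_eq!(r.out, b"abcdefgh");
    assert!(r.pending.is_empty());
}
```

Because at most `max_inflight` reads are outstanding, the map can never hold more than that many chunks, which is where the bounded-memory claim comes from.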
Summary
The high-level `AsyncWrite`/`AsyncRead` impls on `File` issue exactly one SFTP `WRITE`/`READ` at a time and `await` its `STATUS`/`DATA` reply before sending the next. Sustained throughput is therefore bounded by `chunk_size / RTT` — at 50 ms RTT with the default 256 KiB chunk, that caps a single transfer at ~5 MiB/s no matter how fast the link is. This is #3 in the SFTP-stack analysis ("the largest unrealized optimization").

This PR adds two pipelined helpers on `File` that keep up to N SFTP requests in flight concurrently, mirroring OpenSSH `sftp(1)`'s default of `-R 64`.

Changes
- `crates/bssh-russh-sftp/Cargo.toml`: add `futures = "0.3"` (`std` + `async-await` features only) for `FuturesUnordered`.
- `crates/bssh-russh-sftp/src/client/fs/file.rs`: two new public methods on `File`:
  - `write_all_pipelined<R: AsyncRead>(reader, max_inflight) -> SftpResult<u64>`: reads chunks from `reader` and dispatches `session.write(handle, offset, chunk)` futures via `FuturesUnordered`, topping up the pipeline as in-flight writes complete. Memory is bounded by `max_inflight * write_len`.
  - `read_to_writer_pipelined<W: AsyncWrite>(writer, max_inflight) -> SftpResult<u64>`: symmetric for downloads. Out-of-order `READ` responses are buffered by offset and flushed to `writer` once the next-expected chunk arrives.
- `src/ssh/tokio_client/file_transfer.rs`: rewire `upload_file`/`download_file`/`upload_dir_recursive`/`download_dir_recursive` to use the new helpers with `MAX_INFLIGHT_REQUESTS = 64`.
- `ARCHITECTURE.md` and `docs/architecture/ssh-client.md`: document bounded pipelined SFTP streaming.

Review follow-up fixes
A post-implementation review found and fixed several edge cases before merge:
- Capped server-advertised `read_len`/`write_len` against local maxima, so a malicious or broken SFTP server cannot force huge client allocations through inflated limits.
- Bounded the download reorder queue even when one `READ` response is delayed and later chunks arrive first.
- Used `fstat` size information when available to stop scheduling unnecessary reads past EOF and to treat unexpected short reads before the known file size as an error.
- Merged `main` after resolving the conflict with the streaming transfer work merged in perf: stream SFTP uploads/downloads instead of buffering whole file #195.

Security review
- Client memory exposure remains bounded by `max_inflight`.

Performance review
- Throughput now scales with up to `max_inflight` concurrent `READ`/`WRITE` requests.
- The `fstat`-guided download path avoids speculative reads beyond the known remote file size when metadata is available.

Measured impact (macOS arm64 → bssh-server v2.1.3, loopback, 1 GiB)
Pipelining adds +53% upload throughput and +155% download throughput on top of the streaming patch. End-to-end vs. vanilla v2.1.3: upload 17× faster (27 → 451 MiB/s), download 2.9× faster (261 → 764 MiB/s). Peak RSS stays well below the unpatched levels even with 64 in-flight chunks.
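The quoted rates are consistent with the wall-clock times in the table in the PR description; a quick cross-check, assuming the 1 GiB (1024 MiB) test file:

```rust
// Convert the measured wall times for a 1 GiB transfer into MiB/s and
// reproduce the speedup figures quoted in the summary.
fn mib_per_s(wall_s: f64) -> f64 {
    1024.0 / wall_s // 1 GiB = 1024 MiB
}

fn main() {
    assert!((mib_per_s(39.30) - 27.0).abs() < 1.0); // vanilla upload ≈ 27 MiB/s
    assert!((mib_per_s(2.27) - 451.0).abs() < 1.0); // pipelined upload ≈ 451 MiB/s
    assert!((mib_per_s(3.93) - 261.0).abs() < 1.0); // vanilla download ≈ 261 MiB/s
    assert!((mib_per_s(1.34) - 764.0).abs() < 1.0); // pipelined download ≈ 764 MiB/s
    // End-to-end speedups: ~17x upload, ~2.9x download.
    assert!(mib_per_s(2.27) / mib_per_s(39.30) > 17.0);
    assert!((mib_per_s(1.34) / mib_per_s(3.93) - 2.9).abs() < 0.05);
}
```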
Notes
- The streaming transfer work (#195) landed on `main` first, and this branch has been merged with `origin/main` with the final file-transfer behavior preserved.
- `max_inflight = 64` matches OpenSSH's default. A future enhancement could expose this on the public Client API for users who want to tune for very high-RTT links or memory-constrained clients.

Test plan
- `cargo fmt --check`
- `cargo clippy -- -D warnings`
- `cargo test -p bssh-russh-sftp`
- `cargo test --lib --verbose` (1222 passed, 9 ignored)
- `cargo test --tests --verbose -- --skip integration_test`
- Manual transfer test at `9ac61d29f92576dc31a573d24c0e7048e0213e8d` against `bssh-server` v2.1.3 verified file integrity (size + md5 match source)