
Benchmark: async GPU decode via next_token_async (flare 0.2.15)#316

Merged
sauravpanda merged 1 commit into main from bench-0215-async-decode
Apr 23, 2026

Conversation

@sauravpanda (Owner) commented Apr 23, 2026

Flips the Flare benchmark to GPU-backed decode using the new async API landed in flare-web 0.2.14/0.2.15.

Result on SmolLM2-135M Q8_0 (M-series Mac, Chrome)

|        | Flare     | MLC       | Transformers.js |
|--------|-----------|-----------|-----------------|
| Decode | 173 tok/s | 101 tok/s | 16 tok/s        |
| TTFT   | 137 ms    | 19 ms     | 4349 ms         |
| Load   | 0.2 s     | 0.4 s     | 2.0 s           |

Flare now leads decode throughput by 71% over MLC. Load is 2× faster. TTFT still trails MLC because prefill runs on CPU — closing that gap is the next upstream refactor (async prefill propagation in flare-web).

Changes

  • Bump CDN pin 0.2.13 → 0.2.15 (adds next_token_async + wasm32 prefill CPU fallback)
  • Decode loop: await flareEngine.next_token_async() when available, fall back to sync next_token() otherwise
  • GPU is the default; ?gpu=0 opts into CPU-only for debugging
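The decode-loop change described above can be sketched as a feature-detect-then-fallback helper. This is an illustration, not the benchmark's exact code: `engine` stands in for the flare-web engine instance, and the helper name `nextToken` is ours.

```javascript
// Prefer the GPU-backed async API added in flare-web 0.2.14+,
// falling back to the synchronous CPU path on older pins.
// `engine` is a stand-in for the flare-web engine instance.
async function nextToken(engine) {
  if (typeof engine.next_token_async === "function") {
    // Async GPU decode path (flare-web >= 0.2.14)
    return await engine.next_token_async();
  }
  // Sync CPU fallback for pins < 0.2.14
  return engine.next_token();
}
```

Because the helper is async either way, the calling decode loop can simply `await nextToken(flareEngine)` without branching on the library version.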

Test plan

  • npx jest — 62 passing
  • Manual run: 173 tok/s confirmed in screenshot

Summary by CodeRabbit

  • New Features

    • GPU is now enabled by default in benchmarks; use ?gpu=0 to disable.
    • Updated benchmark to use the latest CDN version.
  • Improvements

    • Enhanced token advancement with async processing when available for better performance.

coderabbitai (bot) commented Apr 23, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0ef7eb6a-551b-41dd-bcc4-80569c2ba41d

📥 Commits

Reviewing files that changed from the base of the PR and between 4f73bfa and d3c6643.

📒 Files selected for processing (1)
  • examples/benchmark/index.html

📝 Walkthrough

Walkthrough

The benchmark's Flare integration is updated to version 0.2.15, with the GPU toggle reversed: GPU is now enabled by default and disabled via ?gpu=0. Token decoding now prefers the asynchronous next_token_async() when available, with a synchronous fallback. Related logging is updated to reflect async decode behavior.

Changes

| Cohort / File(s) | Summary |
|------------------|---------|
| **Flare Benchmark Update** `examples/benchmark/index.html` | CDN version bumped to 0.2.15; GPU initialization flipped to default-enabled (opt-out via ?gpu=0); logging text updated for async decode; token advancement logic now prefers flareEngine.next_token_async() when present, falling back to next_token(). |
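The flipped GPU default amounts to a small query-string check. The `gpu` parameter name comes from the PR; the parsing helper below is a hypothetical sketch, not the benchmark's actual code.

```javascript
// GPU is on unless the page is loaded with ?gpu=0.
// `gpuEnabled` is an illustrative helper, not part of the benchmark code.
function gpuEnabled(search) {
  const params = new URLSearchParams(search);
  return params.get("gpu") !== "0"; // default-enabled, explicit opt-out
}
```

For example, `gpuEnabled(window.location.search)` would return `false` only when the URL contains `?gpu=0`; any other value, or no parameter at all, keeps the GPU path on.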

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested labels

size/M

Poem

🐰 Async dreams in GPU light,
Tokens race with quantum might,
No longer opt-in, now they fly,
Default speed across the sky—
`next_token_async` leads the way, 🚀

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately and concisely describes the main change: implementing async GPU decode using next_token_async API and upgrading to flare 0.2.15.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@sauravpanda sauravpanda merged commit 7da5f50 into main Apr 23, 2026
10 checks passed
@sauravpanda sauravpanda deleted the bench-0215-async-decode branch April 23, 2026 23:54
