ggml-cpu: Optimized RISC-V CPU q1_0 dot #22768
Merged
xctan merged 1 commit into ggml-org:master on May 7, 2026
Conversation
xctan approved these changes May 7, 2026
taimur-10x approved these changes May 7, 2026
CISC approved these changes May 7, 2026
CISC (Member) left a comment:
So, is this the favoured implementation?
Collaborator:

K1 benchmarks show a speed regression with LMUL=2 vs LMUL=1. Compared to #22500, I find this implementation to be more refined, so I'm leaning towards this one.
Contributor (Author):

Updated my comment. Sorry for the confusion.
Hello, I have prepared an optimized implementation of the RISC-V V extension q1_0 dot product (mainly for Bonsai LLM models). This is a continuation of #21636 for the RISC-V platform and a squash of #31 with the related discussion.

The implementation uses two kernels with fixed vl, one for VLEN 128 and one for VLEN >= 256, dispatched at runtime via vlenb; VLEN 64 is omitted because the V extension requires VLEN >= 128, and a vector-length-agnostic (VLA) kernel is not useful for a case this simple (see the dispatch sketch below). The kernel negates qy, performs a masked merge with qx as the mask, reduces with vredsum, and accumulates with the scales in scalar code.
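To make the dispatch concrete, here is a minimal sketch of picking a fixed-vl kernel from the runtime register width. The kernel names are illustrative, not the PR's actual symbols; vsetvlmax for e8/m1 returns VLEN/8, i.e. the vlenb value, so it doubles as a vlenb probe without inline assembly.

```c
// Hypothetical dispatch sketch (not the PR's actual code).
#include <riscv_vector.h>
#include <stddef.h>

float dot_q1_0_vl128(int n, const void * x, const void * y);  // VLEN == 128
float dot_q1_0_vl256(int n, const void * x, const void * y);  // VLEN >= 256

float dot_q1_0(int n, const void * x, const void * y) {
    // vsetvlmax for e8/m1 equals VLEN/8, the value of the vlenb CSR.
    const size_t vlenb = __riscv_vsetvlmax_e8m1();
    if (vlenb >= 32) {                   // 256-bit registers or wider
        return dot_q1_0_vl256(n, x, y);
    }
    return dot_q1_0_vl128(n, x, y);      // V mandates VLEN >= 128
}
```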
Benchmarks for Bonsai-1.7B

Throughput columns: pp 64 t/s and tg 16 t/s; rows: VL128 (forced VLEN 128 kernel with LMUL=2) and VL256 (LMUL=1 for VLEN >= 256).

Perplexity

Measured for Bonsai 1.7B with both vl256 and vl128, over 5x512 chunks of wiki.test.raw; the baseline is a CPU run of the unpacked fp16 model.

Benchmarks were performed with:
llama-bench -m Bonsai-1.7B.gguf -p 64 -n 16 -t 8 -r 3 -fa 1 -mmp 0

Credits
The vlm instruction is taken from this PR, which I hadn't noticed in the documentation at first (a sketch of the vlm-based kernel is below).

Requesting review from people who usually review this kind of change: @am17an, @CISC, @taimur-10x.
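For illustration, here is a minimal sketch of the kernel shape described above, assuming a q1_0-like layout: the 1-bit qx block is loaded straight into a mask register with vlm, qy is negated, the two are merged under the mask, widen-reduced, and scaled in scalar code. The block size, struct and field names, and sign polarity are assumptions for this sketch, not the PR's actual definitions.

```c
// Hypothetical kernel sketch (assumed q1_0-like layout, not the PR's code).
#include <riscv_vector.h>
#include <stdint.h>

#define QK1_0 128                      // assumed weights per block

typedef struct {
    float   d;                         // block scale (real code would store fp16)
    uint8_t qs[QK1_0 / 8];             // 1 bit per weight: sign select
} block_q1_0;

typedef struct {
    float  d;                          // activation scale
    int8_t qs[QK1_0];
} block_q8;                            // illustrative int8 activation block

// Fixed vl = 16 lanes per step, i.e. the VLEN 128 / LMUL=1 shape.
float dot_q1_0_vl128(int n, const block_q1_0 * x, const block_q8 * y) {
    float sumf = 0.0f;
    for (int ib = 0; ib < n / QK1_0; ++ib) {
        int32_t sumi = 0;
        for (int i = 0; i < QK1_0; i += 16) {
            const size_t vl = 16;
            // vlm: load 16 mask bits (2 bytes of qx) into a mask register.
            vbool8_t   m   = __riscv_vlm_v_b8(&x[ib].qs[i / 8], vl);
            vint8m1_t  qy  = __riscv_vle8_v_i8m1(&y[ib].qs[i], vl);
            vint8m1_t  nqy = __riscv_vneg_v_i8m1(qy, vl);
            // bit set -> +qy, bit clear -> -qy (polarity is an assumption)
            vint8m1_t  sel = __riscv_vmerge_vvm_i8m1(nqy, qy, m, vl);
            // widening reduction: 16 int8 lanes sum into one int16 scalar
            vint16m1_t z   = __riscv_vmv_v_x_i16m1(0, 1);
            vint16m1_t r   = __riscv_vwredsum_vs_i8m1_i16m1(sel, z, vl);
            sumi += __riscv_vmv_x_s_i16m1_i16(r);
        }
        sumf += x[ib].d * y[ib].d * (float) sumi;  // scalar accum with scales
    }
    return sumf;
}
```

On VLEN >= 256 hardware the same shape would run with vl = 32 and LMUL=1, matching the two-kernel split described in the PR description.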