llama : add missing call to ggml_backend_load_all() #22752
+6
−0
Merged