ggml update to 0.11.0, llama-cpp update to 9030 #51551
Conversation
```diff
  add_subdirectory(common)
- add_subdirectory(vendor/cpp-httplib)
+ find_package(httplib CONFIG REQUIRED)
```
```diff
  add_subdirectory(common)
- add_subdirectory(vendor/cpp-httplib)
+ find_package(httplib CONFIG REQUIRED)
+ add_library(cpp-httplib ALIAS httplib::httplib)
```
This would avoid changing all the uses of cpp-httplib (and still export httplib::httplib to the CMake config).
... And we probably need a find_dependency(httplib CONFIG) in the CMake config file (unless it is only used in executables - check the export).
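The alias suggestion above can be sketched in the port's top-level CMakeLists.txt as follows (file layout assumed; note that aliasing an imported target this way requires a CMake version that permits it, 3.18+ for non-global imported targets):

```cmake
# Sketch: consume the external httplib package instead of the vendored copy,
# then re-create the old in-tree target name as an ALIAS so that existing
# consumers, e.g.
#   target_link_libraries(llama-common PRIVATE cpp-httplib)
# continue to work without edits.
find_package(httplib CONFIG REQUIRED)
add_library(cpp-httplib ALIAS httplib::httplib)

add_subdirectory(common)
```

The ALIAS keeps the patch to upstream's build files minimal: only the one line that pulled in vendor/cpp-httplib changes, and every later reference to the cpp-httplib target resolves to httplib::httplib.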
cpp-httplib is only linked privately by llama-common:
```cmake
target_link_libraries(llama-common PRIVATE cpp-httplib)
```
and llama-common is not exported as a CMake target in the installed package. The generated llama-config.cmake only creates/imports the llama target, whose interface links to ggml, so httplib is not part of the public CMake interface for consumers.
https://github.com/ggml-org/llama.cpp/blob/master/cmake/llama-config.cmake.in#L20
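For reference, if httplib did appear in the exported link interface, the installed config file would need to resolve that dependency before importing the targets; a minimal sketch of what would have to be added to llama-config.cmake.in in that case:

```cmake
# Sketch only - NOT needed here, since llama-common links cpp-httplib
# PRIVATE and is not itself an exported target. A find_dependency call
# like this is required only when a dependency appears in the public
# (INTERFACE) link requirements of an installed/exported target.
include(CMakeFindDependencyMacro)
find_dependency(httplib CONFIG)
```

Because the check above shows only the llama target (linking ggml) is exported, no such change is required for this PR.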
Merged with master to pick up the CUDA version change from #51210.
Drafting due to legitimate build failures. |
fix-vulkan-spv-shadowing.diff
fix-vk-32bit.diff
Note for reviewers: these patches have been turned into upstream PRs:
Running `./vcpkg x-add-version --all` and committing the result.