-r ./requirements-convert_legacy_llama.txt
--extra-index-url https://download.pytorch.org/whl/cpu

## Embedding Gemma requires PyTorch 2.6.0 or later
torch~=2.6.0; platform_machine != "s390x"

# torch s390x packages can only be found from nightly builds
--extra-index-url https://download.pytorch.org/whl/nightly
torch>=0.0.0.dev0; platform_machine == "s390x"
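
# How pip applies the markers above (a minimal sketch, assuming the standard
# `packaging` library is available): on most architectures the CPU index
# satisfies torch~=2.6.0, while on s390x only the nightly index carries torch
# wheels, so the last requirement line matches instead. The marker logic can
# be checked directly, e.g.:
#
#   from packaging.markers import Marker
#   # True only on s390x hosts; mirrors the condition on the torch lines above
#   print(Marker('platform_machine == "s390x"').evaluate())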