Commit graph

35 commits

Author SHA1 Message Date
R. Ryantm c70ff30bde llama-cpp: 2454 -> 2481 2024-03-21 17:34:35 +00:00
Someone e7797267a2
Merge pull request #281576 from yannham/refactor/cuda-setup-hooks-refactor
cudaPackages: generalize and refactor setup hooks
2024-03-19 20:06:18 +00:00
R. Ryantm 1c2a0b6df9 llama-cpp: 2424 -> 2454 2024-03-18 12:50:17 +00:00
Yann Hamdaoui 63746cac08
cudaPackages: generalize and refactor setup hook
This PR refactors the CUDA setup hooks, in particular
autoAddOpenGLRunpath and autoAddCudaCompatRunpathHook, which shared a
lot of code (in fact, I introduced the latter by copy-pasting most of
the bash script of the former). This is unsatisfactory for maintenance,
as a recent patch showed, because changes need to be duplicated across
both hooks.

This commit abstracts the common part into a single shell script that
applies a generic patch action to every ELF file in the output. For
autoAddOpenGLRunpath the action is just addOpenGLRunpath (now
addDriverRunpath); for autoAddCudaCompatRunpathHook it is a few-line
function.

In doing so, we also take the opportunity to use the newer
addDriverRunpath instead of the previous addOpenGLRunpath, and rename
the CUDA hook to reflect that as well.

Co-Authored-By: Connor Baker <connor.baker@tweag.io>
2024-03-15 15:54:21 +01:00
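The pattern that commit describes — one generic walker over a package output's ELF files, with a pluggable patch action — can be sketched in shell. This is a minimal illustration, not the actual nixpkgs hook; the function names `auto_fix_elf_files` and `add_driver_runpath` are hypothetical stand-ins.

```shell
#!/usr/bin/env sh
# Sketch of the "apply an action to every ELF file" pattern.
set -eu

# Succeed if a file starts with the ELF magic bytes 0x7f 'E' 'L' 'F'.
is_elf() {
  [ "$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')" = "7f454c46" ]
}

# Walk a directory tree, running the given action on each ELF file.
auto_fix_elf_files() {
  action="$1"; root="$2"
  find "$root" -type f | while read -r f; do
    if is_elf "$f"; then
      "$action" "$f"
    fi
  done
}

# Demo action: in the real hooks this would patch the runpath.
add_driver_runpath() { echo "patched: $(basename "$1")"; }

# Demo tree: one fake ELF file and one plain text file.
tmp=$(mktemp -d)
printf '\177ELF-rest-of-header' > "$tmp/libfoo.so"
printf 'just text' > "$tmp/readme.txt"

auto_fix_elf_files add_driver_runpath "$tmp"   # prints: patched: libfoo.so
rm -rf "$tmp"
```

With this shape, each concrete hook only has to supply its own action function; the ELF detection and directory walk live in one place.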
R. Ryantm 4744ccc2db llama-cpp: 2382 -> 2424 2024-03-14 20:36:15 +00:00
R. Ryantm bab5a87ffc llama-cpp: 2346 -> 2382 2024-03-10 12:09:01 +00:00
R. Ryantm 821ea0b581 llama-cpp: 2294 -> 2346 2024-03-05 19:34:21 +00:00
happysalada 03173009d5 llama-cpp: 2249 -> 2294; bring upstream flake 2024-02-28 19:44:34 -05:00
R. Ryantm 33294caa77 llama-cpp: 2212 -> 2249 2024-02-23 15:04:25 +00:00
R. Ryantm 3e909c9e10 llama-cpp: 2167 -> 2212 2024-02-20 05:27:18 +00:00
R. Ryantm efa55a0426 llama-cpp: 2135 -> 2167 2024-02-16 21:01:12 +00:00
R. Ryantm aee2928614 llama-cpp: 2105 -> 2135 2024-02-13 10:34:27 +00:00
R. Ryantm e5a9f1c720 llama-cpp: 2074 -> 2105 2024-02-09 03:07:40 +00:00
R. Ryantm 27398a3fe2 llama-cpp: 2050 -> 2074 2024-02-06 00:29:01 +00:00
R. Ryantm 3a67f01b7f llama-cpp: 1892 -> 2050 2024-02-02 19:34:54 +00:00
happysalada aaa2d4b738 llama-cpp: 1848 -> 1892; add static build mode 2024-01-19 08:42:56 -05:00
Alex Martens 49309c0d27 llama-cpp: 1742 -> 1848 2024-01-12 19:01:51 -08:00
Weijia Wang 1f54e5e2a6
Merge pull request #278120 from r-ryantm/auto-update/llama-cpp
llama-cpp: 1710 -> 1742
2024-01-13 02:57:28 +01:00
R. Ryantm b63bdd46cd llama-cpp: 1710 -> 1742 2024-01-01 18:52:11 +00:00
happysalada 47fc482e58 llama-cpp: fix cuda support; integrate upstream 2023-12-31 16:57:28 +01:00
Nick Cao e51a04fa37
Merge pull request #277451 from accelbread/llama-cpp-update
llama-cpp: 1671 -> 1710
2023-12-29 10:42:00 -05:00
Archit Gupta ab64ae8fdd llama-cpp: 1671 -> 1710 2023-12-28 18:19:55 -08:00
Archit Gupta 6cf4c910f9 llama-cpp: change default value of openblasSupport
The previous default caused build failures when `config.rocmSupport` was
enabled, since rocmSupport conflicts with openblasSupport.
2023-12-28 18:03:30 -08:00
Nick Cao 51b96e8410
Merge pull request #275807 from KaiHa/pr-llama-cpp-assert
llama-cpp: assert that only one of the *Support arguments is true
2023-12-22 08:36:34 -05:00
Kai Harries ded8563fb3 llama-cpp: assert that only one of the *Support arguments is true
When experimenting with llama-cpp I ran into a mysterious build error
that I eventually tracked down to having set both openclSupport and
openblasSupport to true.  To prevent others from making this mistake,
add an assert that complains if more than one of the arguments
openclSupport, openblasSupport, and rocmSupport is set to true.
2023-12-22 09:00:32 +01:00
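The mutual-exclusion check described in that commit could look roughly like this in a Nix derivation. The argument names match the commit; the surrounding expression and the use of `lib.assertMsg`/`lib.count` are an illustrative sketch, not the exact nixpkgs code.

```nix
# Illustrative sketch, not the exact nixpkgs expression.
{ lib, stdenv
, openclSupport ? false
, openblasSupport ? false
, rocmSupport ? false
, ... }:

assert lib.assertMsg
  (lib.count lib.id [ openclSupport openblasSupport rocmSupport ] <= 1)
  "llama-cpp: at most one of openclSupport, openblasSupport and rocmSupport may be enabled";

stdenv.mkDerivation {
  # ...
}
```

Evaluating with two flags enabled then fails at evaluation time with the message above, rather than producing a confusing build error later.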
Сухарик 6141d1bdd5 llama-cpp: 1573 -> 1671 2023-12-21 21:37:24 +03:00
R. Ryantm 72268042cf llama-cpp: 1538 -> 1573 2023-11-28 07:26:02 +00:00
annalee 53aeb5c67b llama-cpp: 1483 -> 1538 2023-11-19 10:25:46 +01:00
annalee 05967f9a2c llama-cpp: fix build due to openblas update
Add a patch so the build checks for the openblas64 module.

https://github.com/NixOS/nixpkgs/pull/255443 updated openblas from
0.3.21 to 0.3.24; openblas 0.3.22 moved openblas.pc to openblas64.pc.

https://github.com/OpenMathLib/OpenBLAS/issues/3790
2023-11-19 10:25:46 +01:00
Martin Weinelt ea1cd4ef8e
treewide: use config.rocmSupport
Makes use of the nixpkgs-wide config flag to enable rocmSupport in
packages that support it.
2023-11-14 01:51:57 +01:00
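A package opting into that nixpkgs-wide flag typically takes it as the default value of its own argument, so per-package overrides still work. A sketch of the shape (the exact wording varies per package):

```nix
# Illustrative: the derivation's rocmSupport argument defaults to the
# nixpkgs-wide config flag instead of a hardcoded false.
{ config
, rocmSupport ? config.rocmSupport
, ... }:
```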
OTABI Tomoya 5a36456ad6
Merge pull request #265462 from r-ryantm/auto-update/llama-cpp
llama-cpp: 1469 -> 1483
2023-11-10 18:22:02 +09:00
Matthieu Coudron ba774d337e
Merge pull request #263320 from jfvillablanca/llm-ls
llm-ls: init at 0.4.0
2023-11-06 11:09:01 +01:00
jfvillablanca ab1da43942 llm-ls: init at 0.4.0 2023-11-06 08:38:31 +08:00
R. Ryantm 575af23354 llama-cpp: 1469 -> 1483 2023-11-04 13:58:41 +00:00
Enno Richter ff77d3c409 llama-cpp: init at 1469 2023-11-02 09:31:40 +01:00