whisper.cpp/ggml/src (last commit 2024-11-20 21:00:08 +02:00)
Name                  Last commit                                                                     Date
ggml-amx/             ggml : adapt AMX to tensor->grad removal (llama/0)                              2024-11-20 21:00:08 +02:00
ggml-blas/            cuda : fix CUDA_FLAGS not being applied (llama/10403)                           2024-11-20 21:00:08 +02:00
ggml-cann/            ggml : build backends as libraries (llama/10256)                                2024-11-20 21:00:08 +02:00
ggml-cpu/             ggml : fix undefined reference to 'getcpu' (llama/10354)                        2024-11-20 21:00:08 +02:00
ggml-cuda/            ggml : sync resolve (skip) (#0)                                                 2024-11-20 21:00:08 +02:00
ggml-hip/             CUDA: remove DMMV, consolidate F16 mult mat vec (llama/10318)                   2024-11-20 21:00:08 +02:00
ggml-kompute/         ggml : build backends as libraries (llama/10256)                                2024-11-20 21:00:08 +02:00
ggml-metal/           ggml : sync resolve (skip) (#0)                                                 2024-11-20 21:00:08 +02:00
ggml-musa/            ggml : sync resolve (skip) (#0)                                                 2024-11-20 21:00:08 +02:00
ggml-rpc/             ggml : build backends as libraries (llama/10256)                                2024-11-20 21:00:08 +02:00
ggml-sycl/            sycl : Add option to set the SYCL architecture for all targets (llama/10266)    2024-11-20 21:00:08 +02:00
ggml-vulkan/          vulkan: Optimize soft_max (llama/10301)                                         2024-11-20 21:00:08 +02:00
CMakeLists.txt        Add required ggml-base and backend libs to cmake pkg (llama/10407)              2024-11-20 21:00:08 +02:00
ggml-aarch64.c        ggml : optimize Q4_0 into Q4_0_X_Y repack (llama/10324)                         2024-11-20 21:00:08 +02:00
ggml-aarch64.h        ggml : build backends as libraries (llama/10256)                                2024-11-20 21:00:08 +02:00
ggml-alloc.c          ggml: new optimization interface (ggml/988)                                     2024-11-20 21:00:08 +02:00
ggml-backend-impl.h   llama : refactor model loader with backend registry (llama/10026)               2024-11-15 15:21:04 +02:00
ggml-backend-reg.cpp  ggml : build backends as libraries (llama/10256)                                2024-11-20 21:00:08 +02:00
ggml-backend.cpp      ggml : sync resolve (skip) (#0)                                                 2024-11-20 21:00:08 +02:00
ggml-common.h         ggml-quants : ternary packing for TriLMs and BitNet b1.58 (llama/8151)          2024-09-24 19:45:08 +03:00
ggml-impl.h           ggml: new optimization interface (ggml/988)                                     2024-11-20 21:00:08 +02:00
ggml-opt.cpp          ggml : inttypes.h -> cinttypes (llama/0)                                        2024-11-20 21:00:08 +02:00
ggml-quants.c         ggml : build backends as libraries (llama/10256)                                2024-11-20 21:00:08 +02:00
ggml-quants.h         ggml : build backends as libraries (llama/10256)                                2024-11-20 21:00:08 +02:00
ggml-threading.cpp    ggml : build backends as libraries (llama/10256)                                2024-11-20 21:00:08 +02:00
ggml-threading.h      ggml : build backends as libraries (llama/10256)                                2024-11-20 21:00:08 +02:00
ggml.c                ggml : fix compile warnings (llama/0)                                           2024-11-20 21:00:08 +02:00