whisper.cpp/ggml/src
Latest commit: 4d73962da4 by Georgi Gerganov, 2024-12-08 20:14:35 +02:00
metal : small-batch mat-mul kernels (llama/10581)

Commit message:
* metal : small-batch mat-mul kernels
* metal : add rest of types
* metal : final adjustments
* metal : add comments
(each step is followed by a ggml-ci trigger line)
Name | Last commit | Last updated
ggml-amx | ggml : adapt AMX to tensor->grad removal (llama/0) | 2024-11-20 21:00:08 +02:00
ggml-blas | ggml : add support for dynamic loading of backends (llama/10469) | 2024-12-08 20:14:35 +02:00
ggml-cann | CANN: RoPE operator optimization (llama/10563) | 2024-12-08 20:14:35 +02:00
ggml-cpu | ggml-cpu: replace AArch64 NEON assembly with intrinsics in ggml_gemv_q4_0_4x4_q8_0() (llama/10567) | 2024-12-08 20:14:35 +02:00
ggml-cuda | Add some minimal optimizations for CDNA (llama/10498) | 2024-12-08 20:14:35 +02:00
ggml-hip | ggml : add support for dynamic loading of backends (llama/10469) | 2024-12-08 20:14:35 +02:00
ggml-kompute | kompute : improve backend to pass test_backend_ops (llama/10542) | 2024-12-08 20:14:35 +02:00
ggml-metal | metal : small-batch mat-mul kernels (llama/10581) | 2024-12-08 20:14:35 +02:00
ggml-musa | mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (llama/10516) | 2024-12-08 20:14:35 +02:00
ggml-rpc | ggml : add support for dynamic loading of backends (llama/10469) | 2024-12-08 20:14:35 +02:00
ggml-sycl | SYCL: Fix and switch to GGML_LOG system instead of fprintf (llama/10579) | 2024-12-08 20:14:35 +02:00
ggml-vulkan | vulkan: Dynamic subgroup size support for Q6_K mat_vec (llama/10536) | 2024-12-08 20:14:35 +02:00
CMakeLists.txt | cmake : enable warnings in llama (llama/10474) | 2024-12-08 20:14:35 +02:00
ggml-aarch64.c | ggml : optimize Q4_0 into Q4_0_X_Y repack (llama/10324) | 2024-11-20 21:00:08 +02:00
ggml-aarch64.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00
ggml-alloc.c | ggml: new optimization interface (ggml/988) | 2024-11-20 21:00:08 +02:00
ggml-backend-impl.h | ggml : add support for dynamic loading of backends (llama/10469) | 2024-12-08 20:14:35 +02:00
ggml-backend-reg.cpp | llama : accept a list of devices to use to offload a model (llama/10497) | 2024-12-08 20:14:35 +02:00
ggml-backend.cpp | ggml-opt: fix data corruption (ggml/1022) | 2024-12-08 20:14:35 +02:00
ggml-common.h | ggml-cpu: support IQ4_NL_4_4 by runtime repack (llama/10541) | 2024-12-08 20:14:35 +02:00
ggml-impl.h | Do not include arm_neon.h when compiling CUDA code (ggml/1028) | 2024-12-08 20:14:35 +02:00
ggml-opt.cpp | ggml-opt: fix data corruption (ggml/1022) | 2024-12-08 20:14:35 +02:00
ggml-quants.c | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00
ggml-quants.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00
ggml-threading.cpp | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00
ggml-threading.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00
ggml.c | ggml-cpu: support IQ4_NL_4_4 by runtime repack (llama/10541) | 2024-12-08 20:14:35 +02:00
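Several entries above reference the dynamic backend loading change (llama/10469), which lets the backends in this directory (ggml-cuda, ggml-metal, ggml-vulkan, ...) be built as shared modules and registered at runtime via ggml-backend-reg.cpp. As a minimal, hedged sketch of how a consumer might enumerate whatever backends are available, assuming the ggml_backend_load_all() and ggml_backend_dev_* declarations in ggml-backend.h at this revision:

```c
// Hedged sketch, not part of this tree: list devices exposed by
// dynamically loaded ggml backends (assumes ggml-backend.h from this revision).
#include <stdio.h>
#include "ggml-backend.h"

int main(void) {
    // Scan for backend shared libraries (e.g. ggml-cuda, ggml-metal)
    // and register every one that loads successfully.
    ggml_backend_load_all();

    // List the devices contributed by the registered backends.
    for (size_t i = 0; i < ggml_backend_dev_count(); ++i) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        printf("device %zu: %s (%s)\n", i,
               ggml_backend_dev_name(dev),
               ggml_backend_dev_description(dev));
    }
    return 0;
}
```

Statically linked backends are registered without the explicit load call; ggml_backend_load_all() only adds whatever loadable modules are found at runtime.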