whisper.cpp/ggml/src/ggml-cuda (last updated 2024-11-01 10:19:05 +02:00)
| File | Last commit | Date |
|------|-------------|------|
| template-instances | CUDA: MMQ code deduplication + iquant support (llama/8495) | 2024-08-08 22:48:46 +03:00 |
| vendors | musa: enable building fat binaries, enable unified memory, and disable Flash Attention on QY1 (MTT S80) (llama/9526) | 2024-09-24 19:45:08 +03:00 |
| acc.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| acc.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| arange.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| arange.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| argmax.cu | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-05 15:23:51 +03:00 |
| argmax.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-05 15:23:51 +03:00 |
| argsort.cu | ggml : reduce hash table reset cost (llama/8698) | 2024-08-08 22:48:46 +03:00 |
| argsort.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| binbcast.cu | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-24 19:45:08 +03:00 |
| binbcast.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-24 19:45:08 +03:00 |
| clamp.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| clamp.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| common.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-05 15:23:51 +03:00 |
| concat.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| concat.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| conv-transpose-1d.cu | feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) | 2024-07-08 14:53:55 +03:00 |
| conv-transpose-1d.cuh | feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) | 2024-07-08 14:53:55 +03:00 |
| convert.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| convert.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| count-equal.cu | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-05 15:23:51 +03:00 |
| count-equal.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-05 15:23:51 +03:00 |
| cpy.cu | cuda: add q8_0->f32 cpy operation (llama/9571) | 2024-09-24 19:45:08 +03:00 |
| cpy.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| cross-entropy-loss.cu | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-24 19:45:08 +03:00 |
| cross-entropy-loss.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-24 19:45:08 +03:00 |
| dequantize.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| diagmask.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| diagmask.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| dmmv.cu | Vectorize load instructions in dmmv f16 CUDA kernel (llama/9816) | 2024-11-01 10:19:05 +02:00 |
| dmmv.cuh | cuda : fix dmmv cols requirement to 2*GGML_CUDA_DMMV_X (llama/8800) | 2024-08-08 22:48:46 +03:00 |
| fattn-common.cuh | CPU/CUDA: Gemma 2 FlashAttention support (llama/8542) | 2024-08-28 13:22:20 +03:00 |
| fattn-tile-f16.cu | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-05 15:23:51 +03:00 |
| fattn-tile-f16.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| fattn-tile-f32.cu | musa: enable building fat binaries, enable unified memory, and disable Flash Attention on QY1 (MTT S80) (llama/9526) | 2024-09-24 19:45:08 +03:00 |
| fattn-tile-f32.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| fattn-vec-f16.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-05 15:23:51 +03:00 |
| fattn-vec-f32.cuh | CPU/CUDA: Gemma 2 FlashAttention support (llama/8542) | 2024-08-28 13:22:20 +03:00 |
| fattn-wmma-f16.cuh | CPU/CUDA: Gemma 2 FlashAttention support (llama/8542) | 2024-08-28 13:22:20 +03:00 |
| fattn.cu | CUDA: enable Gemma FA for HIP/Pascal (llama/9581) | 2024-09-24 19:45:08 +03:00 |
| fattn.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| getrows.cu | ggml : reduce hash table reset cost (llama/8698) | 2024-08-08 22:48:46 +03:00 |
| getrows.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| im2col.cu | CUDA: fix 1D im2col, add tests (ggml/993) | 2024-11-01 10:19:05 +02:00 |
| im2col.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| mma.cuh | CUDA: optimize and refactor MMQ (llama/8416) | 2024-08-08 22:48:46 +03:00 |
| mmq.cu | CUDA: fix --split-mode row race condition (llama/9413) | 2024-09-24 19:45:08 +03:00 |
| mmq.cuh | CUDA: fix --split-mode row race condition (llama/9413) | 2024-09-24 19:45:08 +03:00 |
| mmvq.cu | ggml : reduce hash table reset cost (llama/8698) | 2024-08-08 22:48:46 +03:00 |
| mmvq.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| norm.cu | ggml : add epsilon as a parameter for group_norm (llama/8818) | 2024-08-08 22:48:46 +03:00 |
| norm.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| opt-step-adamw.cu | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-24 19:45:08 +03:00 |
| opt-step-adamw.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-24 19:45:08 +03:00 |
| out-prod.cu | ggml : fix builds (llama/0) | 2024-09-24 19:45:08 +03:00 |
| out-prod.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-24 19:45:08 +03:00 |
| pad.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| pad.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| pool2d.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| pool2d.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| quantize.cu | ggml : reduce hash table reset cost (llama/8698) | 2024-08-08 22:48:46 +03:00 |
| quantize.cuh | CUDA: optimize and refactor MMQ (llama/8416) | 2024-08-08 22:48:46 +03:00 |
| rope.cu | ggml : move rope type enum to ggml.h (llama/8949) | 2024-08-28 13:22:20 +03:00 |
| rope.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| rwkv-wkv.cu | RWKV v6: RWKV_WKV op CUDA implementation (llama/9454) | 2024-09-24 19:45:08 +03:00 |
| rwkv-wkv.cuh | RWKV v6: RWKV_WKV op CUDA implementation (llama/9454) | 2024-09-24 19:45:08 +03:00 |
| scale.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| scale.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| softmax.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| softmax.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| sum.cu | CUDA: fix sum.cu compilation for CUDA < 11.7 (llama/9562) | 2024-09-24 19:45:08 +03:00 |
| sum.cuh | tests: add gradient tests for all backends (ggml/932) | 2024-09-24 19:45:08 +03:00 |
| sumrows.cu | feat: ref. cross entropy, add CUDA, fix grad test (ggml/929) | 2024-08-28 13:22:20 +03:00 |
| sumrows.cuh | feat: ref. cross entropy, add CUDA, fix grad test (ggml/929) | 2024-08-28 13:22:20 +03:00 |
| tsembd.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| tsembd.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| unary.cu | RWKV v6: RWKV_WKV op CUDA implementation (llama/9454) | 2024-09-24 19:45:08 +03:00 |
| unary.cuh | RWKV v6: RWKV_WKV op CUDA implementation (llama/9454) | 2024-09-24 19:45:08 +03:00 |
| upscale.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| upscale.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| vecdotq.cuh | CUDA: MMQ code deduplication + iquant support (llama/8495) | 2024-08-08 22:48:46 +03:00 |