| Name | Last commit | Last commit date |
| --- | --- | --- |
| ggml | cmake : enable warnings in llama (llama/10474) | 2024-12-08 20:14:35 +02:00 |
| template-instances | CUDA: MMQ code deduplication + iquant support (llama/8495) | 2024-08-08 22:48:46 +03:00 |
| vendors | Add some minimal optimizations for CDNA (llama/10498) | 2024-12-08 20:14:35 +02:00 |
| acc.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| acc.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| arange.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| arange.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| argmax.cu | cuda : optimize argmax (llama/10441) | 2024-12-08 20:14:35 +02:00 |
| argmax.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-05 15:23:51 +03:00 |
| argsort.cu | ggml : reduce hash table reset cost (llama/8698) | 2024-08-08 22:48:46 +03:00 |
| argsort.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| binbcast.cu | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-24 19:45:08 +03:00 |
| binbcast.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-24 19:45:08 +03:00 |
| clamp.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| clamp.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| CMakeLists.txt | ggml : sync resolve (skip) (#0) | 2024-11-20 21:00:08 +02:00 |
| common.cuh | Add some minimal optimizations for CDNA (llama/10498) | 2024-12-08 20:14:35 +02:00 |
| concat.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| concat.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| conv-transpose-1d.cu | feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) | 2024-07-08 14:53:55 +03:00 |
| conv-transpose-1d.cuh | feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) | 2024-07-08 14:53:55 +03:00 |
| convert.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| convert.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| count-equal.cu | ggml: fix zero division in ‘dne’ calculation in CUDA COUNT_EQUAL operator when ‘ne’ is small (#10213) | 2024-11-15 15:21:04 +02:00 |
| count-equal.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-05 15:23:51 +03:00 |
| cpy.cu | cuda: add q8_0->f32 cpy operation (llama/9571) | 2024-09-24 19:45:08 +03:00 |
| cpy.cuh | increase cuda_cpy block size (ggml/996) | 2024-11-01 10:19:05 +02:00 |
| cross-entropy-loss.cu | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-24 19:45:08 +03:00 |
| cross-entropy-loss.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-24 19:45:08 +03:00 |
| dequantize.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| diagmask.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| diagmask.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| dmmv.cu | Vectorize load instructions in dmmv f16 CUDA kernel (llama/9816) | 2024-11-01 10:19:05 +02:00 |
| dmmv.cuh | cuda : fix dmmv cols requirement to 2*GGML_CUDA_DMMV_X (llama/8800) | 2024-08-08 22:48:46 +03:00 |
| fattn-common.cuh | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| fattn-tile-f16.cu | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| fattn-tile-f16.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| fattn-tile-f32.cu | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| fattn-tile-f32.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| fattn-vec-f16.cuh | CUDA: remove unnecessary warp reduce in FA (ggml/1032) | 2024-12-08 20:14:35 +02:00 |
| fattn-vec-f32.cuh | CUDA: remove unnecessary warp reduce in FA (ggml/1032) | 2024-12-08 20:14:35 +02:00 |
| fattn-wmma-f16.cuh | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| fattn.cu | metal : optimize FA kernels (llama/10171) | 2024-11-15 15:21:04 +02:00 |
| fattn.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| getrows.cu | ggml : reduce hash table reset cost (llama/8698) | 2024-08-08 22:48:46 +03:00 |
| getrows.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| ggml-cuda.cu | Add some minimal optimizations for CDNA (llama/10498) | 2024-12-08 20:14:35 +02:00 |
| im2col.cu | CUDA: fix 1D im2col, add tests (ggml/993) | 2024-11-01 10:19:05 +02:00 |
| im2col.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| mma.cuh | CUDA: optimize and refactor MMQ (llama/8416) | 2024-08-08 22:48:46 +03:00 |
| mmq.cu | Add some minimal optimizations for CDNA (llama/10498) | 2024-12-08 20:14:35 +02:00 |
| mmq.cuh | Add some minimal optimizations for CDNA (llama/10498) | 2024-12-08 20:14:35 +02:00 |
| mmv.cu | CUDA: remove DMMV, consolidate F16 mult mat vec (llama/10318) | 2024-11-20 21:00:08 +02:00 |
| mmv.cuh | CUDA: remove DMMV, consolidate F16 mult mat vec (llama/10318) | 2024-11-20 21:00:08 +02:00 |
| mmvq.cu | Add some minimal optimizations for CDNA (llama/10498) | 2024-12-08 20:14:35 +02:00 |
| mmvq.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| norm.cu | ggml : add epsilon as a parameter for group_norm (llama/8818) | 2024-08-08 22:48:46 +03:00 |
| norm.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| opt-step-adamw.cu | ggml: new optimization interface (ggml/988) | 2024-11-20 21:00:08 +02:00 |
| opt-step-adamw.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-24 19:45:08 +03:00 |
| out-prod.cu | ggml : fix builds (llama/0) | 2024-09-24 19:45:08 +03:00 |
| out-prod.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-24 19:45:08 +03:00 |
| pad.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| pad.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| pool2d.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| pool2d.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| quantize.cu | cuda : optimize argmax (llama/10441) | 2024-12-08 20:14:35 +02:00 |
| quantize.cuh | CUDA: optimize and refactor MMQ (llama/8416) | 2024-08-08 22:48:46 +03:00 |
| rope.cu | ggml : move rope type enum to ggml.h (llama/8949) | 2024-08-28 13:22:20 +03:00 |
| rope.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| rwkv-wkv.cu | RWKV v6: RWKV_WKV op CUDA implementation (llama/9454) | 2024-09-24 19:45:08 +03:00 |
| rwkv-wkv.cuh | RWKV v6: RWKV_WKV op CUDA implementation (llama/9454) | 2024-09-24 19:45:08 +03:00 |
| scale.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| scale.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| softmax.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| softmax.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| sum.cu | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| sum.cuh | tests: add gradient tests for all backends (ggml/932) | 2024-09-24 19:45:08 +03:00 |
| sumrows.cu | feat: ref. cross entropy, add CUDA, fix grad test (ggml/929) | 2024-08-28 13:22:20 +03:00 |
| sumrows.cuh | feat: ref. cross entropy, add CUDA, fix grad test (ggml/929) | 2024-08-28 13:22:20 +03:00 |
| tsembd.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| tsembd.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| unary.cu | RWKV v6: RWKV_WKV op CUDA implementation (llama/9454) | 2024-09-24 19:45:08 +03:00 |
| unary.cuh | RWKV v6: RWKV_WKV op CUDA implementation (llama/9454) | 2024-09-24 19:45:08 +03:00 |
| upscale.cu | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| upscale.cuh | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00 |
| vecdotq.cuh | CUDA: MMQ code deduplication + iquant support (llama/8495) | 2024-08-08 22:48:46 +03:00 |
| wkv6.cu | Optimize RWKV6 Operator Naming and Implement Multi-core CPU/ SYCL Acceleration (llama/10133) | 2024-11-15 15:21:04 +02:00 |
| wkv6.cuh | Optimize RWKV6 Operator Naming and Implement Multi-core CPU/ SYCL Acceleration (llama/10133) | 2024-11-15 15:21:04 +02:00 |