whisper.cpp/ggml/src
Ma Mingfei e1936eb2a5 add amx kernel for gemm (llama/8998)
add intel amx isa detection

add vnni kernel for gemv cases

add vnni and amx kernel support for block_q8_0

code cleanup

fix packing B issue

enable openmp

fine tune amx kernel

switch to ATen parallel pattern

add error message for nested parallelism

code cleanup

add f16 support in ggml-amx

add amx kernels for QK_K quant formats: Q4_K, Q5_K, Q6_K and IQ4_XS

update CMakeLists.txt

update README

fix some compilation warnings

fix compiler warning when amx is not enabled

minor change

ggml-ci

move ggml_amx_init from ggml.c to ggml-amx/mmq.cpp

ggml-ci

update CMakeLists with -mamx-tile, -mamx-int8 and -mamx-bf16

ggml-ci

add amx as a ggml-backend

update header file: the old include path for immintrin.h has changed to ggml-cpu-impl.h

minor change

update CMakeLists.txt

minor change

apply weight prepacking in set_tensor method in ggml-backend

fix compile error

ggml-ci

minor change

ggml-ci

update CMakeLists.txt

ggml-ci

add -march dependency

minor change

ggml-ci

change ggml_backend_buffer_is_host to return false for amx backend

ggml-ci

fix supports_op

use device reg for AMX backend

ggml-ci

minor change

ggml-ci

minor change

fix rebase

set .buffer_from_host_ptr to false for the AMX backend
2024-11-01 10:19:05 +02:00
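The "add intel amx isa detection" and ggml_amx_init steps above can be sketched roughly as follows. This is a minimal illustration, not the actual ggml-amx code: the helper names (`cpu_has_amx_int8`, `amx_request_permission`) are assumptions, but the CPUID feature bits and the Linux `ARCH_REQ_XCOMP_PERM` opt-in are the standard mechanism for AMX.

```c
#include <stdbool.h>

#if defined(__x86_64__) && defined(__linux__)
#include <cpuid.h>        // __get_cpuid_count (GCC/Clang)
#include <sys/syscall.h>  // SYS_arch_prctl
#include <unistd.h>       // syscall

// CPUID leaf 7, sub-leaf 0, EDX feature bits (Intel SDM):
#define AMX_BF16_BIT 22
#define AMX_TILE_BIT 24
#define AMX_INT8_BIT 25

// Linux gates the AMX tile register state behind an explicit opt-in:
#define ARCH_REQ_XCOMP_PERM 0x1023
#define XFEATURE_XTILEDATA  18

// Returns true if the CPU advertises AMX-TILE and AMX-INT8.
static bool cpu_has_amx_int8(void) {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        return false;
    }
    return ((edx >> AMX_TILE_BIT) & 1) && ((edx >> AMX_INT8_BIT) & 1);
}

// Ask the kernel for permission to use the XTILEDATA state component;
// without this, the first tile instruction faults even on AMX hardware.
static bool amx_request_permission(void) {
    return syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA) == 0;
}
#else
// Fallback stubs for non-x86 / non-Linux builds.
static bool cpu_has_amx_int8(void)       { return false; }
static bool amx_request_permission(void) { return false; }
#endif
```

Doing this detection at init time (and failing over to the regular CPU path) is why the later commits can register AMX as an optional ggml-backend via the device registry.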
ggml-cann cann: fix crash when llama-bench is running on multiple cann devices (llama/9627) 2024-10-03 12:22:17 +03:00
ggml-cuda CUDA: fix 1D im2col, add tests (ggml/993) 2024-11-01 10:19:05 +02:00
ggml-sycl Fixed dequant precision issues in Q4_1 and Q5_1 (llama/9711) 2024-10-05 15:23:51 +03:00
kompute-shaders whisper : reorganize source code + improve CMake (#2256) 2024-06-26 19:34:09 +03:00
vulkan-shaders vulkan : argsort barriers must be under uniform control flow (ggml/951) 2024-10-03 12:22:17 +03:00
CMakeLists.txt add amx kernel for gemm (llama/8998) 2024-11-01 10:19:05 +02:00
ggml-aarch64.c ggml : add run-time detection of neon, i8mm and sve (llama/9331) 2024-10-03 12:22:17 +03:00
ggml-aarch64.h ggml : add ggml-aarch64 (ggml/0) 2024-08-08 22:48:46 +03:00
ggml-alloc.c ggml : move more prints to the ggml log system (llama/9839) 2024-11-01 10:19:05 +02:00
ggml-backend-impl.h ggml : add backend registry / device interfaces to BLAS backend (llama/9752) 2024-11-01 10:19:05 +02:00
ggml-backend.cpp add amx kernel for gemm (llama/8998) 2024-11-01 10:19:05 +02:00
ggml-blas.cpp ggml : move more prints to the ggml log system (llama/9839) 2024-11-01 10:19:05 +02:00
ggml-cann.cpp Fix cann compilation error (llama/9891) 2024-11-01 10:19:05 +02:00
ggml-common.h ggml-quants : ternary packing for TriLMs and BitNet b1.58 (llama/8151) 2024-09-24 19:45:08 +03:00
ggml-cpu-impl.h ggml : add ggml-cpu-impl.h (skip) (#0) 2024-09-24 19:45:08 +03:00
ggml-cuda.cu CUDA: fix 1D im2col, add tests (ggml/993) 2024-11-01 10:19:05 +02:00
ggml-impl.h fix: use vm_allocate to allocate CPU backend buffer on macOS (llama/9875) 2024-11-01 10:19:05 +02:00
ggml-kompute.cpp ggml-backend : add device and backend reg interfaces (llama/9707) 2024-10-05 15:23:51 +03:00
ggml-metal.m ggml : add metal backend registry / device (llama/9713) 2024-11-01 10:19:05 +02:00
ggml-metal.metal metal : use F32 prec for K*Q in vec FA (llama/9595) 2024-09-24 19:45:08 +03:00
ggml-quants.c ggml : add run-time detection of neon, i8mm and sve (llama/9331) 2024-10-03 12:22:17 +03:00
ggml-quants.h ggml : add run-time detection of neon, i8mm and sve (llama/9331) 2024-10-03 12:22:17 +03:00
ggml-rpc.cpp rpc : add backend registry / device interfaces (llama/9812) 2024-11-01 10:19:05 +02:00
ggml-sycl.cpp ggml-backend : add device and backend reg interfaces (llama/9707) 2024-10-05 15:23:51 +03:00
ggml-vulkan.cpp vulkan : add backend registry / device interfaces (llama/9721) 2024-11-01 10:19:05 +02:00
ggml.c add amx kernel for gemm (llama/8998) 2024-11-01 10:19:05 +02:00
sgemm.cpp whisper : reorganize source code + improve CMake (#2256) 2024-06-26 19:34:09 +03:00
sgemm.h whisper : reorganize source code + improve CMake (#2256) 2024-06-26 19:34:09 +03:00
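The "fix packing B issue" / "apply weight prepacking in set_tensor method" commits above refer to rearranging the weight matrix once, at tensor-upload time, into the layout the VNNI/AMX dot-product instructions consume. A minimal sketch of the idea (the function name and the exact blocked layout are assumptions, not the ggml-amx implementation): `vpdpbusd`-style instructions accumulate groups of 4 int8 values along K, so B (K x N, row-major, K a multiple of 4) is repacked to `[K/4][N][4]` so each group of 4 K-values is contiguous.

```c
#include <stdint.h>
#include <stddef.h>

// Hypothetical VNNI-style prepacking sketch: B is K x N row-major,
// K must be a multiple of 4. Output layout is [K/4][N][4], i.e. the
// 4 K-values consumed by one int8 dot-product step sit side by side.
static void pack_b_vnni(const int8_t *B, int8_t *Bp, size_t K, size_t N) {
    for (size_t kb = 0; kb < K / 4; kb++) {        // K blocks of 4
        for (size_t n = 0; n < N; n++) {           // output columns
            for (size_t k4 = 0; k4 < 4; k4++) {    // within-block offset
                Bp[(kb * N + n) * 4 + k4] = B[(kb * 4 + k4) * N + n];
            }
        }
    }
}
```

Doing this inside the backend's set_tensor hook means the cost is paid once per weight upload, and the GEMM/GEMV hot loops then read B with unit stride.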