whisper.cpp/ggml
Diego Devesa  63a4e09a0f  ggml : fix memory leaks when loading invalid gguf files (llama/10094)  2024-11-15 15:21:04 +02:00

* ggml : fix gguf string leak when reading kv pairs fails
* ggml : avoid crashing with GGML_ABORT when the KV has an invalid type
* ggml : avoid crashing on failed memory allocations when loading a gguf file
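The last two bullets suggest that the gguf loader now cleans up and returns NULL on bad input or failed allocations rather than calling GGML_ABORT, so callers should check the result. A minimal caller-side sketch, assuming the public gguf API in ggml.h; the file name and error message are illustrative, not part of the commit:

    #include <stdio.h>
    #include "ggml.h"

    int main(void) {
        struct gguf_init_params params = {
            /*.no_alloc =*/ true,   // only read metadata, do not allocate tensor data
            /*.ctx      =*/ NULL,
        };

        // "model.gguf" is a placeholder path; with this fix an invalid file is
        // expected to yield NULL instead of aborting the process (assumption
        // based on the commit message).
        struct gguf_context * ctx = gguf_init_from_file("model.gguf", params);
        if (ctx == NULL) {
            fprintf(stderr, "failed to load gguf file (invalid contents or allocation failure)\n");
            return 1;
        }

        printf("loaded %d key/value pairs\n", (int) gguf_get_n_kv(ctx));

        gguf_free(ctx);
        return 0;
    }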
Name                          Last commit                                                            Date
cmake                         whisper : reorganize source code + improve CMake (#2256)              2024-06-26 19:34:09 +03:00
include                       llama : refactor model loader with backend registry (llama/10026)     2024-11-15 15:21:04 +02:00
src                           ggml : fix memory leaks when loading invalid gguf files (llama/10094) 2024-11-15 15:21:04 +02:00
.gitignore                    whisper : reorganize source code + improve CMake (#2256)              2024-06-26 19:34:09 +03:00
CMakeLists.txt                add amx kernel for gemm (llama/8998)                                  2024-11-01 10:19:05 +02:00
ggml_vk_generate_shaders.py   whisper : reorganize source code + improve CMake (#2256)              2024-06-26 19:34:09 +03:00