readme : add Vulkan notice (#2488)

* Add Vulkan notice in README.md
* Fix formatting for Vulkan section in README.md
* Fix formatting in README.md

parent 1d5752fa42
commit f7c99e49b3

Changed file: README.md (11 additions)
@@ -18,6 +18,7 @@ High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisp
 - Mixed F16 / F32 precision
 - [4-bit and 5-bit integer quantization support](https://github.com/ggerganov/whisper.cpp#quantization)
 - Zero memory allocations at runtime
+- Vulkan support
 - Support for CPU-only inference
 - [Efficient GPU support for NVIDIA](https://github.com/ggerganov/whisper.cpp#nvidia-gpu-support-via-cublas)
 - [OpenVINO Support](https://github.com/ggerganov/whisper.cpp#openvino-support)
@@ -429,6 +430,16 @@ make clean
 GGML_CUDA=1 make -j
 ```
 
+## Vulkan GPU support
+Cross-vendor solution which allows you to accelerate workload on your GPU.
+First, make sure your graphics card driver provides support for Vulkan API.
+
+Now build `whisper.cpp` with Vulkan support:
+```
+make clean
+make GGML_VULKAN=1
+```
+
 ## BLAS CPU support via OpenBLAS
 
 Encoder processing can be accelerated on the CPU via OpenBLAS.
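The backend-specific `make` invocations in the diff (CUDA vs. Vulkan) differ only in the variable passed on the command line. A minimal sketch of selecting that flag per backend; only `GGML_VULKAN=1` and `GGML_CUDA=1` come from the README, while the `build_flags` helper is a hypothetical illustration, not part of whisper.cpp:

```shell
# Map a backend name to the make variable the README uses for it.
# build_flags is a hypothetical helper; the variable names are from the README.
build_flags() {
  case "$1" in
    vulkan) echo "GGML_VULKAN=1" ;;
    cuda)   echo "GGML_CUDA=1" ;;
    *)      echo "" ;;
  esac
}

# Usage (not run here): make clean && make -j $(build_flags vulkan)
```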