* ggml : drop support for QK_K=64 (ggml-ci)
* opencl : restore QK_K=256 define
* sync : update scripts
* sync : ggml
* talk-llama : sync llama.cpp
* make : WHISPER_CUBLAS -> WHISPER_CUDA (see the build example below)
* ci : try to fix sycl build
* talk-llama : fix make build
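
The WHISPER_CUBLAS -> WHISPER_CUDA rename changes how a CUDA-enabled build is requested. A minimal sketch, assuming the Makefile-based build of whisper.cpp; the exact invocation may differ for your setup:

```sh
# Previously, CUDA (via cuBLAS) was enabled with the old flag name:
#   make WHISPER_CUBLAS=1
# After the rename, use the new flag instead:
make WHISPER_CUDA=1
```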