"-ml" instead of "-mg" for specifying the llama file
* talk-llama : talk with LLaMA AI
* talk-llama : disable EOS token
* talk-llama : add README instructions
* ggml : fix build in debug