
llama.cpp/examples/simple-chat

The purpose of this example is to demonstrate a minimal usage of llama.cpp to create a simple chat program using the chat template from the GGUF file.

./llama-simple-chat -m Meta-Llama-3.1-8B-Instruct.gguf -c 2048
...
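At startup the program reads the chat template string from the GGUF metadata and uses it to turn the conversation (a list of role/content messages) into a single prompt string before tokenization. As a rough illustration of what that templating produces, here is a pure-Python sketch that hard-codes a Llama-3-Instruct-style layout; the real template is model-specific and is applied by llama.cpp itself, and the function name here is hypothetical:

```python
def apply_llama3_template(messages, add_generation_prompt=True):
    # Hypothetical stand-in for applying the template stored in the GGUF file.
    # Llama 3 Instruct wraps each message as:
    #   <|start_header_id|>ROLE<|end_header_id|>\n\nCONTENT<|eot_id|>
    out = "<|begin_of_text|>"
    for m in messages:
        out += (f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
                f"{m['content']}<|eot_id|>")
    if add_generation_prompt:
        # Open an assistant turn so the model generates the reply next.
        out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

prompt = apply_llama3_template([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

Because the template ships inside the GGUF file, the same chat loop works unchanged across models with different prompt formats.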