When using it with opencode, tool calls run into trouble:
29.27.606.103 W Template supports tool calls but does not natively describe tools. The fallback behaviour used may produce bad results, inspect prompt w/ --verbose & consider overriding the template.
29.27.612.023 W srv operator(): got exception: {"error":{"code":500,"message":"\n------------\nWhile executing FilterExpression at line 55, column 63 in source:\n...- for args_name, args_value in arguments|items %}↵ {{- '<...\n ^\nError: Unknown (built-in) filter 'items' for type String","type":"server_error"}}
Hello, are you running with llama.cpp?
Make sure you download the latest version of the model because I believe they fixed the jinja template in the step3p5_flash_Q4_K_S-00001-of-00012.gguf file.
In any case, I have extracted the jinja template, which can be used with any GGUF version of the model you find on Hugging Face (I am using the bartowski ones and had the same problem with Droid).
Here is a link to a gist with the jinja template and the code used to extract jinja templates from GGUF files.
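Roughly, the extraction boils down to something like this (a minimal sketch using the gguf Python package that ships with llama.cpp; the filename is just an example, and the template is stored under the tokenizer.chat_template metadata key):

from gguf import GGUFReader

# Read the GGUF metadata (for split models, point at the first shard)
reader = GGUFReader("step3p5_flash_Q4_K_S-00001-of-00012.gguf")

# The chat template is a string field in the key/value metadata
field = reader.fields["tokenizer.chat_template"]

# The string bytes live in the part referenced by the field's data index
template = field.parts[field.data[0]].tobytes().decode("utf-8")

with open("chat_template.jinja", "w") as f:
    f.write(template)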
If you are running with llama.cpp, specify the jinja template with --chat-template-file.
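For example, an invocation along these lines (paths are placeholders; --jinja enables Jinja template processing in llama-server):

llama-server -m step3p5_flash_Q4_K_S-00001-of-00012.gguf --jinja --chat-template-file chat_template.jinja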
If you’re using the latest model, you might want to try this PR; there’s currently a tool-calling issue on llama.cpp mainline:
https://github.com/ggml-org/llama.cpp/pull/18675
YES, using the template from the step3p5_flash_Q4_K_S-00001-of-00012.gguf file, tool calls now work much better. Thanks!