When enable_thinking=True, why doesn't the chat_template output end with "<think>"?

#1
by sxcasf - opened

Code:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer1 = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B")
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text1 = tokenizer1.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

print(text1)

Output:

<|im_start|>user
Give me a short introduction to large language model.<|im_end|>
<|im_start|>assistant
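This is the expected behavior for Qwen3: with enable_thinking=True (the default), the template adds nothing after the assistant header, because the model is expected to generate its own <think>...</think> block. It is only with enable_thinking=False that the template pre-fills an empty think block to suppress reasoning. A minimal sketch of that logic, as a hypothetical re-implementation for illustration (the real template is Jinja inside the tokenizer config, and apply_chat_template_sketch is not a real API):

```python
def apply_chat_template_sketch(messages, enable_thinking=True):
    # Hypothetical, simplified version of the relevant part of Qwen3's
    # chat template, for illustration only.
    text = ""
    for m in messages:
        text += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Generation prompt: open the assistant turn.
    text += "<|im_start|>assistant\n"
    if not enable_thinking:
        # Only when thinking is disabled does the template pre-fill an
        # empty think block, so the model skips the reasoning phase.
        text += "<think>\n\n</think>\n\n"
    return text


messages = [{"role": "user", "content": "Hi"}]
print(apply_chat_template_sketch(messages, enable_thinking=True))
print(apply_chat_template_sketch(messages, enable_thinking=False))
```

So the output in the question, ending at `<|im_start|>assistant`, is correct: the model itself emits `<think>` as the first generated tokens.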
sxcasf changed discussion status to closed