added ", add_generation_prompt=True"
README.md CHANGED
@@ -115,7 +115,7 @@ and then apply the chat template to get a formatted prompt:
 ```
 tokenizer = AutoTokenizer.from_pretrained('Trelis/Meta-Llama-3-8B-Instruct-function-calling', trust_remote_code=True)
 
-prompt = tokenizer.apply_chat_template(prompt, tokenize=False)
+prompt = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
 ```
 If you are using a gated model, you need to first run:
 ```
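
The effect of `add_generation_prompt=True` is that the formatted prompt ends with an open assistant turn, so the model continues as the assistant instead of continuing the user message. The sketch below approximates the Llama 3 chat template with a hypothetical helper (`apply_llama3_template` is an illustration, not the `transformers` implementation) to show the difference:

```python
# Hypothetical helper approximating the Llama 3 chat template,
# for illustration only -- not the transformers implementation.

def apply_llama3_template(messages, add_generation_prompt=False):
    """Format a list of {'role', 'content'} dicts in Llama 3 style."""
    out = "<|begin_of_text|>"
    for m in messages:
        out += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    if add_generation_prompt:
        # Open the assistant turn; generation continues from here.
        out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

msgs = [{"role": "user", "content": "What functions are available?"}]

without_flag = apply_llama3_template(msgs)
with_flag = apply_llama3_template(msgs, add_generation_prompt=True)

# Only the flagged version ends with the assistant header.
print(with_flag.endswith("<|start_header_id|>assistant<|end_header_id|>\n\n"))
```

Without the flag, the prompt ends at the user's `<|eot_id|>`, and the model may generate another user turn rather than an assistant reply.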