LFM2.5-1.2B: Upgrade to a quad-core CPU.
Files changed:
- README.md +1 -1
- src/config.py +1 -1
README.md
CHANGED

```diff
@@ -1,6 +1,6 @@
 ---
 title: LiquidAI/LFM2.5-1.2B-Instruct
-short_description: LFM2.5 (1.2B)
+short_description: LFM2.5 (1.2B) run on Ollama using a quad-core CPU
 license: apache-2.0
 emoji: ⚡
 colorFrom: red
```
src/config.py
CHANGED

```diff
@@ -18,7 +18,7 @@ This space run the <b><a href="https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instr

 Official <b>documentation</b> for using Ollama with the OpenAI-Compatible API can be found <b><a href="https://docs.ollama.com/api/openai-compatibility" target="_blank">here</a></b>.<br><br>

-LFM2.5 (1.2B) runs entirely on a <b>
+LFM2.5 (1.2B) runs entirely on <b>CPU</b>, utilizing a <b>quad-core (4 cores)</b> configuration. Thanks to its compact size, the model can operate efficiently on modest hardware.<br><br>

 The LFM2.5 (1.2B) model can also be viewed or downloaded from the official repository <b><a href="https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct-GGUF" target="_blank">here</a></b>.<br><br>
```
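The updated text in src/config.py points readers at Ollama's OpenAI-compatible API. As a minimal sketch of what a client call against this Space's backend could look like: the code below builds an OpenAI-style chat-completions payload. The model tag `lfm2.5-1.2b-instruct` and the helper function are illustrative assumptions (the actual tag depends on how the GGUF build was pulled into Ollama); the base URL follows Ollama's documented default of `localhost:11434`.

```python
import json

# Ollama's OpenAI-compatible endpoint (see docs.ollama.com/api/openai-compatibility)
# listens on localhost:11434 by default.
BASE_URL = "http://localhost:11434/v1"

# Assumed model tag -- adjust to whatever name the GGUF build was
# registered under when pulled into Ollama.
MODEL = "lfm2.5-1.2b-instruct"


def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-style chat-completions payload for Ollama."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


payload = build_chat_request("Hello!")
print(json.dumps(payload))
```

In practice this payload would be POSTed to `BASE_URL + "/chat/completions"` (for example with the `openai` client pointed at that base URL). Note that the quad-core constraint from this commit is not expressed through the OpenAI-compatible API; Ollama pins thread count through its native options (e.g. the `num_thread` Modelfile parameter), so the payload above carries no CPU configuration.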