# GGUF Files for Andy-Feather-V2-700m
These are the GGUF files for haphazardlyinc/Andy-Feather-V2-700m.
## Downloads
| GGUF Link | Quantization | Description |
|---|---|---|
| Download | Q2_K | Lowest quality |
| Download | IQ3_XS | Integer quant |
| Download | Q3_K_S | |
| Download | IQ3_S | Integer quant, preferable over Q3_K_S |
| Download | IQ3_M | Integer quant |
| Download | Q3_K_M | |
| Download | Q3_K_L | |
| Download | IQ4_XS | Integer quant |
| Download | Q4_K_S | Fast with good performance |
| Download | Q4_K_M | Recommended: Perfect mix of speed and performance |
| Download | Q5_K_S | |
| Download | Q5_K_M | |
| Download | Q6_K | Very good quality |
| Download | Q8_0 | Best quality |
| Download | f16 | Full precision, don't bother; use a quant |
## Note from Flexan
I provide GGUFs and quantizations of publicly available models that do not yet have a GGUF equivalent. The process is not automated: I download, convert, quantize, and upload them by hand, usually for models I find interesting and want to try out.

If a quant you'd like is missing, or you'd like another public model converted, you can request it in the community tab. For questions about the model itself, please refer to the original model repo.
# Model Card for Andy Feather 700M
⚠️⚠️⚠️ **IMPORTANT** ⚠️⚠️⚠️ In its current state, this model DOES NOT perform well with Mindcraft and can only handle very rudimentary tasks. It is a HUGE step up from V1, but its performance is still ABYSMAL.
This model is a fine-tuned LoRA adapter built on top of LiquidAI/LFM2-700M.
It is designed for CPU inference and for users with limited GPU resources, requiring under 1 GB of memory to load the model at Q8 precision.
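As a rough sanity check on that figure, the weight-only memory can be estimated from the parameter count and the effective bits per weight. The bit-width values below are approximations for the corresponding GGUF quant types, and real files are slightly larger due to per-block scales and metadata (the KV cache adds more at runtime):

```python
# Rough weight-only memory estimate for a ~700M-parameter model at
# various quantization bit widths. Bits-per-weight figures are
# approximate effective values, not exact GGUF sizes.
PARAMS = 700_000_000

def weight_gb(bits_per_weight: float) -> float:
    """Weight-only size in GB (1 GB = 1e9 bytes) at a given bit width."""
    return PARAMS * bits_per_weight / 8 / 1e9

for name, bits in [("Q2_K", 2.6), ("Q4_K_M", 4.8), ("Q8_0", 8.5), ("f16", 16.0)]:
    print(f"{name:7s} ~{weight_gb(bits):.2f} GB")
```

At roughly 8.5 effective bits per weight, Q8_0 comes out to about 0.74 GB of weights, consistent with the sub-1 GB claim above.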
The model is NOT compatible with Ollama; using LMStudio instead is highly recommended. Here is an example Mindcraft profile:
```json
{
  "name": "andy",
  "model": {
    "api": "openai",
    "url": "http://localhost:1234/v1",
    "model": "Andy-Feather-V2-700m"
  },
  "embedding": {
    "api": "openai",
    "url": "http://localhost:1234/v1",
    "model": "text-embedding-nomic-embed-text-v1.5"
  }
}
```
And an example Mindcraft `keys.json`:
```json
{
  "OPENAI_API_KEY": "http://localhost:1234/v1",
  "OPENAI_ORG_ID": "",
  "GEMINI_API_KEY": "",
  "ANTHROPIC_API_KEY": "",
  "REPLICATE_API_KEY": "",
  "GROQCLOUD_API_KEY": "",
  "HUGGINGFACE_API_KEY": "",
  "QWEN_API_KEY": "",
  "XAI_API_KEY": "",
  "MISTRAL_API_KEY": "",
  "DEEPSEEK_API_KEY": "",
  "GHLF_API_KEY": "",
  "HYPERBOLIC_API_KEY": "",
  "NOVITA_API_KEY": "",
  "OPENROUTER_API_KEY": "",
  "CEREBRAS_API_KEY": "",
  "MERCURY_API_KEY": ""
}
```
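Once LM Studio's local server is running with the model loaded, the setup above can be smoke-tested outside Mindcraft with any OpenAI-compatible client. A minimal sketch using only the Python standard library; the port and model name are taken from the example profile above and may differ in your setup:

```python
import json
import urllib.request

# Assumed LM Studio default local server address (matches the profile above).
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt: str, model: str = "Andy-Feather-V2-700m") -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def chat(prompt: str) -> str:
    """Send the prompt to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# With the server running, try e.g.:
#   print(chat("Say hello in one short sentence."))
```

If this returns a sensible reply, Mindcraft should be able to reach the model with the same URL and model name.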
## Training Data
This model was trained on the following datasets:
- Sweaterdog/Andy-base-2
- Sweaterdog/Andy-4-base
- Sweaterdog/Andy-4-FT
### Dataset License

The training data is subject to the Andy 1.0 License.
This work uses data and models created by @Sweaterdog.
```bibtex
@misc{vonwerra2022trl,
  title        = {{TRL: Transformer Reinforcement Learning}},
  author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
  year         = {2020},
  journal      = {GitHub repository},
  publisher    = {GitHub},
  howpublished = {\url{https://github.com/huggingface/trl}}
}

@misc{liquidai_lfm2_700m,
  title        = {LFM2-700M},
  author       = {Liquid AI},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/LiquidAI/LFM2-700M}}
}
```