How to use from vLLM
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "QueryloopAI/Liberated-Miqu-70B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "QueryloopAI/Liberated-Miqu-70B",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
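
Because the vLLM server exposes an OpenAI-compatible API, you can also call it from Python with the openai client instead of curl. The snippet below is a minimal sketch, assuming the server started above is reachable at http://localhost:8000; vLLM does not require an API key by default, so a placeholder value is used.

# Minimal sketch: query the local vLLM server via its OpenAI-compatible API.
# Assumes `vllm serve "QueryloopAI/Liberated-Miqu-70B"` is running on port 8000;
# the api_key is a placeholder since the server does not check it by default.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="QueryloopAI/Liberated-Miqu-70B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
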
Use Docker
docker model run hf.co/QueryloopAI/Liberated-Miqu-70B

Liberated Miqu 70B

Liberated Miqu 70B is a fine-tune of Miqu-70B on Abacus AI's SystemChat dataset. The model was trained on 2x A100 GPUs for 1 epoch.
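
If you prefer to load the weights directly rather than run a server, a plain Transformers workflow also applies. The sketch below is an illustration, not an official recipe: it assumes the tokenizer ships a chat template and that you have enough GPU memory for a 70B model (device_map="auto" lets Accelerate shard or offload the weights).

# Minimal sketch (assumptions noted above): load the model with Transformers
# and generate a reply. A 70B model needs substantial GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "QueryloopAI/Liberated-Miqu-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [{"role": "user", "content": "What is the capital of France?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))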

πŸ† Evaluation results

Coming soon

Framework versions

  • Transformers 4.38.0.dev0
  • Pytorch 2.1.2+cu118
  • Datasets 2.17.0
  • Tokenizers 0.15.0
  • Axolotl 0.4.0

Built with Axolotl
