A newer version of this model is available: qikp/kite-3.1-20m

Kite

🎉 You are looking at Kite 1.6, which is now trained using pika!

Kite is a small language model with roughly 1 million parameters, trained without any special optimizations.
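The card does not publish Kite's architecture, but a back-of-the-envelope count shows how a GPT-style decoder lands near 1 million parameters. All dimensions below (vocab_size=2048, d_model=128, n_layers=4, tied embeddings, no biases) are illustrative assumptions, not Kite's actual configuration:

```python
def transformer_params(vocab_size: int, d_model: int, n_layers: int,
                       tied_embeddings: bool = True) -> int:
    """Rough parameter count for a GPT-style decoder-only transformer."""
    # Token embedding matrix (reused as the output head when tied).
    embedding = vocab_size * d_model
    # Attention projections (Q, K, V, output), each d_model x d_model, no bias.
    attention = 4 * d_model * d_model
    # Feed-forward block: d_model -> 4*d_model -> d_model, no bias.
    mlp = 2 * 4 * d_model * d_model
    # Two LayerNorms per layer, each with scale and shift vectors.
    norms = 2 * 2 * d_model
    per_layer = attention + mlp + norms
    # Final LayerNorm before the output head.
    final_norm = 2 * d_model
    head = 0 if tied_embeddings else vocab_size * d_model
    return embedding + n_layers * per_layer + final_norm + head

total = transformer_params(vocab_size=2048, d_model=128, n_layers=4)
print(total)  # ~1.05M parameters under these assumed dimensions
```

Under these assumed dimensions the count comes out to about 1.05M, in the same ballpark as the 999k figure reported for this model.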

Training

It was trained on this dataset for 20,000 steps (1 epoch, batch size 1) using the pika tokenizer.
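These hyperparameters pin down the training budget: at batch size 1, one optimizer step consumes one example, so 20,000 steps over a single epoch implies the dataset holds roughly 20,000 examples. A minimal sketch of that arithmetic:

```python
steps = 20_000
batch_size = 1
epochs = 1

# Each step consumes batch_size examples.
examples_seen = steps * batch_size
# One epoch covers the dataset exactly once, so its size follows directly.
approx_dataset_size = examples_seen // epochs
print(examples_seen, approx_dataset_size)
```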

Limitations

Due to its size, the model is not suitable for production workloads.

Format: Safetensors · Model size: 999k params · Tensor type: BF16

Model tree for qikp/kite-1.6-1m

3 models finetuned from this model

Dataset used to train qikp/kite-1.6-1m