How to use Open4bits/Mixtral-8x7B-Instruct-v0.1-mlx-4Bit with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir Mixtral-8x7B-Instruct-v0.1-mlx-4Bit Open4bits/Mixtral-8x7B-Instruct-v0.1-mlx-4Bit
```
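Once the weights are downloaded, the model can be loaded and prompted with the `mlx-lm` package (`pip install mlx-lm`). The snippet below is a minimal sketch using mlx-lm's standard `load`/`generate` helpers together with the tokenizer's chat template, which Mixtral-Instruct expects; the example prompt is an arbitrary placeholder, and running this requires Apple silicon with the full 4-bit weights available locally or on the Hub.

```python
# Minimal sketch: load the 4-bit Mixtral model with mlx-lm and generate a reply.
# Requires Apple silicon and `pip install mlx-lm`; `load` accepts either the
# Hub repo id (shown here) or the local directory created by the download step.
from mlx_lm import load, generate

model, tokenizer = load("Open4bits/Mixtral-8x7B-Instruct-v0.1-mlx-4Bit")

# Format the prompt with the model's chat template before generating.
messages = [{"role": "user", "content": "Explain mixture-of-experts in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
print(response)
```

Passing `--local-dir` output to `load` instead of the repo id avoids a second download.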