# MLX UnfilteredAI-1B Model Card
This is the MLX version of the UnfilteredAI-1B model, optimized for efficient inference on Apple Silicon devices using Apple's MLX framework.
## Model Details

### Model Description
The MLX UnfilteredAI-1B is a converted version of the original UnfilteredAI-1B text generation model, adapted for Apple's MLX machine learning framework. This allows for fast and efficient text generation on macOS devices with M-series chips, leveraging the native capabilities of Apple Silicon.
- Developed by: UnfilteredAI
- Model type: Text generation language model
- Language(s) (NLP): English (primary), supports other languages
- License: MIT
- Base model: UnfilteredAI/UNfilteredAI-1B
### Model Sources
- Repository: https://huggingface.co/UnfilteredAI/UNfilteredAI-1B
- MLX Conversion: This repository
## Uses

### Direct Use
This model can be used for text generation tasks such as:
- Creative writing
- Conversational AI
- Educational and research applications
- Content generation without traditional filters
### Out-of-Scope Use
The model is uncensored and may generate sensitive, controversial, or harmful content. It should not be used for:
- Generating illegal or unethical content
- Misinformation or propaganda
- Any applications requiring content moderation
## Bias, Risks, and Limitations
As an uncensored model, it may exhibit biases present in the training data and generate inappropriate content. Users should be aware of potential risks including:
- Generation of biased or offensive text
- Potential for misuse in harmful applications
- Inconsistencies in output quality
### Recommendations
Use the model responsibly and with caution. If you deploy it in a production environment, implement appropriate safeguards such as output moderation and human review; one minimal illustration follows.
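As one illustration, a deployment could screen generated text before returning it to users. This is a minimal, hypothetical sketch: the `moderated_generate` helper and its keyword blocklist are illustrative stand-ins for a real moderation model or service, not part of mlx-lm:

```python
from mlx_lm import load, generate

# Illustrative placeholder; a production system should use a proper
# moderation model or service instead of a keyword blocklist.
BLOCKLIST = {"example-banned-term"}

def moderated_generate(model, tokenizer, prompt, **kwargs):
    # Generate first, then screen the output before returning it
    text = generate(model, tokenizer, prompt=prompt, **kwargs)
    if any(term in text.lower() for term in BLOCKLIST):
        return "[response withheld by safety filter]"
    return text

model, tokenizer = load("Vlor999/mlx-UNfilteredAI-1B")
print(moderated_generate(model, tokenizer, "Hello!", max_tokens=100))
```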
## How to Get Started with the Model
To use this model, you'll need to install the MLX library and mlx-lm:

```bash
# uv version
uv init .
uv add mlx mlx-lm

# pip version
pip install mlx mlx-lm
```
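As a quick, optional sanity check of the installation, you can print MLX's default device; on Apple Silicon it should report the GPU (Metal):

```python
import mlx.core as mx

# Expect a GPU (Metal) device on M-series Macs
print(mx.default_device())
```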
Then, load and use the model:

```python
from mlx_lm import load, generate

# Load the model and tokenizer from the Hugging Face Hub
model, tokenizer = load("Vlor999/mlx-UNfilteredAI-1B")

# Generate text
prompt = "Hello, how are you?"
response = generate(model, tokenizer, prompt=prompt, max_tokens=100, verbose=True)
print(response)
```
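If the tokenizer defines a chat template, formatting the prompt as a conversation before generating often improves responses. A minimal sketch, assuming this checkpoint ships a chat template (it falls back to the raw prompt if not):

```python
from mlx_lm import load, generate

model, tokenizer = load("Vlor999/mlx-UNfilteredAI-1B")

messages = [{"role": "user", "content": "Hello, how are you?"}]
if tokenizer.chat_template is not None:
    # Render the conversation with the model's own chat template
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=False
    )
else:
    # No template defined: fall back to the plain user message
    prompt = messages[0]["content"]

response = generate(model, tokenizer, prompt=prompt, max_tokens=100)
print(response)
```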
Save the code above to a file and run it:

```bash
uv run python filename.py

# or by activating the environment first
source .venv/bin/activate
python filename.py
```
Or chat with the model directly using mlx-lm:

```bash
uv run mlx_lm.chat --model Vlor999/mlx-UNfilteredAI-1B

# Example:
uv run mlx_lm.chat --model Vlor999/mlx-UNfilteredAI-1B --max-tokens=4096

# For more information about how to use mlx_lm, run:
uv run mlx_lm.chat --help
```