DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF

Tags: Text Generation · GGUF · English · Chinese · GLM 4.7 Flash · thinking · reasoning · NEO Imatrix · MAX Quants · 16 bit precision output tensor · heretic · uncensored · abliterated · deep reasoning · fine tune · creative · creative writing · fiction writing · plot generation · sub-plot generation · story generation · scene continue · storytelling · fiction story · science fiction · romance · all genres · story · writing · vivid prosing · vivid writing · fiction · roleplaying · bfloat16 · swearing · rp · horror · imatrix · conversational

Instructions for using DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF with libraries, inference providers, notebooks, and local apps. Follow the instructions below to get started.

  • Libraries
  • llama-cpp-python

    How to use DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF with llama-cpp-python:

    # !pip install llama-cpp-python

    from llama_cpp import Llama

    # Download the selected GGUF quant from the Hub (cached locally) and load it
    llm = Llama.from_pretrained(
        repo_id="DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF",
        filename="GLM-4.7-Flash-Uncen-Hrt-NEO-CODE-MAX-imat-D_AU-IQ2_M.gguf",
    )

    # Run a chat completion using the OpenAI-style message format
    llm.create_chat_completion(
        messages=[
            {"role": "user", "content": "What is the capital of France?"}
        ]
    )
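
    create_chat_completion returns an OpenAI-style response dict; a minimal sketch of reading the reply from the call above:

    response = llm.create_chat_completion(
        messages=[{"role": "user", "content": "What is the capital of France?"}]
    )
    # The generated text lives in the first choice's message
    print(response["choices"][0]["message"]["content"])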
  • Notebooks
  • Google Colab
  • Kaggle
  • Local Apps
  • llama.cpp

    How to use DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF with llama.cpp:

    Install with Homebrew (macOS/Linux)
    brew install llama.cpp
    # Start a local OpenAI-compatible server with a web UI:
    llama-server -hf DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M
    # Run inference directly in the terminal:
    llama-cli -hf DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M
    Install with WinGet (Windows)
    winget install llama.cpp
    # Start a local OpenAI-compatible server with a web UI:
    llama-server -hf DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M
    # Run inference directly in the terminal:
    llama-cli -hf DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M
    Use a pre-built binary
    # Download pre-built binary from:
    # https://github.com/ggerganov/llama.cpp/releases
    # Start a local OpenAI-compatible server with a web UI:
    ./llama-server -hf DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M
    # Run inference directly in the terminal:
    ./llama-cli -hf DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M
    Build from source
    git clone https://github.com/ggerganov/llama.cpp.git
    cd llama.cpp
    cmake -B build
    cmake --build build -j --target llama-server llama-cli
    # Start a local OpenAI-compatible server with a web UI:
    ./build/bin/llama-server -hf DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M
    # Run inference directly in the terminal:
    ./build/bin/llama-cli -hf DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M
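
    Once llama-server is running (it listens on port 8080 by default), any OpenAI-compatible client can reach it. A minimal Python sketch using the openai package; the model name and port are assumed from the defaults above:

    from openai import OpenAI

    # llama-server exposes an OpenAI-compatible API on port 8080 by default
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
    response = client.chat.completions.create(
        model="DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M",
        messages=[{"role": "user", "content": "What is the capital of France?"}],
    )
    print(response.choices[0].message.content)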
  • LM Studio
  • Jan
  • vLLM

    How to use DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF with vLLM:

    Install with pip and serve the model
    # Install vLLM from pip:
    pip install vllm
    # Start the vLLM server:
    vllm serve "DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF"
    # Call the server using curl (OpenAI-compatible API):
    curl -X POST "http://localhost:8000/v1/chat/completions" \
    	-H "Content-Type: application/json" \
    	--data '{
    		"model": "DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF",
    		"messages": [
    			{
    				"role": "user",
    				"content": "What is the capital of France?"
    			}
    		]
    	}'
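
    The same request from Python, mirroring the curl call above with the requests library (port and payload follow vLLM's OpenAI-compatible defaults):

    import requests

    payload = {
        "model": "DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    }
    r = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
    print(r.json()["choices"][0]["message"]["content"])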
  • Ollama

    How to use DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF with Ollama:

    ollama run hf.co/DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M
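
    After the model is pulled, Ollama's local REST API (port 11434 by default) can also be scripted; a minimal sketch with requests, assuming a default install:

    import requests

    # stream=False returns one JSON object instead of streamed chunks
    payload = {
        "model": "hf.co/DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": False,
    }
    r = requests.post("http://localhost:11434/api/chat", json=payload)
    print(r.json()["message"]["content"])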
  • Unsloth Studio

    How to use DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF with Unsloth Studio:

    Install Unsloth Studio (macOS, Linux, WSL)
    curl -fsSL https://unsloth.ai/install.sh | sh
    # Run unsloth studio
    unsloth studio -H 0.0.0.0 -p 8888
    # Then open http://localhost:8888 in your browser
    # Search for DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF to start chatting
    Install Unsloth Studio (Windows)
    irm https://unsloth.ai/install.ps1 | iex
    # Run unsloth studio
    unsloth studio -H 0.0.0.0 -p 8888
    # Then open http://localhost:8888 in your browser
    # Search for DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF to start chatting
    Use Hugging Face Spaces for Unsloth
    # No setup required
    # Open https://huggingface.co/spaces/unsloth/studio in your browser
    # Search for DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF to start chatting
  • Pi

    How to use DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF with Pi:

    Start the llama.cpp server
    # Install llama.cpp:
    brew install llama.cpp
    # Start a local OpenAI-compatible server:
    llama-server -hf DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M
    Configure the model in Pi
    # Install Pi:
    npm install -g @mariozechner/pi-coding-agent
    # Add to ~/.pi/agent/models.json:
    {
      "providers": {
        "llama-cpp": {
          "baseUrl": "http://localhost:8080/v1",
          "api": "openai-completions",
          "apiKey": "none",
          "models": [
            {
              "id": "DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M"
            }
          ]
        }
      }
    }
    Run Pi
    # Start Pi in your project directory:
    pi
  • Hermes Agent

    How to use DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF with Hermes Agent:

    Start the llama.cpp server
    # Install llama.cpp:
    brew install llama.cpp
    # Start a local OpenAI-compatible server:
    llama-server -hf DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M
    Configure Hermes
    # Install Hermes:
    curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
    hermes setup
    # Point Hermes at the local server:
    hermes config set model.provider custom
    hermes config set model.base_url http://127.0.0.1:8080/v1
    hermes config set model.default DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M
    Run Hermes
    hermes
  • Docker Model Runner

    How to use DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF with Docker Model Runner:

    docker model run hf.co/DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M
  • Lemonade

    How to use DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF with Lemonade:

    Pull the model
    # Download Lemonade from https://lemonade-server.ai/
    lemonade pull DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF:Q4_K_M
    Run and chat with the model
    lemonade run user.GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF-Q4_K_M
    List all available models
    lemonade list
Community

  • need its safetensors file (3 replies) · #9 opened 5 days ago by tiegang
  • Hoping someone can finally help me with this (5 replies) · #8 opened 24 days ago by ElvisM
  • The Partner, Not only the Tool ... (3 replies) · #7 opened 2 months ago by ScydX
  • Install & run this model easily using llmpm · #6 opened 2 months ago by sarthak-saxena
  • It is yapping, it is uncensored and it is fine :-D (🔥👍 2 · 9 replies) · #3 opened 4 months ago by Merlinoz11
  • Brainstorm (6 replies) · #2 opened 4 months ago by dwadaadadqq
  • Oh hey, that's me! (👍 5 · 1 reply) · #1 opened 4 months ago by Olafangensan