Ornstein-hermes-3.6-27b — GGUF Quantizations

GGUF quantizations of GestaltLabs/Ornstein-hermes-3.6-27b — a Hermes-format function-calling fine-tune of Ornstein-3.6-27B (Qwen 3.6 27B multimodal).

All K- and I-quants are calibrated with an importance matrix (imatrix) computed from 800 high-quality Hermes-format tool-use conversations sampled from DJLougen/Acta-Synthetic, so quantization error is weighted by activation statistics from tool-calling traffic rather than generic web text.

Support This Work

I'm a PhD student in visual neuroscience at the University of Toronto who also happens to spend way too much time fine-tuning, merging, and quantizing open-weight models on rented H100s and a local DGX Spark. All training compute is self-funded — balancing GPU costs against a student budget. If my uploads have been useful to you, consider buying a PhD student a coffee. It goes a long way toward keeping these experiments running.

Support on Ko-fi


Available Quants

| Quant | Bits/weight | Size | Notes |
|-------|-------------|------|-------|
| Q8_0 | ~8.5 | 26.6 GB | Near-lossless. Use if you have ≥32 GB VRAM/RAM. |
| Q6_K | ~6.6 | 20.6 GB | High fidelity, very small loss vs F16. |
| Q5_K_M | ~5.7 | 17.9 GB | Strong default for ≥24 GB cards. |
| Q4_K_M | ~4.8 | 15.4 GB | Most popular 4-bit; great quality/size tradeoff. |
| IQ4_NL | ~4.5 | 14.7 GB | imatrix-aware non-linear 4-bit, smaller than Q4_K_M. |
| IQ4_XS | ~4.3 | 14.0 GB | Smallest 4-bit; minor quality drop vs Q4_K_M. |
| Q3_K_M | ~3.9 | 12.4 GB | Aggressive but usable; ≥16 GB VRAM. |
| IQ3_M | ~3.7 | 11.7 GB | imatrix 3-bit; better than Q3_K_M at similar size. |
| IQ2_M | ~2.7 | 9.3 GB | Tight VRAM budget; expect noticeable degradation. |

Picking a quant

  • 24 GB GPU (e.g. RTX 3090/4090) → Q4_K_M or IQ4_NL
  • 32 GB (e.g. RTX 5090) → Q5_K_M
  • 48 GB (e.g. RTX A6000) → Q6_K
  • 80 GB (H100/A100) → Q8_0
  • CPU-only with 32 GB RAM → IQ4_XS or Q3_K_M
  • 16 GB VRAM → IQ3_M or IQ2_M
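
To fetch just the file you need rather than the whole repo, huggingface-cli's download command works well (a sketch; swap in whichever quant filename from the table above you settled on, and pick your own --local-dir):

# Download a single quant from the Hub into ./models
huggingface-cli download GestaltLabs/Ornstein-Hermes-3.6-27b-GGUF \
  Ornstein-hermes-3.6-27b-Q4_K_M.gguf \
  --local-dir ./models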

Usage

llama.cpp

./llama-cli -m Ornstein-hermes-3.6-27b-Q4_K_M.gguf \
  -ngl 999 \
  -c 8192 \
  --temp 0.7 \
  -p "<|im_start|>user\nWhat's the weather in Tokyo?<|im_end|>\n<|im_start|>assistant\n"

For tool calling, embed your function signatures in the system prompt using the Hermes <tools> format shown below, or use the OpenAI-compatible server (llama-server with --jinja), which maps the standard tools parameter onto the chat template for you.
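
As a sketch, here is a minimal tool-call request against llama-server, assuming it was started with ./llama-server -m Ornstein-hermes-3.6-27b-Q4_K_M.gguf --jinja -ngl 999 (the get_weather schema is the illustrative one from the format section below):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Weather in Tokyo?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'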

Ollama

# Write the Modelfile, then build the Ollama model from the local GGUF
# (the quoted heredoc keeps the Go template braces away from the shell)
cat > Modelfile <<'EOF'
FROM ./Ornstein-hermes-3.6-27b-Q4_K_M.gguf
TEMPLATE """{{- range .Messages }}<|im_start|>{{ .Role }}
{{ .Content }}<|im_end|>
{{ end }}<|im_start|>assistant
"""
PARAMETER stop "<|im_end|>"
EOF

ollama create ornstein-hermes-q4 -f Modelfile

ollama run ornstein-hermes-q4
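
Once created, you can also exercise it through Ollama's local REST API (a sketch using the standard /api/chat endpoint on Ollama's default port):

curl http://localhost:11434/api/chat -d '{
  "model": "ornstein-hermes-q4",
  "messages": [{"role": "user", "content": "Weather in Tokyo?"}],
  "stream": false
}'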

LM Studio

  1. Download any GGUF from this repo
  2. Open in LM Studio (auto-detects Qwen3 chat template)
  3. Use the built-in tool-calling interface

Hermes Tool-Calling Format

The model was trained on Hermes-style function calling. Expected message flow:

<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags.
<tools>
[{"name": "get_weather", "description": "...", "parameters": {...}}]
</tools>
<|im_end|>
<|im_start|>user
What's the weather in Tokyo?<|im_end|>
<|im_start|>assistant
<think>The user wants weather info. I'll call get_weather.</think>
<tool_call>{"name": "get_weather", "arguments": {"city": "Tokyo"}}</tool_call><|im_end|>
<|im_start|>tool
<tool_response>{"temp_c": 18, "condition": "cloudy"}</tool_response><|im_end|>
<|im_start|>assistant
It's 18°C and cloudy in Tokyo.<|im_end|>
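
If you consume raw completions instead of letting a server parse tool calls for you, the <tool_call> payload is easy to lift out with standard tools (a sketch; assumes one call per assistant turn, completion.txt as a stand-in for your captured output, and jq installed):

# Pull the JSON payload out of a Hermes-style <tool_call> block
# (assumes a single tool call per assistant turn)
sed -n 's:.*<tool_call>\(.*\)</tool_call>.*:\1:p' completion.txt | jq .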

Quantization Details

| Field | Value |
|-------|-------|
| Source | GestaltLabs/Ornstein-hermes-3.6-27b (bf16) |
| F16 GGUF size | 53.8 GB (851 tensors) |
| Tool | llama.cpp (latest) |
| imatrix corpus | 800 conversations from DJLougen/Acta-Synthetic, passes_thresholds=True, rendered with the Qwen3.6 chat template (~385K tokens, 1.74 MB) |
| imatrix params | --n-gpu-layers 999 -c 4096 -b 4096 --chunks 200 |
| Hardware | 1× NVIDIA RTX PRO 6000 Blackwell |
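
Those parameters correspond roughly to the following invocation (a sketch, not the exact command used; calibration.txt stands in for the rendered Acta corpus, and the output and quant filenames are illustrative):

# Compute the importance matrix over the calibration corpus
./llama-imatrix -m Ornstein-hermes-3.6-27b-F16.gguf \
  -f calibration.txt -o imatrix.dat \
  --n-gpu-layers 999 -c 4096 -b 4096 --chunks 200

# Quantize with the imatrix applied (repeat per quant type)
./llama-quantize --imatrix imatrix.dat \
  Ornstein-hermes-3.6-27b-F16.gguf \
  Ornstein-hermes-3.6-27b-IQ4_NL.gguf IQ4_NL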

License

Apache 2.0 — inherited from Qwen 3.6 base.

Citation

If you use this model, please consider citing the dataset:

@dataset{lougen_acta_2026,
  author = {DJLougen},
  title = {Acta: A Premium Curated Sample of High-Quality Agentic Tool-Use Conversations},
  year = {2026},
  url = {https://huggingface.co/datasets/DJLougen/Acta}
}
