# GGUF Files for Shadowncensored-1B
These are the GGUF files for NovaCorp/Shadowncensored-1B.
## Downloads
| GGUF Link | Quantization | Description |
|---|---|---|
| Download | Q2_K | Lowest quality |
| Download | Q3_K_S | |
| Download | IQ3_S | I-quant; generally preferable to Q3_K_S |
| Download | IQ3_M | I-quant |
| Download | Q3_K_M | |
| Download | Q3_K_L | |
| Download | IQ4_XS | I-quant |
| Download | Q4_K_S | Fast, with good quality |
| Download | Q4_K_M | Recommended: good balance of speed and quality |
| Download | Q5_K_S | |
| Download | Q5_K_M | |
| Download | Q6_K | Very good quality |
| Download | Q8_0 | Best quality among the quants |
| Download | f16 | Full precision; not worth the size, use a quant instead |
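Each quantization level in the table trades file size for fidelity by storing weights as low-bit integers plus a per-block scale factor. As a rough illustration of the idea behind an 8-bit block quant like Q8_0 (this is a minimal sketch, not llama.cpp's actual on-disk format; the function names are illustrative):

```python
def quantize_q8_block(block):
    """Symmetric 8-bit quantization: one float scale per block of weights."""
    # Map the largest-magnitude weight to +/-127; guard against an all-zero block.
    scale = max(abs(x) for x in block) / 127.0 or 1.0
    q = [round(x / scale) for x in block]
    return scale, q

def dequantize(scale, q):
    """Recover approximate float weights from the scale and int8 values."""
    return [scale * v for v in q]

weights = [0.02, -0.51, 0.33, 1.27]
scale, q = quantize_q8_block(weights)   # q == [2, -51, 33, 127]
restored = dequantize(scale, q)         # close to the original weights
```

Lower-bit formats (Q2_K through Q6_K) follow the same principle with fewer bits per weight and more elaborate block layouts, which is why quality drops as the bit count shrinks.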
## Note from Flexan
I provide GGUFs and quantizations of publicly available models that do not yet have a GGUF equivalent, usually for models I find interesting and want to try out.
If a quant you'd like is missing, or you want a public model converted, you can request either in the community tab. For questions about this model itself, please refer to the original model repo.
You can find more info about me and what I do here.
# Shadowncensored-1B
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP (spherical linear interpolation) merge method.
### Models Merged
The following models were included in the merge:

- lunahr/gemma-3-1b-it-abliterated
- Echo9Zulu/Shadows-Gemma-3-1B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# Umbrella Corporation Official Merge Protocol v3.2
# Author: Dr. Novaciano
# Objective: Test
# PROJECT: Shadowncensored-3.2-1B
models:
  - model: lunahr/gemma-3-1b-it-abliterated # Experimental viral strain neural imprint
  - model: Echo9Zulu/Shadows-Gemma-3-1B # Baseline cognitive template, "safe mode"
merge_method: slerp # Spherical Linear Interpolation to preserve extreme viral traits smoothly
base_model: lunahr/gemma-3-1b-it-abliterated # Anchor model for stable latent space
dtype: bfloat16 # Memory-efficient precision, minimal loss in viral feature fidelity
parameters:
  t: 0.45
  normalize: false
  rescale: true
  rescale_factor: 1.12
  memory_efficient: true
  low_cpu_mem_usage: true
layer_range:
  - value: [4, 22]
tie_word_embeddings: true
tie_output_embeddings: true
```
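SLERP interpolates along the arc between the two models' weight vectors rather than along a straight line, which preserves the magnitude of the blended weights better than plain averaging. A minimal pure-Python sketch of the interpolation itself (the function name and list-based vectors are illustrative; mergekit applies this per tensor on real model weights):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors at fraction t."""
    # Angle between the vectors, from the normalized dot product.
    n0 = math.sqrt(sum(x * x for x in v0))
    n1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (n0 * n1)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# With t = 0.45, as in the config above, the result sits slightly
# closer to the first (base) model than to the second.
blended = slerp(0.45, [1.0, 0.0], [0.0, 1.0])
```

Note that for unit vectors the interpolated result stays on the unit sphere, which is the property that motivates SLERP over linear merging.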
Downloads last month: 1,527
## Model tree for Flexan/NovaCorp-Shadowncensored-1B-GGUF

Base model: NovaCorp/Shadowncensored-1B