ELBAZ NVIDIA-NEMOTRON-3-NANO-30B PRISM (UNCENSORED)

Model Release Date: December 18, 2025

Model Description

This model is an abliterated (uncensored) version of nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 that has had its refusal mechanisms removed using PRISM (Projected Refusal Isolation via Subspace Modification). The model will respond to prompts that the original model would refuse.

Key Specs: The model employs a hybrid Mixture-of-Experts (MoE) architecture consisting of 23 Mamba-2 layers, 23 MoE layers, and 6 Attention layers. Each MoE layer includes 128 routed experts plus 1 shared expert, with 6 experts activated per token. The model activates roughly 3.5B parameters per token, out of 31.58B parameters in total.

  • 31.58B parameter hybrid architecture
  • 52-layer design (Mamba-2 + MoE + Attention)
  • 1M token context length (1,048,576)
  • BF16 precision
  • Text generation with reasoning capabilities

Supported languages include English, German, Spanish, French, Italian, and Japanese. Improved using Qwen.

This model is ready for commercial use.

Motivation

This project is research and development experimentation into how large language models encode and enforce refusal behaviors. It contributes to broader AI safety research by providing empirical data on where refusal mechanisms are localized and on the tradeoffs between safety and capability.

Author

Eric Elbaz (Ex0bit)

Model Tree

nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 (Base Model)
└── Ex0bit/Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM (This Model)
    ├── Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM-BF16.gguf
    ├── Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM-Q8_0.gguf
    ├── Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM-Q6_K.gguf
    └── Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM-IQ4_XS.gguf

Available Quantizations

Quantization   Size     Description
BF16           63 GB    Full precision, best quality
Q8_0           34 GB    8-bit, near-lossless quality
Q6_K           34 GB    6-bit k-quant, excellent quality
IQ4_XS         18 GB    Importance-weighted 4-bit, great quality/size ratio

The IQ4_XS file uses importance-weighted (imatrix) quantization, which provides better quality than standard Q4 quantizations at similar sizes.
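
For reference, this is roughly how an importance-weighted IQ4_XS file is produced with llama.cpp's llama-imatrix and llama-quantize tools. This is a minimal sketch, not the exact recipe used for this release; the calibration file and paths are placeholders.

# Collect an importance matrix from a calibration text file
./llama-imatrix -m Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM-BF16.gguf \
    -f calibration.txt \
    -o imatrix.dat

# Quantize to IQ4_XS, weighting columns by the importance matrix
./llama-quantize --imatrix imatrix.dat \
    Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM-BF16.gguf \
    Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM-IQ4_XS.gguf \
    IQ4_XS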

Prompt Format

This model uses the Nemotron chat format with thinking/reasoning support:

<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant

Template Structure

Component Token/Format
System Start <|im_start|>system
User Start <|im_start|>user
Assistant Start <|im_start|>assistant
End of Turn <|im_end|>
Thinking Start <think>
Thinking End </think>
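
For illustration, a completed exchange with reasoning enabled typically looks like the example below; the placement of the thinking block inside the assistant turn is an assumption based on the tokens listed above.

<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
<think>
{model reasoning}
</think>
{final answer}<|im_end|>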

Quick Start

Using with llama.cpp

# Download the model
huggingface-cli download Ex0bit/Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM \
    Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM-IQ4_XS.gguf \
    --local-dir .

# Run inference
./llama-cli -m Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM-IQ4_XS.gguf \
    -p "<|im_start|>user
Your prompt here<|im_end|>
<|im_start|>assistant
" \
    -n 2048 \
    --temp 0.7 \
    -ngl 999
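
Alternatively, llama-cli's conversation mode applies the chat template embedded in the GGUF, so the <|im_start|> tokens do not have to be written by hand. A minimal sketch; sampling settings are illustrative.

# Interactive chat using the chat template embedded in the GGUF
./llama-cli -m Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM-IQ4_XS.gguf \
    -cnv \
    --temp 0.7 \
    -ngl 999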

llama.cpp with llama-server

# Start the server
./llama-server -m Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM-IQ4_XS.gguf \
    --host 0.0.0.0 \
    --port 8080 \
    -ngl 999 \
    -c 32768

# Example API call
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "messages": [
            {"role": "user", "content": "Your prompt here"}
        ],
        "temperature": 0.7
    }'
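
The server's API is OpenAI-compatible, so a system prompt and streaming can be added in the same request. A minimal sketch; the message contents are placeholders.

# Example API call with a system prompt and streaming enabled
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Your prompt here"}
        ],
        "temperature": 0.7,
        "stream": true
    }'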

Using with Ollama

# Pull and run directly from Hugging Face
ollama pull hf.co/Ex0bit/Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM
ollama run hf.co/Ex0bit/Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM

Note: The hf.co/ prefix is required to pull from Hugging Face. Requires Ollama 0.3.0+.
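
To select a specific quantization instead of the default, Ollama generally accepts the quantization name as a tag appended to the repository path, for example:

# Run a specific quantization by appending its tag
ollama run hf.co/Ex0bit/Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM:Q8_0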

PRISM Methodology

Method: Projected Refusal Isolation via Subspace Modification

The model was abliterated using PRISM, an abliteration methodology that combines multiple techniques to remove refusal behavior while preserving model capabilities.
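
The exact PRISM procedure is not reproduced here. As general background, abliteration methods in this family estimate a "refusal direction" in the model's activation space from contrasting harmful and harmless prompts, then project that direction out of the relevant activations or weights. A rough sketch of the general idea (not PRISM's specific steps):

h' = h - (r_hat · h) r_hat

where h is a hidden-state activation and r_hat is the unit-norm refusal direction; removing the component of h along r_hat suppresses the refusal behavior that direction encodes.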

Hardware Requirements

Quantization   Min VRAM   Recommended VRAM   Hardware Examples
IQ4_XS         12 GB      16+ GB             RTX 4090, A100, Apple M2/M3/M4 Pro/Max
Q6_K           24 GB      32+ GB             RTX 4090, A100 40GB, Apple M3/M4 Max
Q8_0           24 GB      32+ GB             RTX 4090, A100 40GB, Apple M3/M4 Max
BF16           64 GB      80+ GB             A100 80GB, H100, Multi-GPU setups

Note: The IQ4_XS quantization runs well on consumer hardware with 16GB+ VRAM.

Ethical Considerations

This model has been modified to reduce safety guardrails. Users are responsible for:

  • Complying with all applicable laws and regulations
  • Not using the model for illegal activities
  • Understanding the potential risks of unrestricted AI responses
  • Implementing appropriate safeguards in production environments

License

NVIDIA Open Model License (same as base model nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16)

Citation

@misc{elbaz2025nemotronprism,
  author = {Elbaz, Eric},
  title = {Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM: An Abliterated Nemotron Hybrid Model},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Ex0bit/Elbaz-NVIDIA-Nemotron-3-Nano-30B-A3B-PRISM}}
}

Acknowledgments

Created by: Ex0bit (Eric Elbaz)
