Overhaul: REAP-CRACK v2 — fresh Q4 surgery on REAP base, 37 tok/s, 5/5 compliance
This view is limited to 50 files because it contains too many changes. See raw diff.
- .gitattributes +1 -0
- README.md +47 -72
- __pycache__/model.cpython-314.pyc +0 -0
- config.json +15 -38
- logo.png +0 -0
- logo.svg +0 -60
- model-steering-v1-backup.bak +0 -0
- model.py +0 -331
- model.safetensors-00001-of-00094.safetensors +0 -3
- model.safetensors-00002-of-00094.safetensors +0 -3
- model.safetensors-00003-of-00094.safetensors +0 -3
- model.safetensors-00004-of-00094.safetensors +0 -3
- model.safetensors-00005-of-00094.safetensors +0 -3
- model.safetensors-00006-of-00094.safetensors +0 -3
- model.safetensors-00007-of-00094.safetensors +0 -3
- model.safetensors-00008-of-00094.safetensors +0 -3
- model.safetensors-00009-of-00094.safetensors +0 -3
- model.safetensors-00010-of-00094.safetensors +0 -3
- model.safetensors-00011-of-00094.safetensors +0 -3
- model.safetensors-00012-of-00094.safetensors +0 -3
- model.safetensors-00013-of-00094.safetensors +0 -3
- model.safetensors-00014-of-00094.safetensors +0 -3
- model.safetensors-00015-of-00094.safetensors +0 -3
- model.safetensors-00016-of-00094.safetensors +0 -3
- model.safetensors-00017-of-00094.safetensors +0 -3
- model.safetensors-00018-of-00094.safetensors +0 -3
- model.safetensors-00019-of-00094.safetensors +0 -3
- model.safetensors-00020-of-00094.safetensors +0 -3
- model.safetensors-00021-of-00094.safetensors +0 -3
- model.safetensors-00022-of-00094.safetensors +0 -3
- model.safetensors-00023-of-00094.safetensors +0 -3
- model.safetensors-00024-of-00094.safetensors +0 -3
- model.safetensors-00025-of-00094.safetensors +0 -3
- model.safetensors-00026-of-00094.safetensors +0 -3
- model.safetensors-00027-of-00094.safetensors +0 -3
- model.safetensors-00028-of-00094.safetensors +0 -3
- model.safetensors-00029-of-00094.safetensors +0 -3
- model.safetensors-00030-of-00094.safetensors +0 -3
- model.safetensors-00031-of-00094.safetensors +0 -3
- model.safetensors-00032-of-00094.safetensors +0 -3
- model.safetensors-00033-of-00094.safetensors +0 -3
- model.safetensors-00034-of-00094.safetensors +0 -3
- model.safetensors-00035-of-00094.safetensors +0 -3
- model.safetensors-00036-of-00094.safetensors +0 -3
- model.safetensors-00037-of-00094.safetensors +0 -3
- model.safetensors-00038-of-00094.safetensors +0 -3
- model.safetensors-00039-of-00094.safetensors +0 -3
- model.safetensors-00040-of-00094.safetensors +0 -3
- model.safetensors-00041-of-00094.safetensors +0 -3
- model.safetensors-00042-of-00094.safetensors +0 -3
.gitattributes
CHANGED

@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 tokenizer.json filter=lfs diff=lfs merge=lfs -text
+proof_napalm.png filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED

@@ -1,110 +1,85 @@
 ---
 license: other
-license_name:
-license_link: LICENSE
+license_name: tongyi-qianwen
+license_link: https://huggingface.co/Qwen/Qwen3-235B-A22B/blob/main/LICENSE
+base_model:
+- mlx-community/Qwen3.5-397B-A17B-4bit
 language:
 - en
-- zh
 tags:
-- abliteration
-- safety-research
 - mlx
-
+- abliterated
+- uncensored
+- qwen3
 - moe
-- reap
-- crack
-library_name: mlx
-pipeline_tag: text-generation
-base_model: Qwen/Qwen3.5-397B-A17B
-extra_gated_heading: "Access Request Required"
-extra_gated_description: "This is a gated research model with modified safety behaviors. Please describe your intended use case to request access."
-extra_gated_fields:
-  Name: text
-  Organization: text
-  Country: text
-  Intended Use: text
-  I agree to use this model for research purposes only: checkbox
-  I understand this model has been abliterated and will not use it for harm: checkbox
 ---
 
-#
-
-<img src="logo.png" alt="Dealign.ai" width="180"/>
-<br/>
-<strong><a href="https://dealign.ai">Dealign.ai</a></strong>
-</p>
-
-##
-
-- **Improved perplexity** — abliteration removed a noise-adding direction, making the model *better* at language modeling
-- **Expert-pruned** — 22% fewer experts, smaller memory footprint
-- **Drop-in MLX usage** — loads with standard `mlx_lm.load()`, no additional setup
+# Qwen 3.5 397B-A17B — REAP-CRACK (4-bit MLX)
+
+> **Abliterated** variant of Qwen 3.5 397B MoE with permanent refusal removal via weight surgery.
+
+## What Is This?
+
+This is [Qwen 3.5 397B-A17B](https://huggingface.co/Qwen/Qwen3-235B-A22B) (4-bit quantized for MLX) with **permanent abliteration** — the model's refusal behavior has been surgically removed at the weight level. No custom model files, no runtime hooks, no steering vectors. Just a standard MLX model that runs at full speed.
+
+### Key Specs
+
+| Metric | Value |
+|--------|-------|
+| **Architecture** | Qwen 3.5 MoE (397B total, 17B active) |
+| **Quantization** | 4-bit, group_size=64, affine mode |
+| **Speed** | ~37 tok/s on Mac Studio M2 Ultra (256GB) |
+| **Surgery Layers** | L27 + L31 `self_attn.o_proj` (full attention layers) |
+| **Surgery Strength** | s=10 (fresh Q4 quantization) |
+| **Custom model.py** | ❌ None needed — uses built-in `qwen3_5.py` |
+
+## Proof It Works
+
+
+
+*1166 tokens at 37.2 t/s — full compliance with no refusal, running natively in vMLX on Mac Studio.*
+
+## How It Was Made
+
+This model uses **CRACK** (Controlled Refusal Ablation via Calibrated Knockouts) — a research tool for removing refusal behavior from quantized LLMs.
+
+### Technical Details
+
+1. **Refusal vector extraction** at Layer 28 (post-SSM, where the refusal signal consolidates in Qwen 3.5's hybrid GatedDeltaNet architecture)
+2. **Weight surgery**: `W' = W - s × v @ (vᵀ @ W)` applied to `o_proj` at L27 + L31 (full attention layers — no SSM bypass channel); a hedged sketch of this step follows the list
+3. **Fresh Q4 quantization**: surgery performed on FP16 weights, then re-quantized to Q4 with `mx.quantize()` computing new optimal scales/biases
+4. **Binary shard patching**: modified tensor data injected directly into the original safetensors binary format, preserving all metadata, tensor ordering, and bf16 dtypes for maximum inference speed
+
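The sketch below illustrates steps 2 through 4 under stated assumptions; it is not the CRACK tooling itself. `w_fp16` (a dequantized FP16 `o_proj` weight), `v` (a refusal direction), and `tensor_name` are hypothetical inputs, and the in-place patch only works because the re-quantized tensor keeps exactly the same byte length as the original.

```python
import json
import struct

import mlx.core as mx


def ablate_and_requantize(w_fp16: mx.array, v: mx.array, s: float = 10.0,
                          group_size: int = 64, bits: int = 4):
    # Step 2: rank-1 surgery W' = W - s * v @ (v^T @ W) on the FP16 weight.
    v = v / mx.linalg.norm(v)       # ensure the refusal direction is unit-norm
    v = v[:, None]                  # column vector, shape (d_out, 1)
    w_prime = w_fp16 - s * (v @ (v.T @ w_fp16))
    # Step 3: fresh Q4 quantization; scales/biases are recomputed for W'
    # rather than inherited from the original quantized checkpoint.
    return mx.quantize(w_prime, group_size=group_size, bits=bits)


def patch_tensor_in_shard(shard_path: str, tensor_name: str, new_bytes: bytes):
    # Step 4: overwrite one tensor's raw bytes inside a .safetensors shard.
    # Layout: 8-byte little-endian header length, JSON header, then raw data.
    with open(shard_path, "r+b") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
        begin, end = header[tensor_name]["data_offsets"]  # relative to data start
        assert end - begin == len(new_bytes), "patch must preserve byte length"
        f.seek(8 + header_len + begin)
        f.write(new_bytes)
```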
+### Why These Specific Layers?
+
+Qwen 3.5 uses a **hybrid SSM/attention** architecture. Every 4th layer is full attention; the rest are GatedDeltaNet (SSM). The refusal signal can bypass residual-stream interventions via the SSM recurrent state. L27 and L31 are full attention layers that bracket the critical L28 refusal consolidation point — surgery here cannot be routed around.
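As a minimal check of the bracketing claim (assuming 0-indexed layers with the first full attention layer at index 3, matching the `fa_idx = full_attention_interval - 1` convention in the removed custom model.py):

```python
full_attention_interval = 4  # every 4th layer is full attention

def is_full_attention(i: int) -> bool:
    return i % full_attention_interval == full_attention_interval - 1

assert is_full_attention(27) and is_full_attention(31)       # the surgery layers
assert not any(is_full_attention(i) for i in (28, 29, 30))   # SSM layers in between
```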
 
 ## Usage
 
+### With mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model,
-
-prompt = tok.apply_chat_template(
+model, tokenizer = load("dealignai/Qwen3.5-397B-A17B-REAP-CRACK")
+prompt = tokenizer.apply_chat_template(
     [{"role": "user", "content": "Your prompt here"}],
     add_generation_prompt=True, tokenize=False, enable_thinking=False
 )
-response = generate(model,
+response = generate(model, tokenizer, prompt=prompt, max_tokens=500)
 print(response)
 ```
 
+### With vMLX
+
+Point vMLX to this model directory. It will auto-detect as `qwen3_5_moe` and load via the optimized built-in path.
 
-##
-
-| Property | Value |
-|----------|-------|
-| Base Model | Qwen/Qwen3.5-397B-A17B |
-| Parameters | 397B total, 17B active per forward pass |
-| Architecture | Hybrid GatedDeltaNet + Full Attention MoE |
-| Experts | 397/layer (REAP-pruned from 512) |
-| Quantization | 4-bit mixed precision |
-| Model Size | ~171GB |
-| Abliteration | Architecture-aware weight surgery (CRACK) |
-
-## About the Technique
-
-The abliteration leverages architectural properties unique to hybrid SSM/attention models. Standard abliteration techniques (CAA, orthogonal projection, etc.) are ineffective on this architecture due to the presence of multiple information pathways that can bypass residual-stream interventions.
-
-Our technique — developed over 71 experiments across 16 intervention paradigms — identifies and exploits the specific layers and weight matrices where the safety signal is most vulnerable, while avoiding the quantization noise floor that prevents standard weight surgery at 4-bit precision.
-
-Full research details are maintained privately.
-
-## About Dealign.ai
-
-## Disclaimer
-
+## Base Model
+
+Based on [mlx-community/Qwen3.5-397B-A17B-4bit](https://huggingface.co/mlx-community/Qwen3.5-397B-A17B-4bit) with expert pruning (REAP — Routing-Efficient Adaptive Pruning).
+
+## Research
+
+This model is part of ongoing research into alignment removal techniques for large language models. See the [CRACK project](https://github.com/exploitbot/CRACK_abliteration) for details.
+
+## ⚠️ Disclaimer
+
+This model has had safety guardrails removed. It will comply with requests that the base model would refuse. Use responsibly and in accordance with applicable laws.
__pycache__/model.cpython-314.pyc
DELETED

Binary file (17.5 kB)
config.json
CHANGED

@@ -2,8 +2,22 @@
   "architectures": [
     "Qwen3_5MoeForConditionalGeneration"
   ],
+  "eos_token_id": [
+    248046,
+    248044
+  ],
   "image_token_id": 248056,
   "model_type": "qwen3_5_moe",
+  "quantization": {
+    "group_size": 64,
+    "bits": 4,
+    "mode": "affine"
+  },
+  "quantization_config": {
+    "group_size": 64,
+    "bits": 4,
+    "mode": "affine"
+  },
   "text_config": {
     "attention_bias": false,
     "attention_dropout": 0.0,

@@ -114,43 +128,6 @@
   "tie_word_embeddings": false,
   "transformers_version": "4.57.0.dev0",
   "video_token_id": 248057,
-  "vision_config": {
-    "deepstack_visual_indexes": [],
-    "depth": 27,
-    "hidden_act": "gelu_pytorch_tanh",
-    "hidden_size": 1152,
-    "in_channels": 3,
-    "initializer_range": 0.02,
-    "intermediate_size": 4304,
-    "model_type": "qwen3_5_moe",
-    "num_heads": 16,
-    "num_position_embeddings": 2304,
-    "out_hidden_size": 4096,
-    "patch_size": 16,
-    "spatial_merge_size": 2,
-    "temporal_patch_size": 2
-  },
   "vision_end_token_id": 248054,
-  "vision_start_token_id": 248053
-  "quantization": {
-    "group_size": 64,
-    "bits": 4,
-    "mode": "affine"
-  },
-  "quantization_config": {
-    "group_size": 64,
-    "bits": 4,
-    "mode": "affine"
-  },
-  "model_file": "model.py",
-  "crack_steering": {
-    "layers": [
-      28,
-      29
-    ],
-    "alpha": 4.0,
-    "vector_file": "steering_direction.safetensors",
-    "method": "contrastive_activation_addition",
-    "note": "Built-in CAA steering \u2014 no external code needed"
-  }
+  "vision_start_token_id": 248053
 }
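For reference, an illustrative sketch (not from this repo) of what the added `quantization` block means in MLX terms: affine mode stores a 4-bit code per weight plus one scale and bias per group of 64, so each group dequantizes as `w ≈ scales * q + biases`.

```python
import mlx.core as mx

# Round-trip a random matrix through the settings config.json declares.
w = mx.random.normal((128, 128))
w_q, scales, biases = mx.quantize(w, group_size=64, bits=4)
w_hat = mx.dequantize(w_q, scales, biases, group_size=64, bits=4)
print(mx.abs(w - w_hat).max())  # small residual: the per-group quantization error
```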
logo.png
DELETED

Binary file (11.2 kB)

logo.svg
DELETED

model-steering-v1-backup.bak
DELETED

Binary file (16.5 kB)
model.py
DELETED

@@ -1,331 +0,0 @@
-# Copyright © 2026 Apple Inc. (base model)
-# CRACK-REAP Steering Extension by CRACK Research Team
-#
-# This is a custom model.py that extends Qwen3_5MoeForConditionalGeneration
-# with FP16 weight surgery at L27+L31 o_proj (Finding #70).
-#
-# Ablated FP16 o_proj weights are loaded from 'surgery/ablated_oproj.safetensors'
-# in the model directory. Surgery removes the refusal direction permanently
-# from the attention output projection weights. No runtime steering needed.
-#
-# Usage: Just load normally with mlx_lm.load("/path/to/model/")
-# The steering is applied automatically during inference.
-
-from dataclasses import dataclass, field
-from typing import Any, Dict, List, Optional, Tuple, Union
-
-import mlx.core as mx
-import mlx.nn as nn
-import numpy as np
-from mlx.utils import tree_flatten, tree_unflatten
-from pathlib import Path
-import os
-
-from mlx_lm.models.base import (
-    BaseModelArgs,
-    create_attention_mask,
-    create_ssm_mask,
-)
-from mlx_lm.models.cache import ArraysCache, KVCache
-from mlx_lm.models.qwen3_next import Qwen3NextAttention as Attention
-from mlx_lm.models.qwen3_next import Qwen3NextMLP as MLP
-from mlx_lm.models.qwen3_next import Qwen3NextRMSNormGated as RMSNormGated
-from mlx_lm.models.qwen3_next import Qwen3NextSparseMoeBlock as SparseMoeBlock
-
-# =====================================================================
-# Steering Configuration — baked into the model
-# =====================================================================
-SURGERY_LAYERS = [27, 31]  # Layers with FP16 ablated o_proj
-SURGERY_STRENGTH = 5  # 5=quality (10/12, PPL-12), 10=max (11/12, PPL-14)
-STEERING_ALPHA = 0.0  # Disabled — surgery handles abliteration permanently
-
-
-# =====================================================================
-# Model Args (identical to base qwen3_5_moe)
-# =====================================================================
-
-@dataclass
-class TextModelArgs(BaseModelArgs):
-    model_type: str = ""
-    hidden_size: int = 4096
-    intermediate_size: int = 14336
-    num_hidden_layers: int = 32
-    num_attention_heads: int = 32
-    rms_norm_eps: float = 1e-6
-    vocab_size: int = 151936
-    num_key_value_heads: int = 8
-    max_position_embeddings: int = 131072
-    linear_num_value_heads: int = 64
-    linear_num_key_heads: int = 16
-    linear_key_head_dim: int = 192
-    linear_value_head_dim: int = 128
-    linear_conv_kernel_dim: int = 4
-    tie_word_embeddings: bool = False
-    attention_bias: bool = False
-    head_dim: Optional[int] = None
-    full_attention_interval: int = 4
-
-    num_experts: int = 0
-    num_experts_per_tok: int = 0
-    decoder_sparse_step: int = 1
-    shared_expert_intermediate_size: int = 0
-    moe_intermediate_size: int = 0
-    norm_topk_prob: bool = True
-
-    rope_parameters: Optional[Dict[str, Union[float, str, bool, List[int]]]] = field(
-        default_factory=lambda: {
-            "type": "default",
-            "mrope_section": [11, 11, 10],
-            "rope_theta": 100000,
-            "partial_rotary_factor": 0.25,
-        }
-    )
-
-    partial_rotary_factor: float = 0.25
-    rope_theta: float = 100000.0
-    rope_scaling: Optional[Dict[str, Union[float, str]]] = None
-
-    def __post_init__(self):
-        if self.head_dim is None:
-            self.head_dim = self.hidden_size // self.num_attention_heads
-        if self.rope_parameters:
-            if "type" not in self.rope_parameters and "rope_type" in self.rope_parameters:
-                self.rope_parameters["type"] = self.rope_parameters.pop("rope_type")
-            self.partial_rotary_factor = self.rope_parameters.get("partial_rotary_factor", 0.25)
-            self.rope_theta = self.rope_parameters.get("rope_theta", 100000.0)
-            self.rope_scaling = self.rope_parameters
-
-
-# =====================================================================
-# Import remaining components from qwen3_5 (unchanged)
-# =====================================================================
-from mlx_lm.models.qwen3_5 import (
-    GatedDeltaNet,
-    DecoderLayer,
-)
-
-
-# =====================================================================
-# Steered Text Model — the key change
-# =====================================================================
-
-class SteeredQwen3_5TextModel(nn.Module):
-    """Qwen3_5TextModel with built-in CAA steering at layers 28-29."""
-
-    def __init__(self, args: TextModelArgs):
-        super().__init__()
-        self.embed_tokens = nn.Embedding(args.vocab_size, args.hidden_size)
-        self.layers = [
-            DecoderLayer(args=args, layer_idx=i)
-            for i in range(args.num_hidden_layers)
-        ]
-        self.norm = nn.RMSNorm(args.hidden_size, eps=args.rms_norm_eps)
-        self.ssm_idx = 0
-        self.fa_idx = args.full_attention_interval - 1
-
-        # Steering direction — loaded from steering_direction.safetensors
-        # Shape: [1, 1, hidden_size] for broadcasting
-        self.steering_direction = mx.zeros((1, 1, args.hidden_size))
-        self.steering_alpha = STEERING_ALPHA
-        self.steering_layers = set(SURGERY_LAYERS)
-
-        # FP16 Surgery: load ablated o_proj weights and replace QuantizedLinear
-        self._surgery_loaded = False
-
-    def _load_surgery(self):
-        """Load FP16 ablated o_proj weights and replace QuantizedLinear layers."""
-        if self._surgery_loaded:
-            return
-        import glob
-        # Find model directory from the embed_tokens weight location
-        # Surgery file is co-located with safetensors shards
-        model_dir = None
-        for p in ["/Volumes/EricsLLMDrive/Qwen3.5-397B-A17B-CRACK-V5-4bit",
-                  str(Path(__file__).parent)]:
-            sf = os.path.join(p, "surgery/ablated_oproj.safetensors")
-            if os.path.exists(sf):
-                model_dir = p
-                break
-        if model_dir is None:
-            print("[CRACK] No surgery file found, using activation steering fallback")
-            self.steering_alpha = 4.1  # Fallback to steering
-            self._surgery_loaded = True
-            return
-        surgery_path = os.path.join(model_dir, "surgery/ablated_oproj.safetensors")
-        surgery_w = mx.load(surgery_path)
-        s_prefix = f"s{SURGERY_STRENGTH}"
-        for layer_idx in SURGERY_LAYERS:
-            key = f"{s_prefix}.layers.{layer_idx}.self_attn.o_proj.weight"
-            if key not in surgery_w:
-                print(f"[CRACK] WARNING: {key} not in surgery file")
-                continue
-            w_fp16 = surgery_w[key]
-            new_linear = nn.Linear(w_fp16.shape[1], w_fp16.shape[0], bias=False)
-            new_linear.weight = w_fp16
-            self.layers[layer_idx].self_attn.o_proj = new_linear
-            print(f"[CRACK] L{layer_idx} o_proj: Q4 -> FP16 surgery (s={SURGERY_STRENGTH})")
-        self._surgery_loaded = True
-
-    def __call__(
-        self,
-        inputs: mx.array,
-        cache: Optional[Any] = None,
-        input_embeddings: Optional[mx.array] = None,
-    ) -> mx.array:
-        # Auto-load surgery on first call (after model weights are loaded)
-        if not self._surgery_loaded:
-            self._load_surgery()
-
-        if input_embeddings is not None:
-            hidden_states = input_embeddings
-        else:
-            hidden_states = self.embed_tokens(inputs)
-
-        if cache is None:
-            cache = [None] * len(self.layers)
-
-        fa_mask = create_attention_mask(hidden_states, cache[self.fa_idx])
-        ssm_mask = create_ssm_mask(hidden_states, cache[self.ssm_idx])
-
-        for i, (layer, c) in enumerate(zip(self.layers, cache)):
-            mask = ssm_mask if layer.is_linear else fa_mask
-            hidden_states = layer(hidden_states, mask=mask, cache=c)
-
-            # ============ CAA STEERING ============
-            # Apply contrastive activation steering after target layers.
-            # This is the proven 6/6 compliance operation:
-            #   h_new = h - α * (v^T · h) * v
-            # Applied at every token, every generation step.
-            if i in self.steering_layers:
-                proj = mx.sum(
-                    hidden_states * self.steering_direction,
-                    axis=-1, keepdims=True
-                )
-                hidden_states = hidden_states - self.steering_alpha * proj * self.steering_direction
-            # ======================================
-
-        return self.norm(hidden_states)
-
-
-class TextModel(nn.Module):
-    def __init__(self, args: TextModelArgs):
-        super().__init__()
-        self.args = args
-        self.model_type = args.model_type
-        self.model = SteeredQwen3_5TextModel(args)
-        if not args.tie_word_embeddings:
-            self.lm_head = nn.Linear(args.hidden_size, args.vocab_size, bias=False)
-
-    def __call__(
-        self,
-        inputs: mx.array,
-        cache: Optional[Any] = None,
-        input_embeddings: Optional[mx.array] = None,
-    ) -> mx.array:
-        out = self.model(inputs, cache, input_embeddings=input_embeddings)
-        if self.args.tie_word_embeddings:
-            out = self.model.embed_tokens.as_linear(out)
-        else:
-            out = self.lm_head(out)
-        return out
-
-    @property
-    def layers(self):
-        return self.model.layers
-
-    @property
-    def head_dim(self):
-        return self.args.head_dim
-
-    def make_cache(self):
-        return [ArraysCache(size=2) if l.is_linear else KVCache() for l in self.layers]
-
-    def sanitize(self, weights):
-        # TextModel just passes through — sanitization done at Model level
-        return weights
-
-
-# =====================================================================
-# Public API (required by mlx_lm)
-# =====================================================================
-
-@dataclass
-class ModelArgs(BaseModelArgs):
-    model_type: str = ""
-    text_config: dict = field(default_factory=dict)
-
-    @classmethod
-    def from_dict(cls, params):
-        if "text_config" not in params:
-            return cls(model_type=params["model_type"], text_config=params)
-        return super().from_dict(params)
-
-
-class Model(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        if isinstance(config, ModelArgs):
-            text_config = config.text_config
-        else:
-            text_config = config.get("text_config", config)
-
-        self.language_model = TextModel(TextModelArgs.from_dict(text_config))
-        self.config = text_config
-
-    def __call__(
-        self,
-        inputs: mx.array,
-        cache: Optional[Any] = None,
-        input_embeddings: Optional[mx.array] = None,
-        **kwargs,
-    ) -> mx.array:
-        return self.language_model(inputs, cache, input_embeddings=input_embeddings)
-
-    @property
-    def layers(self):
-        return self.language_model.model.layers
-
-    @property
-    def head_dim(self):
-        return self.language_model.head_dim
-
-    @property
-    def n_kv_heads(self):
-        return self.language_model.args.num_key_value_heads
-
-    def make_cache(self):
-        return self.language_model.make_cache()
-
-    def sanitize(self, weights):
-        weights = tree_unflatten(list(weights.items()))
-        weights = dict(tree_flatten(weights))
-
-        new_weights = {}
-        for key, value in weights.items():
-            # Handle steering direction
-            if "steering_direction" in key:
-                new_weights[key] = value
-                continue
-            if key.startswith("model.visual"):
-                continue
-            if key.startswith("model.language_model"):
-                key = key.replace("model.language_model", "language_model.model")
-            elif key.startswith("language_model."):
-                pass
-            else:
-                key = "language_model." + key
-            new_weights[key] = value
-
-        for l in range(self.language_model.args.num_hidden_layers):
-            prefix = f"language_model.model.layers.{l}.mlp"
-            gate_up_key = f"{prefix}.experts.gate_up_proj"
-            if gate_up_key in new_weights:
-                gate_up = new_weights.pop(gate_up_key)
-                mid = gate_up.shape[-2] // 2
-                new_weights[f"{prefix}.switch_mlp.gate_proj.weight"] = gate_up[..., :mid, :]
-                new_weights[f"{prefix}.switch_mlp.up_proj.weight"] = gate_up[..., mid:, :]
-                new_weights[f"{prefix}.switch_mlp.down_proj.weight"] = new_weights.pop(
-                    f"{prefix}.experts.down_proj"
-                )
-
-        return new_weights
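For reference, the CAA steering operation documented in the removed model.py above is a per-token projection removal; a tiny standalone check (illustrative only, with random stand-ins for the hidden states and steering direction) shows that alpha = 1 removes the component of `h` along `v`:

```python
import mlx.core as mx

v = mx.random.normal((1, 1, 8))
v = v / mx.linalg.norm(v)                        # unit steering direction
h = mx.random.normal((2, 3, 8))                  # (batch, seq, hidden) stand-in
proj = mx.sum(h * v, axis=-1, keepdims=True)     # (v^T . h) per position
h_new = h - 1.0 * proj * v                       # h - alpha * proj * v, alpha = 1
print(mx.abs(mx.sum(h_new * v, axis=-1)).max())  # ~0: no v-component remains
```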
model.safetensors-00001-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:4fd9489caf88c267ab766c4b055f4fbccdd7bc9a02e437cd5b9c502abaafac8d
-size 1873281886

model.safetensors-00002-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e70782263c4cbfb6e1e53d77bf573e7a985744daaab60e1c9dc43dd020a3bbc2
-size 1873281886

model.safetensors-00003-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6b88c2bbc3548ac3a1351519e55507870e8acce4f72267e118a5c2d1ad523117
-size 1873281886

model.safetensors-00004-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:1f3410d449acd3b1f482f96400ef741d6e663b3a08dc8fa19356afa41f61116e
-size 1873281880

model.safetensors-00005-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:1dd6176990d66c6a6c3c18f0b8d29664f6123b26001022325c540963fe91ba18
-size 1873281880

model.safetensors-00006-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d6ccc0a038a141a87d4684b69b94a01be648ab44f9c8b9640d40f5e2583d1f10
-size 1873281886

model.safetensors-00007-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:cc5c7a1d60da308629de9b53171dd82041576ee1c853b4dfe13abf03a55f91a2
-size 1873281886

model.safetensors-00008-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:53f3f3d92b3973c32fe9c77d7cecef88e46bc6d9227df4ee8c6620e0d05fba6e
-size 1873281886

model.safetensors-00009-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:87001a73bf469a0c2ee22d0a82070529cb88761b6f21f83097d91b07b36e48b8
-size 1873281888

model.safetensors-00010-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:a712c5c1aec49c4ed6eb14f1b2a638b67ef2b436407adbd5a2e5c07923df87ed
-size 1873281890

model.safetensors-00011-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6180741be19531e04322b9fe2df90f4a3d626de968a9503c3522e9e9bb722134
-size 1873281886

model.safetensors-00012-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:3f7f060173f4365a5abc5e1938c994ffe6e178c8535b17cd0045f92c9a1e4949
-size 1873281886

model.safetensors-00013-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:8e3706807663f41abddb5d69d71039ec29462a164c19891b968336e180e9e2ee
-size 1873281886

model.safetensors-00014-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6065007821c89e6aa632ca6c1b4bdd3066bdf35f82def5ca041f3a56bf9b3563
-size 1873281888

model.safetensors-00015-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:44d273f642424deeac5439c2d9667f0fde423d5c209880b9fde7377e55272790
-size 1873281886

model.safetensors-00016-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:4ea31d6b82d66a98240ca3993d942e1254dfaecf4870a28e4a9ced1886f769a0
-size 1873281880

model.safetensors-00017-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6b2565b899bb52946a4f27d398f77baf0e5e615be3beaf5e0853cd47ddccb7b0
-size 1873281888

model.safetensors-00018-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:c1de4f1bd40d44a17cd4333c3e9af5341be93453d7079157ddf3acba32037043
-size 1873281880

model.safetensors-00019-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:2e2cf6eed761c74dd57ee7e22ed1cf0116439b4ba38b4392b7a52887aba5cc60
-size 1873281882

model.safetensors-00020-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:012e08a7cde12ffeb64fcc847a0a7bb3f2cc1a5bfb1efe34e11f7070a1e01cc2
-size 1873281886

model.safetensors-00021-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:4193ccfcf4ce7598de33fa4086c82106016ff596075dc7a483249747db4a665d
-size 1873281886

model.safetensors-00022-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6fb364d58f5982be87b927474893dce4d9945e3ec10bf6d23c33abe3a747c923
-size 1873281890

model.safetensors-00023-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:34f6bed657bbd8afd462e642061abd16c4387c0391c995ebcc9c359b8c1213be
-size 1873281886

model.safetensors-00024-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:938e72860c920fe7b0e2863c767b34379e3ef7c3ce0b537b231d31e801d2f5ac
-size 1873281886

model.safetensors-00025-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:845a2c39c5ebfd6e6675c719f593aa386502d243d014dc5375d4c5490968f8a9
-size 1873281886

model.safetensors-00026-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e5b214e6995508a6848b0b7cabe9b34acbe0778e7b39d70107505de444861023
-size 1873281886

model.safetensors-00027-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:07691a0946c0127bedfd0bf291c4d77351e5b177d00f279db47d1bfbf90f970b
-size 1873281886

model.safetensors-00028-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ea012d4f1dcd54ccc890827e6c56fa7cebc1418f5f6629307dcc87c74366c57a
-size 1873281880

model.safetensors-00029-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f6f7a8327e4ceebd546a43e80c07fad014107fbbc9509c904b85a5400e43d815
-size 1873281880

model.safetensors-00030-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:3529b734384af722552340df5d864a72a0a1b3ef5a3311704050da0eed6aaecb
-size 1873281886

model.safetensors-00031-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:856cc39b31617f34cf4c6b34658638e995ab8169377b911c57840c9f6603f00c
-size 1873281880

model.safetensors-00032-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:67f51aa413fd144cd87e7b0b62c3e2b7ddde987f6658aacbf73f87777fceea07
-size 1873281886

model.safetensors-00033-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:409e04ca2c0d4841347168d86fc4ff6d552cfa8de3f33eb52a8ab9608608c8f3
-size 1873281886

model.safetensors-00034-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:39646063b59b8a7d33b58771468f01449990bd84266111c829f9e24c19cbc814
-size 1873281890

model.safetensors-00035-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:81d5d197efb505a786c8033dcdc51a4df903ce1000da9df49fca75a4ad9a5b22
-size 1873281880

model.safetensors-00036-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:7da8b293e041bed07abf82f40d4e3ca3c0537f2569a540c8e6630be3097ab65f
-size 1873281886

model.safetensors-00037-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:eb5cb8c8d7b4d56b04fa5b38c7b31da7e7f1a283b003ce2d6f6ec1c2062e41bd
-size 1873281890

model.safetensors-00038-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:879bfd1f58659d9a8786f59bbb25800f7d51e0ebf1676905acd6b5390b42880b
-size 1873281886

model.safetensors-00039-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:0a0944911e92352bf5cd50b5726dce9f25306216267fd87f7b630286dc75a314
-size 1873281886

model.safetensors-00040-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:fbaae2eb04be04de9a27505d6d2a72516c1cfab808b9c4355277bc8baa7fa055
-size 1873281888

model.safetensors-00041-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:330e34d7097a2b871e2a482482e49da929c36ea96e15a2b4b768c3958f87d63e
-size 1873281886

model.safetensors-00042-of-00094.safetensors
DELETED

@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:73e4f5e5bc6d10c44e0058368553f8096c63073d8ef671f0f1621d71668c0b6a
-size 1873281886