FP8 Model Conversion
- Source: https://huggingface.co/Kijai/WanVideo_comfy/OneToAllAnimation
- Original File(s): Wan21-OneToAllAnimation_1_3B_v2_fp16.safetensors
- Original Format: safetensors
- FP8 Format: E5M2
- FP8 File: Wan21-OneToAllAnimation_1_3B_v2_fp16-fp8-e5m2.safetensors
Usage
```python
from safetensors.torch import load_file
import torch

# Load the FP8 state dict (tensors keep their on-disk float8_e5m2 dtype)
fp8_state = load_file("Wan21-OneToAllAnimation_1_3B_v2_fp16-fp8-e5m2.safetensors")

# load_state_dict copies each tensor into the destination parameter's dtype,
# so the FP8 weights are upcast to the model's precision automatically
model.load_state_dict(fp8_state)
```

Note: `torch.float8_e5m2` requires PyTorch ≥ 2.1.
Statistics
- Total tensors: 1329
- Converted to FP8: 1329
- Skipped (non-float): 0