Bug fixes for EXAONE across TensorRT-LLM, SGLang, llama.cpp, DeepSpeed, and Transformers
#3
by Bias92 - opened
Hi EXAONE team,
I've been systematically auditing EXAONE model implementations across major inference frameworks and submitted fixes for bugs I found. Here's a summary:
1. NVIDIA/TensorRT-LLM #11862 → Merged
   - File: `exaone_moe_weight_mapper.py`
   - Bug: `preprocess_weights()` iterates over `weights.keys()` (a live view) while calling `pop()` and inserting new keys → a bug-prone dict-mutation pattern that can cause skipped or duplicated keys during MTP weight remapping
   - Fix: iterate over a snapshot, `list(weights.keys())`
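A minimal sketch of the hazard and the fix. The weight names and the `mtp.` prefix here are hypothetical placeholders, not the real EXAONE checkpoint keys:

```python
def remap_mtp_weights(weights: dict) -> dict:
    # Snapshot the keys first: iterating the live weights.keys() view while
    # pop()-ing and inserting can raise RuntimeError ("dictionary changed size
    # during iteration") or silently skip/revisit entries once the dict resizes.
    for name in list(weights.keys()):
        if name.startswith("mtp."):  # hypothetical MTP weight prefix
            weights["draft." + name] = weights.pop(name)
    return weights

weights = {"mtp.w1": 1, "dense.w2": 2}
print(remap_mtp_weights(weights))  # {'dense.w2': 2, 'draft.mtp.w1': 1}
```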
2. DeepSpeed #7853 → Merged
   - EXAONE 4.0 support for DeepSpeed
3. LGAI-EXAONE Discussion #10 → Resolved
   - EXAONE 3.5 Transformers v5 compatibility fix (`_tied_weights_keys` type mismatch, `DynamicCache` API changes)
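A minimal, hypothetical sketch of the kind of shim such a type mismatch calls for, assuming the attribute may arrive either as a legacy list of names or as a dict; the actual Transformers v5 shapes may differ:

```python
def normalize_tied_weights_keys(tied):
    # Hypothetical compatibility shim: accept both a list-of-names form and a
    # dict form for _tied_weights_keys, returning a plain list of names.
    # (Illustrative only; not the real Transformers attribute contract.)
    if tied is None:
        return []
    if isinstance(tied, dict):
        return sorted(tied.keys())
    return list(tied)

print(normalize_tied_weights_keys({"lm_head.weight": "model.embed_tokens.weight"}))
```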
4. sgl-project/sglang #19789 (Under Review)
   - File: `exaone4.py`
   - Bug: In `forward_split_prefill`, aliased tensors are passed to `fused_add_rmsnorm` → EXAONE4 uses post-LN, where `hidden_states` and `residual` point to the same tensor object, so `residual += hidden_states` produces `2*hidden_states` (CUDA undefined behavior)
   - Fix: Use the single-argument `self.model.norm(hidden_states)` to match the regular forward path
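The aliasing effect can be reproduced with a pure-Python stand-in for the fused kernel's in-place residual accumulation (illustrative names; the real kernel operates on CUDA tensors):

```python
import math

def fused_add_rmsnorm_sketch(hidden_states, residual, eps=1e-6):
    # Stand-in for the fused kernel: residual is updated IN PLACE with
    # residual += hidden_states, then the RMS-normalized sum is returned.
    for i in range(len(residual)):
        residual[i] += hidden_states[i]
    rms = math.sqrt(sum(v * v for v in residual) / len(residual) + eps)
    return [v / rms for v in residual], residual

x = [1.0, 1.0, 1.0, 1.0]
_, res = fused_add_rmsnorm_sketch(x, x)  # aliased inputs, as in the bug
print(res)  # [2.0, 2.0, 2.0, 2.0] -- residual doubled instead of preserved
```

With distinct tensors the accumulate is correct; only the aliased post-LN path goes wrong, which is why dropping the residual argument and calling the plain norm fixes it.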
5. ggml-org/llama.cpp #20076 (Under Review)
   - File: `tensor_mapping.py`
   - Bug: Dead EXAONE3 FFN_DOWN mapping with an incorrect prefix `model.layers.h.{bid}` (should be `transformer.h.{bid}`)
   - Fix: Remove the dead entry and add the "exaone" tag to the existing correct mapping
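A simplified sketch of the mapping-table idea, showing why the dead entry never fires. The suffix `mlp.c_proj`, the helper name, and the candidate tuple are illustrative; the real `tensor_mapping.py` uses gguf's `MODEL_TENSOR` enums and per-architecture tags:

```python
# Candidate source-name patterns for FFN_DOWN (simplified, hypothetical).
FFN_DOWN_CANDIDATES = (
    "transformer.h.{bid}.mlp.c_proj",     # correct EXAONE3 prefix
    # "model.layers.h.{bid}.mlp.c_proj",  # dead entry: this prefix never
    #                                     # occurs in EXAONE3 checkpoints
)

def map_ffn_down(name: str, n_layers: int):
    # Return the gguf-side tensor name if `name` matches a known pattern.
    for bid in range(n_layers):
        for pat in FFN_DOWN_CANDIDATES:
            if name == pat.format(bid=bid):
                return f"blk.{bid}.ffn_down"
    return None

print(map_ffn_down("transformer.h.3.mlp.c_proj", 8))  # blk.3.ffn_down
```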
Happy to help with any other EXAONE-related issues across the inference ecosystem!