Bug fixes for EXAONE across TensorRT-LLM, SGLang, llama.cpp, DeepSpeed, and Transformers

#3
by Bias92 - opened

Hi EXAONE team,

I've been systematically auditing EXAONE model implementations across major inference frameworks and submitted fixes for bugs I found. Here's a summary:

1. NVIDIA/TensorRT-LLM #11862 βœ… Merged

  • File: exaone_moe_weight_mapper.py
  • Bug: preprocess_weights() iterates over weights.keys() (a live view) while calling pop() and inserting new keys β€” mutating a dict during iteration can raise RuntimeError or silently skip/revisit keys after an internal rehash, corrupting the MTP weight remapping
  • Fix: iterate over a snapshot instead: list(weights.keys())
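A minimal sketch of the pattern and the fix (the function name and prefixes below are hypothetical stand-ins, not the actual TensorRT-LLM mapper code):

```python
def remap_weights(weights, prefix_old="mtp.", prefix_new="draft."):
    """Rename keys starting with prefix_old to prefix_new."""
    # Unsafe variant (the bug): iterating the live weights.keys() view
    # while pop()/insert mutates the dict -- CPython may raise
    # RuntimeError or skip/revisit keys after an internal rehash:
    #
    #   for name in weights.keys():
    #       if name.startswith(prefix_old):
    #           weights[prefix_new + name[len(prefix_old):]] = weights.pop(name)

    # The fix: snapshot the keys first so mutation cannot affect iteration.
    for name in list(weights.keys()):
        if name.startswith(prefix_old):
            weights[prefix_new + name[len(prefix_old):]] = weights.pop(name)
    return weights
```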

2. DeepSpeed #7853 βœ… Merged

  • EXAONE 4.0 support for DeepSpeed

3. LGAI-EXAONE Discussion #10 βœ… Resolved

  • EXAONE 3.5 Transformers v5 compatibility fix (_tied_weights_keys type mismatch, DynamicCache API changes)

4. sgl-project/sglang #19789 (Under Review)

  • File: exaone4.py
  • Bug: In forward_split_prefill, aliased tensors are passed to fused_add_rmsnorm. EXAONE4 uses post-LN, so hidden_states and residual point to the same tensor object; the kernel's in-place residual += hidden_states then yields 2*hidden_states (undefined behavior for the CUDA kernel)
  • Fix: Use single-arg self.model.norm(hidden_states) to match the regular forward path
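The aliasing hazard can be shown with a pure-Python stand-in for the kernel's in-place add (the real fused_add_rmsnorm operates in place on CUDA tensors; this function only models that step):

```python
def fused_add_inplace(hidden, residual):
    """Models the in-place 'residual += hidden' step of a fused add+norm kernel."""
    for i in range(len(residual)):
        residual[i] += hidden[i]  # in-place update of residual
    return residual

x = [1.0, 2.0, 3.0]
# Aliased call: hidden and residual are the SAME object, so every element
# reads its own (partially updated) buffer and ends up doubled -- the
# post-LN path described above hits exactly this case.
fused_add_inplace(x, x)
# x is now [2.0, 4.0, 6.0], i.e. 2*hidden rather than hidden + residual
```

With distinct buffers the same function computes the intended sum, which is why the bug only surfaces on the aliased post-LN path.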

5. ggml-org/llama.cpp #20076 (Under Review)

  • File: tensor_mapping.py
  • Bug: Dead EXAONE3 FFN_DOWN mapping entry with the incorrect prefix model.layers.h.{bid} (EXAONE3 checkpoints use transformer.h.{bid}), so the entry can never match
  • Fix: Remove dead entry, add "exaone" tag to existing correct mapping
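A rough sketch of why the entry was dead (templates and the tensor-name suffix below are illustrative, not the real tensor_mapping.py entries): llama.cpp's converter matches HF tensor names by formatting per-architecture templates with the layer index {bid}.

```python
def matches(template: str, name: str, bid: int) -> bool:
    """Check whether a mapping template matches a checkpoint tensor name."""
    return template.format(bid=bid) == name

# EXAONE3 checkpoints use the transformer.h.{bid} prefix, so a template
# written with model.layers.h.{bid} can never match any real tensor.
DEAD_ENTRY = "model.layers.h.{bid}.mlp.c_proj"  # wrong prefix (the bug)
CORRECT    = "transformer.h.{bid}.mlp.c_proj"   # matching prefix

name = "transformer.h.0.mlp.c_proj"  # illustrative EXAONE3-style name
```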

Happy to help with any other EXAONE-related issues across the inference ecosystem!
