LLM Research
Is Multilingual LLM Watermarking Truly Multilingual? A Simple Back-Translation Solution
Paper • 2510.18019 • Published • 18
PORTool: Tool-Use LLM Training with Rewarded Tree
Paper • 2510.26020 • Published • 5
POWSM: A Phonetic Open Whisper-Style Speech Foundation Model
Paper • 2510.24992 • Published • 4
Ming-Flash-Omni: A Sparse, Unified Architecture for Multimodal Perception and Generation
Paper • 2510.24821 • Published • 41
Generalization or Memorization: Dynamic Decoding for Mode Steering
Paper • 2510.22099 • Published • 4
Omni-Reward: Towards Generalist Omni-Modal Reward Modeling with Free-Form Preferences
Paper • 2510.23451 • Published • 28
ARC-Encoder: learning compressed text representations for large language models
Paper • 2510.20535 • Published • 8
Continuous Autoregressive Language Models
Paper • 2510.27688 • Published • 74
Can Visual Input Be Compressed? A Visual Token Compression Benchmark for Large Multimodal Models
Paper • 2511.02650 • Published • 10
RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale
Paper • 2505.03005 • Published • 36
Titans: Learning to Memorize at Test Time
Paper • 2501.00663 • Published • 31
DualPath: Breaking the Storage Bandwidth Bottleneck in Agentic LLM Inference
Paper • 2602.21548 • Published • 50
In-Context Reinforcement Learning for Tool Use in Large Language Models
Paper • 2603.08068 • Published • 43
OpenClaw-RL: Train Any Agent Simply by Talking
Paper • 2603.10165 • Published • 151
Pushing the Frontier of Audiovisual Perception with Large-Scale Multimodal Correspondence Learning
Paper • 2512.19687 • Published • 3
GradMem: Learning to Write Context into Memory with Test-Time Gradient Descent
Paper • 2603.13875 • Published • 35