• Align-Then-stEer: Adapting the Vision-Language Action Models through Unified Latent Guidance — Paper 2509.02055, published Sep 2, 2025
• Steering Vision-Language-Action Models as Anti-Exploration: A Test-Time Scaling Approach — Paper 2512.02834, published Dec 2, 2025
• VLA-TTS: TACO — Collection (4 items) of models from "Steering Vision-Language-Action Models as Anti-Exploration: A Test-Time Scaling Approach". Credits to the Rhodes Team @ TeleAI.
• Towards Efficient LLM Grounding for Embodied Multi-Agent Collaboration — Paper 2405.14314, published May 23, 2024
• SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics — Paper 2506.01844, published Jun 2, 2025
• Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling — Paper 2502.06703, published Feb 10, 2025