
Alibaba Launches Qwen3-Coder and Qwen3.5-Omni: Most Agentic Open-Weight Code Model and 113-Language Multimodal Voice AI


The Alibaba Qwen team made a dual release on March 30, 2026 that significantly advances both AI coding agents and multimodal voice AI.

Qwen3-Coder: The Most Agentic Open Code Model

Qwen3-Coder ships in multiple sizes, including Qwen3-Coder-480B-A35B-Instruct and Qwen3-Coder-Next (80B-A3B), built on a novel hybrid-attention mixture-of-experts (MoE) architecture. It was trained with large-scale executable task synthesis, environment interaction, and reinforcement learning. The model supports 358 programming languages with a 256K-token native context window, extendable to 1M tokens via YaRN. It works with Qwen Code, CLINE, and Claude Code through a specially designed function-call format, and achieves results comparable to Claude Sonnet on agentic coding benchmarks while remaining open-weight.
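To make the function-call integration concrete, here is a minimal sketch of the kind of OpenAI-compatible tool-calling payload that agent frontends typically send to a coding model. The endpoint, model identifier, and `run_tests` tool below are illustrative assumptions, not official Qwen values:

```python
import json

def build_agentic_request(task: str) -> dict:
    """Build a chat-completions payload declaring one tool the agent may call."""
    run_tests_tool = {
        "type": "function",
        "function": {
            "name": "run_tests",  # hypothetical tool the model could invoke
            "description": "Run the project's test suite and return its output.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "Test file or directory to run.",
                    }
                },
                "required": ["path"],
            },
        },
    }
    return {
        "model": "qwen3-coder",  # assumed model identifier
        "messages": [{"role": "user", "content": task}],
        "tools": [run_tests_tool],
        "tool_choice": "auto",  # let the model decide when to call the tool
    }

payload = build_agentic_request("Fix the failing test in tests/test_parser.py")
print(json.dumps(payload, indent=2))
```

In this format, the model answers either with plain text or with a structured tool call (here, a `run_tests` invocation) that the agent harness executes before feeding the result back.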

Qwen3.5-Omni: Voice-First Multimodal AI

Qwen3.5-Omni processes text, images, audio, and video with real-time speech output. It supports 113 languages for speech recognition (up from 19) and 36 for speech generation (up from 10), and ships in Plus, Flash, and Light variants built on a hybrid-attention MoE. Features include semantic interruption, voice cloning, and ARIA voice stability, backed by over 100M hours of multimodal pretraining data. Alibaba benchmarks it against Gemini 2.5 Pro and GPT-4o, claiming 215 state-of-the-art results.

The Thinker-Talker architecture inserts a text stage between reasoning and speech synthesis, allowing RAG, safety filters, and function calls to run before any audio is produced. Together, the dual release makes Qwen the most complete open-weight model family, covering coding agents, multimodal reasoning, and voice, a breadth unmatched by DeepSeek, Mistral, or Meta Llama.
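The value of that intermediate text stage can be sketched with a toy pipeline. This is a conceptual illustration under stated assumptions, not Qwen's implementation: the `thinker`, `safety_filter`, and `talker` functions are stand-ins showing where text-stage hooks slot in between reasoning and speech synthesis:

```python
from typing import Callable

def thinker(prompt: str) -> str:
    """Stand-in for the reasoning model: produces a text response."""
    return f"Answer to: {prompt}"

def safety_filter(text: str) -> str:
    """Text-stage hook: redact content before it is ever spoken."""
    return text.replace("secret", "[redacted]")

def talker(text: str) -> bytes:
    """Stand-in for speech synthesis: returns fake audio bytes."""
    return text.encode("utf-8")

def respond(prompt: str, hooks: list[Callable[[str], str]]) -> bytes:
    """Run reasoning, apply text-stage hooks, then synthesize speech."""
    text = thinker(prompt)
    for hook in hooks:  # RAG lookups, filters, and tool calls all fit here
        text = hook(text)
    return talker(text)

audio = respond("What is the secret code?", hooks=[safety_filter])
```

Because the hooks operate on text rather than audio, retrieval results or tool outputs can be injected, and unsafe content removed, without re-running speech synthesis.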
