The unfortunate thing about their Qwen3-Next naming is that it doesn't reflect the fact that the architecture is completely different from Qwen3's, even more different than Qwen3 was from Qwen2.
So support is likely to take quite some time, because it's not just regular transformer blocks stacked on top of each other but a brand new hybrid architecture that interleaves SSM (state-space model) blocks with regular attention.
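To make the structural difference concrete, here's a loose PyTorch sketch of what a hybrid stack looks like. The interleaving ratio, module choices, and sizes are all illustrative assumptions on my part, not Qwen3-Next's actual implementation:

```python
# Loose sketch of a hybrid decoder stack: most blocks use a recurrent,
# O(n) token mixer standing in for an SSM layer, while every 4th block
# uses full softmax attention. Ratio, modules, and sizes are illustrative
# assumptions, not Qwen3-Next's actual code.
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    def __init__(self, d_model: int, full_attention: bool):
        super().__init__()
        self.full_attention = full_attention
        self.norm = nn.LayerNorm(d_model)
        if full_attention:
            # Quadratic-cost softmax attention (causal mask omitted for brevity).
            self.mixer = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        else:
            # Stand-in for an SSM-style mixer: a recurrent layer whose state
            # is O(1) per token instead of a growing KV cache.
            self.mixer = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        if self.full_attention:
            h, _ = self.mixer(h, h, h, need_weights=False)
        else:
            h, _ = self.mixer(h)
        return x + h  # residual connection

# Interleaved stack: mixing two mixer types with two different
# cache/state layouts is exactly what a plain stacked-transformer
# code path has no machinery for.
d_model = 512
stack = nn.Sequential(*[HybridBlock(d_model, full_attention=(i % 4 == 3))
                        for i in range(8)])
x = torch.randn(1, 16, d_model)  # (batch, seq_len, d_model)
print(stack(x).shape)            # torch.Size([1, 16, 512])
```

The point isn't the specific modules, but that an inference engine now has to track per-layer recurrent state alongside a KV cache, which is why support is such a big lift.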
From https://github.com/ggml-org/llama.cpp/issues/15940#issuecomm...:

> This is a massive task, likely 2-3 months of full-time work for a highly specialized engineer. Until the Qwen team contributes the implementation, there are no quick fixes.
[June 2025]
Yeah, I was already using Qwen3 on MLX in July.
Old post indeed: https://x.com/Alibaba_Qwen/status/1934517774635991412
It's already supported in vLLM, SGLang and MLX.
The Qwen team made sure to land PRs in vLLM and SGLang on day one, which is nice.
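For anyone who wants to try it there, running it in vLLM is the usual few lines. A minimal sketch; the checkpoint id is an assumption on my part, so swap in whichever Qwen3-Next variant you actually want:

```python
# Minimal vLLM sketch. The model id is an assumption for illustration;
# point it at whichever Qwen3-Next checkpoint you intend to run.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3-Next-80B-A3B-Instruct")  # assumed checkpoint id
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["What makes a hybrid SSM/attention model fast?"], params)
print(outputs[0].outputs[0].text)
```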