Community-submitted content. Signal comes from upvotes, not editorial vetting. Always check the linked source.
Qwen3.5 35B-A3B replaced my 2-model agentic setup on M1 64GB
Reddit LocalLLaMA · ~2 min + comments
2-Minute Brief
Affects widely-used AI models. The claim: Qwen3.5 models are smarter than all previous open-source models in their size class, matching or rivaling models 8-25x larger in total parameters on reasoning, agentic, and coding tasks. The author put one to the test on a real-world agentic workflow.
8-Minute Deep Dive
Context
There's been a lot of buzz about Qwen3.5 models being smarter than all previous open-source models in the same size class, matching or rivaling models 8-25x larger in total parameters, such as MiniMax-M2.5 (230B), DeepSeek V3.2 (685B), and GLM-4.7 (357B), in reasoning, agentic, and coding tasks. I had to try them on a real-world agentic workflow. Here's what I found.

Setup
- Device: Apple Silicon M1 Max, 64GB
- Inference: llama.cpp server (build 8179)
- Model: Qwen3.5-35B-A3B (Q4_K…
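A setup along these lines can be reproduced with llama.cpp's bundled `llama-server`. The GGUF filename and context size below are illustrative assumptions, not from the post, which only specifies build 8179 and a Q4_K-family quant:

```shell
# Serve the model via llama.cpp's OpenAI-compatible HTTP server.
# -m:      path to the GGUF file (hypothetical filename here)
# -c:      context length; tune to fit in 64GB unified memory
# -ngl 99: offload all layers to the Metal GPU on Apple Silicon
llama-server -m Qwen3.5-35B-A3B-Q4_K.gguf -c 32768 -ngl 99 \
  --host 127.0.0.1 --port 8080
```

Agentic clients can then point at `http://127.0.0.1:8080/v1` as an OpenAI-compatible endpoint.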
For builders
A single Qwen3.5-35B-A3B (Q4_K quant) served by llama.cpp on an M1 Max with 64GB replaced the author's previous two-model agentic setup.
Verify
Prefer primary announcements, papers, repos, and changelogs over reposts.