
Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7

153 points - today at 5:37 PM

Source
  • ericpauley

    today at 6:40 PM

Going to have to disagree on the backup test. Opus's flamingo is actually on the pedals and seat, with functional spokes and beak. In terms of adherence to physical reality, Qwen is completely off. To me it's a little puzzling that someone would prefer the Qwen output.

    I'd say the example actually does (vaguely) suggest that Qwen might be overfitting to the Pelican.

      • tecoholic

        today at 9:01 PM

Even the first one - sure, Qwen added extra details in the background. But the pelican itself is a stork with a bent beak, and its feet are cut off at the legs. While impressive for a local model, I don't think it's a winner.

        • wongarsu

          today at 7:52 PM

          Qwen's flamingo is artistically far more interesting. It's a one-eyed flamingo with sunglasses and a bow tie who smokes pot. Meanwhile Opus just made a boring, somewhat dorky flamingo. Even the ground and sky are more interesting in Qwen's version

          But in terms of making something physically plausible, Opus certainly got a lot closer

            • kmacdough

              today at 8:07 PM

Given adherence is a more significant practical barrier, it's probably the better signal. That is, if we decide to look for signal here.

      • mentalgear

        today at 7:18 PM

I understand the 'fun factor', but at this point I really wonder what this pelican still proves. I mean, providers certainly could have adapted for it if they wanted, and if you want to test how well a model handles potentially out-of-distribution contexts, it might be more worthwhile to mix different animals with different activity types (a whale on a skateboard) than to always use the same one.

          • simonw

            today at 7:35 PM

            That's why I did the flamingo on a unicycle.

            For a delightful moment this morning I thought I might have finally caught a model provider cheating by training for the pelican, but the flamingo convinced me that wasn't the case.

              • akavel

                today at 8:11 PM

                r/LocalLlama is now doing a horse in a racing car:

                https://redd.it/1slz38i

                • furyofantares

                  today at 8:07 PM

                  It is completely wild to me that you prefer Qwen's flamingo. I think it's really bad and Opus' is pretty good.

                    • simonw

                      today at 8:09 PM

                      The Opus one doesn't even have a bowtie.

                        • furyofantares

                          today at 8:40 PM

                          The Opus one looks like a flamingo, and looks like it's riding the unicycle. Sitting on the seat. Feet on the pedals.

                          The Qwen one looks like a 3-tailed, broken-winged, beakless (I guess? Is that offset white thing a beak? Or is it chewing on a pelican feather like it's a piece of straw?) monstrosity not sitting on the seat, with its one foot off the pedal (the other chopped off at the knee) of a malmanufactured wheel that has bonus spokes that are longer than the wheel.

                          But yeah, it does have a bowtie and sunglasses that you didn't ask for! Plus it says "<3 Flamingo on a Unicycle <3", which perhaps resolves all ambiguity.

                  • prodigycorp

                    today at 7:47 PM

                    To me the opus flamingo is waaaay better than the qwen one. qwen has the better pelican, though.

                    • dude250711

                      today at 7:50 PM

                      Is a flamingo on a unicycle not merely a special case of a pelican on a bicycle?

              • bottlepalm

                today at 8:59 PM

                I really wish they spent some time training for computer use. This model is incapable of finding anywhere near the correct x,y coordinate of a simple object in a picture.

                • jbellis

                  today at 7:35 PM

                  For coding, qwen 3.6 35b a3b solved 11/98 of the Power Ranking tasks (best-of-two), compared to 10/98 for the same size qwen 3.5. So it's at best very slightly improved and not at all in the class of qwen 3.5 27b dense (26 solved) let alone opus (95/98 solved, for 4.6).

                    • __natty__

                      today at 8:09 PM

You're comparing a tiny model for local inference against a proprietary, expensive frontier model. It would be fairer to compare against a similarly priced model or tiny frontier models like Haiku, Flash, or GPT nano.

                        • javawizard

                          today at 8:21 PM

                          Not when the article they're commenting on was doing literally exactly the same thing.

                          • ericd

                            today at 8:11 PM

                            Eh it’s important perspective, lest someone start thinking they can drop $5k on a laptop and be free of Anthropic/OpenAI. Expensive lesson.

                      • wood_spirit

                        today at 8:45 PM

                        Such a disconnect from the minutes I’ve lost and given up on Gemini trying to get it to update a diagram in a slide today. The one shot joke stuff is great but trying to say “that is close but just make this small change” seems impossible. It’s the gap between toy and tool.

                        • VHRanger

                          today at 7:52 PM

                          That's not surprising; Opus & Sonnet have been regressing on many non-coding tasks since about the 4.1 release in our testing

                          • sailingcode

                            today at 8:44 PM

                            I'm an iguana and need to wash my bicycle in the carwash. Shall I walk or take the bus?

                              • layer8

                                today at 8:55 PM

                                You should have the pelican ride it to the carwash and wash it for you.

                                • DANmode

                                  today at 8:46 PM

                                  That’s a long walk! You should reserve a ride with $PartnerRideshareCo.

                              • comandillos

                                today at 6:50 PM

I've been using Qwen3.5-35B-A3B for a bit via open code and oMLX on an M5 Max with 128GB of RAM, and I have to say it's impressively good for a model of that size. I've seen a huge jump in the quality of the tool calls and in how well it handles the agentic workflow.

                                  • iib

                                    today at 7:11 PM

This is about the newly released Qwen3.6. Just wanted to make sure you caught that.

                                • aliljet

                                  today at 7:52 PM

                                  I'm really curious about what competes with Claude Code to drive a local LLM like Qwen 3.6?

                                    • smashed

                                      today at 8:00 PM

                                      OpenCode?

                                  • throwuxiytayq

                                    today at 8:40 PM

                                    I literally cannot believe that people are wasting their time doing this either as a benchmark or for fun. After every single language model release, no less.

                                      • sharkjacobs

                                        today at 8:44 PM

                                        It feels like the results stopped being interesting a little while ago but the practice has become part of simonw's brand, and it gives him something to post even when there is nothing interesting to say about another incremental improvement to a model, and so I don't imagine he'll stop.

                                    • JaggerFoo

                                      today at 8:48 PM

                                      FYI, using a 128GB M5 MacBook Pro, sourced from another article by the author.

                                      • lofaszvanitt

                                        today at 8:09 PM

                                        That Qwen flamingo on the unicycle is actually quite good. A work of art.

                                        • jedisct1

                                          today at 8:29 PM

                                          I'm currently testing Qwen3.6-35B-A3B with https://swival.dev for security reviews.

                                          It's pretty good at finding bugs, but not so good at writing patches to fix them.

                                          • 19qUq

                                            today at 7:40 PM

                                            How about switching to MechaStalin on a tricycle? It gets kind of boring.

                                              • mvanbaak

                                                today at 8:03 PM

boring ... the ways all the models fail at a simple task never get boring to me