

A developer built a March Madness bracket prediction eval across top LLMs, revealing massive cost disparities—Claude models spent $40+ vs $0.39 for MiMo-V2-Flash—while most models stuck close to chalk picks.
Show HN: LLMadness – March Madness Model Evals

I wanted to play around with the non-coding agentic capabilities of the top LLMs, so I built a model eval predicting the March Madness bracket.

After playing around a bit with the format, I went with the following setup:

- 63 single-game predictions rather than one full one-shot bracket

- Maxed out at 10 tool calls per game

- Upset-specific instruction in the system prompt

- Exponential scoring by round (1, 2, 4, 8, 16, 32)
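The exponential scoring above can be sketched in TypeScript (the project's language). The data shapes here — `GamePick`, `scoreBracket`, and the game IDs — are my own assumptions for illustration, not taken from the actual codebase; the only detail from the post is that a correct pick in round r is worth 2^(r-1) points.

```typescript
// Assumed shapes (not from the original post): a pick records the round
// (1-6) and the predicted winner for a game id; results map game id -> winner.
interface GamePick { gameId: string; round: number; winner: string; }

// Round values 1..6 yield 1, 2, 4, 8, 16, 32 points per correct pick.
const pointsForRound = (round: number): number => 2 ** (round - 1);

// Sum the points of every pick whose predicted winner matches the result.
function scoreBracket(picks: GamePick[], results: Record<string, string>): number {
  return picks.reduce(
    (total, p) => total + (results[p.gameId] === p.winner ? pointsForRound(p.round) : 0),
    0,
  );
}

// Example: one correct round-1 pick (1 pt) and one correct title pick (32 pts).
const picks: GamePick[] = [
  { gameId: "R1-G1", round: 1, winner: "Duke" },
  { gameId: "F", round: 6, winner: "Duke" },
];
console.log(scoreBracket(picks, { "R1-G1": "Duke", F: "Duke" })); // 33
```

A nice property of this weighting is that each round is worth the same total (32 points across 32 games in round 1, down to 32 points for the single championship game), so late upsets matter as much as early chalk.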

There were some interesting learnings:

- Unsurprisingly, most brackets are close to chalk. Very few significant upsets were predicted.

- There was a HUGE cost and token disparity with the exact same setup and constraints. Both Claude models spent over $40 to fill in the bracket while MiMo-V2-Flash spent $0.39. I spent a total of $138.69 on all 15 model runs.

- There was also a big disparity in speed. Claude Opus 4.6 took almost 2 full days to finish the 2 play-ins and 63 bracket games. Qwen 3.5 Flash took under 10 minutes.

- Even when given the tournament year (2026), multiple models pulled in information from previous years. Claude seemed to be the biggest offender, really wanting Cooper Flagg to be on this year's Duke team.

This was a really fun way to combine two of my interests and I'm excited to see how the models perform over the coming weeks. You can click into each bracket node to see the full model trace and rationale behind the picks.

The stack is TypeScript, Next.js, React, and raw CSS. No DB; everything is stored in static JSON.
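A minimal sketch of what the "no DB, static JSON" approach might look like in a Next.js project: run data sits in a JSON file under the repo and is read from disk at build or request time. The file name, directory, and `ModelRun` shape are assumptions for illustration, not the project's actual schema.

```typescript
import { readFileSync } from "node:fs";
import path from "node:path";

// Hypothetical record shape for one model's bracket run.
interface ModelRun { model: string; costUsd: number; score: number; }

// Read all runs from <dataDir>/runs.json; path.resolve handles both a
// relative dir (under the project root) and an absolute one.
export function loadRuns(dataDir = "data"): ModelRun[] {
  const file = path.resolve(process.cwd(), dataDir, "runs.json");
  return JSON.parse(readFileSync(file, "utf8")) as ModelRun[];
}
```

Because the data is static, this can run inside `getStaticProps` or a server component, and the whole site ships as prerendered pages with no runtime database.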
