Moreover, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity https://www.youtube.com/watch?v=snr3is5MTiU