What's most annoying about Gemini 2.5 is that it is obnoxiously verbose compared to Opus 4. Both in explaining the code it wrote and the amount of lines it writes and comments it adds, to the point where the output is often 2-3x more than Opus 4.
You can obviously alleviate this by asking it to be more concise but even then it bleeds through sometimes.
Yes, this is what I mean by conciseness with o3. If prompted well, it can produce extremely high-quality code that blows me away at times. I've also had several instances now where I gave it slightly wrong context: other models butchered the solution, proposing dozens of lines for a fix I could tell wasn't right, whereas o3, after I reverted and asked it, immediately went searching for another file I hadn't included and fixed the issue in one line. That kind of, dare I say, independent thinking is worth a lot when dealing with complex codebases.
Personally, I'm still of the opinion that current LLMs are more of a very advanced autocomplete.
It reminds me of the guy who posted that he fed his entire project codebase to an AI, and it refactored everything, modularizing it while reducing the file count from 20 to 12. "It was glorious to see. Nothing worked of course, but glorious nonetheless."
In the future I can certainly see it getting better and better, especially because code is a hard science: it reduces to control-flow logic, which in turn reduces to math. It's a much narrower problem space than, say, poetry or visuals.