
The article addresses this:

> This made sense when agents were unreliable. You’d never let GPT-3 decide how to decompose a project. But current models are good at planning. They break problems into subproblems naturally. They understand dependencies. They know when a task is too big for one pass.

> So why are we still hardcoding the decomposition?
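The contrast the quote draws can be sketched as two planners: one where the decomposition is hardcoded by the framework author, and one where the planner itself decides whether a task is small enough for one pass or needs splitting. Everything below is hypothetical; `model_plan` is a stub standing in for a real LLM call.

```python
def hardcoded_pipeline(task):
    # Hardcoded decomposition: the framework author fixed the stages.
    return [f"research: {task}", f"draft: {task}", f"review: {task}"]

def model_plan(task):
    # Stub standing in for an LLM planner that decides the split itself.
    # A real implementation would prompt a model here.
    words = task.split()
    if len(words) > 3:  # crude stand-in for "too big for one pass"
        mid = len(words) // 2
        return [" ".join(words[:mid]), " ".join(words[mid:])]
    return [task]  # small enough: no further decomposition

def run(task, planner):
    # Recursively decompose until the planner declares a leaf task.
    subtasks = planner(task)
    if subtasks == [task]:
        return [task]  # leaf: would be executed in one pass
    results = []
    for sub in subtasks:
        results.extend(run(sub, planner))
    return results

print(run("summarize the four quarterly reports", model_plan))
# → ['summarize the', 'four quarterly reports']
```

The point of the sketch is only structural: in the second version the split points are an output of the planner, not an input baked into the harness.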



Sure, decomposition is already in the pre-training corpus, and then we can do some instruction-tuning on top. That is fine for the last mile, but that's it. I would consider this unaddressed and agree with the root comment.



