It reminds me of the thinking-fast vs. thinking-slow dichotomy. Current LLMs are the thinking-fast type. Funnily enough, people's complaints about their errors are reminiscent of this: the model answers just too quickly, using only its instant-response neural net. A thinking-slow answer would be more akin to a chain-of-thought answer. Allowing the LLM a more flexible platform than CoT prompting might well be the next step. Of course it would also multiply compute cost, so it might not be in your $20 subscription.
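To make the fast/slow contrast concrete, here's a minimal sketch of direct vs. chain-of-thought prompting. The `llm` function is a hypothetical stand-in for any completion API, and the example uses Kahneman's classic bat-and-ball question, where the instant-response answer is famously wrong:

    # Minimal sketch: direct prompting vs. chain-of-thought prompting.
    # `llm` is a hypothetical placeholder for any text-completion call.

    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in your completion API here")

    question = (
        "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
        "more than the ball. How much does the ball cost?"
    )

    # "Thinking fast": one-shot answer straight from the model's
    # instant-response weights -- this is where the intuitive-but-wrong
    # "$0.10" slip tends to happen.
    fast_answer = llm(question)

    # "Thinking slow": the standard zero-shot CoT trigger makes the model
    # generate intermediate reasoning steps before the final answer,
    # at the cost of many more tokens (and thus more compute).
    slow_answer = llm(question + "\nLet's think step by step.")

The extra tokens in the slow path are exactly where the multiplied compute cost comes from: you pay per generated token, and the reasoning trace can be many times longer than the answer itself.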