> that have to think about whether or not a file is ready, they naturally become an event loop that constantly adds things to a shared buffer, deals with the previous entries that completed, rinse, repeat.
This is the exception, not the rule, and it bugs me when APIs default to it. Most consumers of data are not watching the stream as it arrives, and in many cases where streaming is what I want, there are tools and APIs for handling that outside of my application logic. Much of the time I'm dealing with units of data only after the entire unit has arrived, because if the message is not complete there is no forward progress to be made.
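A minimal sketch of what "only see whole units" looks like in practice, assuming a hypothetical length-prefixed wire format (the 4-byte big-endian header, `read_message`, and `read_exact` are all my own illustrative names, not from any particular API). The point is that partial buffers live inside the helper; the caller never runs an event loop over chunks.

```python
import io
import struct


def read_exact(stream, n: int) -> bytes:
    """Read exactly n bytes, looping because read() may return fewer."""
    chunks = []
    remaining = n
    while remaining:
        chunk = stream.read(remaining)
        if not chunk:
            raise EOFError("stream ended mid-message")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)


def read_message(stream) -> bytes:
    """Block until one complete length-prefixed message has arrived.

    Application logic only ever sees whole messages; the partial-buffer
    bookkeeping is contained here instead of leaking into the caller.
    """
    (length,) = struct.unpack(">I", read_exact(stream, 4))
    return read_exact(stream, length)


# Usage: the caller handles one complete unit at a time, no chunk events.
stream = io.BytesIO(struct.pack(">I", 5) + b"hello")
msg = read_message(stream)  # returns b"hello"
```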
My tools should reflect that reality, not what’s quickest for the API writers to create.
In fact, if I remember my queuing theory correctly, responsiveness improves when the system prioritizes IO operations that can be finished (e.g., messages that have already hit EOS or EOF) over processing buffers for one that is still mid-stream, and that kind of prioritization can't happen under an event-stream abstraction.