The reason is that I found that it better matches what you use them for in Celery,
since they are used e.g. to forward results from the previous tasks to a callback.
Often you have a signature like: def blur_image(image, amount=1),
where the partial is used like render_image.s(file) | blur_image.s()
functools.partial is usually used the other way; it's most common
to create functools.partials where the first argument is satisfied.
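The contrast can be sketched in plain Python. The `Sig` class below is a toy stand-in for a Celery signature, invented here for illustration; it only mimics the idea that the previous task's result is fed in as the first positional argument:

```python
from functools import partial

def blur_image(image, amount=1):
    return (image, amount)

# functools.partial is typically used to satisfy the *first* argument up front:
blur_cat = partial(blur_image, "cat.png")
print(blur_cat())  # ('cat.png', 1)

# A Celery-style signature goes the other way: the incoming value
# (the previous task's result) arrives as the first argument at call time.
# Toy stand-in, not Celery's actual Signature class:
class Sig:
    def __init__(self, fn, *args):
        self.fn, self.args = fn, args
    def __call__(self, result):
        # previous task's result is prepended to the stored arguments
        return self.fn(result, *self.args)

blur = Sig(blur_image)
print(blur("rendered.png"))  # ('rendered.png', 1)
```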
Celery has its quirks now and then (for reasons I can't nail down or reliably reproduce, enabling CELERYD_FORCE_EXECV breaks a lot of things), but I consider Ask and the rest of the #celery gang to be models for how to have a welcoming community around an open-source project.
Looking forward to experimenting with the non-multiprocessing worker. Something in my current setup regularly leaks memory, and for the above-mentioned reason I can't automatically recycle worker processes to clean it up.
It seems that the Eventlet pool does not work with the celery umbrella command. With celeryd it works fine. Is that true, or have I misconfigured something?
Brilliant. I started using celery a few months ago, and now we use it to back our video transcoding and automation platform, our search engine, and our mobile app backend.
One queue for every worker instance, yes! This is for broadcasting remote control commands, but I don't see how this poses a problem?
If you mean the one queue for every task behavior of the amqp result backend then that is also a yes.
Usually with replies in amqp you create one queue for every client, but Celery is often used in a web context
where the process that initiated the task may not be the same process that collects the reply, so this
is why it uses one queue for every task (it's also documented).
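The scheme described above can be sketched with plain Python queues. This is a toy model of the design rationale, not Celery's actual implementation, and all names in it are made up; in real AMQP the dict below would be one reply queue declared per task id:

```python
import queue
import uuid

# task_id -> queue; stands in for one AMQP reply queue per task
reply_queues = {}

def send_task():
    """Publish a task and declare a reply queue keyed by its id."""
    task_id = str(uuid.uuid4())
    reply_queues[task_id] = queue.Queue()
    return task_id

def worker_finishes(task_id, result):
    """The worker publishes the result to the task's own reply queue."""
    reply_queues[task_id].put(result)

def collect(task_id):
    # In the web context this could be a different process than the one
    # that called send_task(); it only needs to know the task id.
    return reply_queues[task_id].get()

tid = send_task()
worker_finishes(tid, 42)
print(collect(tid))  # 42
```

With one queue per client instead, only the original sender could consume the reply, which is exactly what breaks in a web deployment where requests land on different processes.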
Several ways to accomplish that, but I would recommend using kombu in combination with celery to have the task manually send messages.
You could set the exchange type of the results exchange to be topic too, that way you would both have result queues and you could additionally bind a queue to the results exchange to get a copy of all the messages sent there. But if you don't need to listen for individual results then I'd rather just send messages manually. You have both connection and producer pools in Celery, so it's rather convenient to combine kombu with celery.
Preface this by saying that I've been successfully and happily using celery since 0.8, which was over 2 years ago. Since 2.0, it's been rock solid. But then I switched from RabbitMQ to redis and when I added redis slaves, it broke.
Because it's much easier to configure, in my opinion, and I don't really care if messages get lost. I'm also using it in other parts of my stack, so instead of bringing in another piece of technology I decided to utilize what I already had and keep things less complex.