Celery 3.0 has been released (celeryproject.org)
136 points by KenCochrane on July 7, 2012 | hide | past | favorite | 33 comments


Just wanted to say thanks to Ask for Celery. Absolutely fantastic for distributed systems.


I like the Autechre album based version naming convention


I find this[1] example in Getting Started/Next Steps to be backwards:

    # incomplete partial:  add(?, 2)
    >>> s2 = add.s(2)
    # resolves the partial: add(8, 2)
    >>> res = s2.delay(8)
    >>> res.get()
    10
Shouldn't it behave like functools.partial[2]?

    >>> import functools
    >>> abc = lambda a, b, c: (a, b, c)
    >>> bc = functools.partial(abc, "a")
    >>> bc("b", "c")
    ('a', 'b', 'c')
1. http://docs.celeryproject.org/en/latest/getting-started/next...

2. http://docs.python.org/library/functools.html#functools.part...


Very good question!

The reason is that I found it better matches what you use them for in Celery, since they are used e.g. to forward results from previous tasks to a callback. Often you have a signature like def blur_image(image, amount=1), where the partial is used like render_image.s(file) | blur_image.s()

functools.partial is usually used the other way around; it's most common to create partials where the first argument is satisfied.
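A toy sketch of the two argument orders (my own illustration, not Celery's actual implementation):

```python
# Toy sketch of the two partial-application orders (not real Celery code).
from functools import partial

def pair(a, b):
    return (a, b)

# functools.partial: stored args fill the FIRST positions.
p = partial(pair, 2)
print(p(8))    # -> (2, 8): the stored arg comes first

# Celery-style signature (sketch): stored args fill the LAST positions,
# so a chained predecessor's result can slot in up front.
def signature(fn, *stored):
    def delay(*later):
        return fn(*(later + stored))
    return delay

s2 = signature(pair, 2)   # pair(?, 2)
print(s2(8))   # -> (8, 2): the later arg comes first, as in add.s(2).delay(8)
```

This is why add.s(2).delay(8) resolves to add(8, 2) rather than add(2, 8).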


Celery is one of those rare libraries where improvements to itself are improvements to my sanity and well being.

That said, I did run into an issue with periodic tasks (our bread and butter at Zapier). Detailed here: https://github.com/celery/celery/issues/844

Thanks a bunch Ask.


Celery has its quirks now and then (for reasons I can't nail down or reliably reproduce, enabling CELERYD_FORCE_EXECV breaks a lot of things), but I consider Ask and the rest of the #celery gang to be models for how to have a welcoming community around an open-source project.

Looking forward to experimenting with the non-multiprocessing worker - something in my current setup regularly leaks memory, and for the above-mentioned reason I can't automatically recycle worker processes to clean it up.


It seems that the Eventlet pool does not work with the celery umbrella command. With celeryd it works fine. Is that true, or have I misconfigured something?


Could be a bug; could you please open an issue at http://github.com/celery/celery/issues?


> Over 600 commits, 30k additions/36k deletions.

The large number of deletions is a pretty good sign of the project's health. Congrats!


Brilliant. I started using Celery a few months ago; now we use it to back our video transcoding and automation platform, our search engine, and our mobile app backend.

Excellent job, Celery team!


Does Celery still create a new queue for every worker on rabbitmq? This bizarre behavior makes it hard to use Celery with amqp :(


One queue for every worker instance, yes! This is for broadcasting remote control commands, but I don't see how it poses a problem?

If you mean the one-queue-for-every-task behavior of the amqp result backend, then that is also a yes. Usually with replies in AMQP you create one queue per client, but Celery is often used in a web context where the process that initiated the task may not be the same process that collects the reply, so it uses one queue per task (this is also documented).

There's an experimental result backend for RPC-style replies, that uses transient messages and one queue per client: http://github.com/celery/celery/tree/kombuRPC


That is a problem for me, because I want all events of one type running through one AMQP queue to monitor throughput, etc.


There are several ways to accomplish that, but I would recommend using Kombu in combination with Celery and having the task send messages manually.

You could also set the exchange type of the results exchange to topic; that way you would still have result queues, and you could additionally bind a queue to the results exchange to get a copy of all the messages sent there. But if you don't need to listen for individual results, I'd rather just send messages manually. Celery has both connection and producer pools, so it's rather convenient to combine Kombu with Celery.
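For the topic-exchange approach, the relevant detail is AMQP routing-key matching: `*` matches exactly one dot-separated word and `#` matches zero or more, which is why a queue bound with `#` receives a copy of every message sent to the exchange. A small illustrative matcher (my own sketch of the AMQP semantics, not Kombu code):

```python
# Sketch of AMQP topic-exchange routing-key matching (not Kombu code).
# Pattern words: '*' matches exactly one word, '#' matches zero or more.
def topic_match(pattern, key):
    p, k = pattern.split('.'), key.split('.')

    def match(i, j):
        if i == len(p):
            return j == len(k)
        if p[i] == '#':
            # '#' can absorb any number of remaining words (including none)
            return any(match(i + 1, j2) for j2 in range(j, len(k) + 1))
        if j < len(k) and p[i] in ('*', k[j]):
            return match(i + 1, j + 1)
        return False

    return match(0, 0)

print(topic_match('#', 'celeryresults.abc123'))   # a '#' binding sees everything
print(topic_match('result.*', 'result.x'))        # '*' matches one word
print(topic_match('result.*', 'result.x.y'))      # ...but not two
```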


Finally I can use the officially supported 'Canvas' to design workflows; before this I had to use the celery-tasktree package.
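The core idea behind Canvas's chain primitive is that each task's return value is fed in as the first argument of the next signature's stored arguments. A plain-Python sketch of that flow (my own illustration, not real Celery code):

```python
# Plain-Python sketch of how a Canvas-style chain threads results
# from one step to the next (not real Celery code).
def make_sig(fn, *stored):
    # Later-supplied args come first, stored args last (Celery-style partial).
    return lambda *later: fn(*(later + stored))

def run_chain(first_args, *sigs):
    result = sigs[0](*first_args)
    for sig in sigs[1:]:
        result = sig(result)   # previous result becomes the next first arg
    return result

add = lambda a, b: a + b
mul = lambda a, b: a * b

# add(2, 4) -> 6, then mul(6, 8) -> 48
print(run_chain((2, 4), make_sig(add), make_sig(mul, 8)))
```

In real Celery 3.0 the equivalent would be written with the | operator, e.g. (add.s(2, 4) | mul.s(8)), with each step running as a task.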


Can somebody compare celery to beanstalk?


Celery supports Beanstalk as a broker, among many others (RabbitMQ, Redis, MongoDB, etc.).

So you can use Celery as a driver for Beanstalk in Python.
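Pointing Celery at Beanstalk is a one-line broker setting; the URL below assumes a beanstalkd instance on its default port (check the broker docs for your Celery version):

```python
# celeryconfig.py -- use a local beanstalkd instance as the broker
# (assumes beanstalkd is listening on the default port 11300)
BROKER_URL = 'beanstalk://localhost:11300'
```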


But it seems to me that Beanstalk and Celery accomplish the same thing. Am I mistaken?


I'm using Redis instead of RabbitMQ - have had zero problems so far. Great work, this project is awesome.


Are you using redis with slaves? When I do that, celery throws errors when trying to add tasks to my redis queue.


What's the error and traceback? (Paste at pastie.org or similar.)


I'll preface this by saying that I've been successfully and happily using Celery since 0.8, which was over 2 years ago. Since 2.0, it's been rock solid. But then I switched from RabbitMQ to Redis, and when I added Redis slaves, it broke.

Here goes: http://pastie.org/4216467

It's probably due to redis' INFO command giving different values. Now would be a great time to get it sorted out, too.


About your error: it seems strange that a simple INFO command gives that error; maybe you're running an outdated version of redis-py or redis-server?


Latest redis (2.4.15) and using redis-py 2.4.10. Just checked the CHANGELOG for redis-py, which is now at 2.4.13, and voila:

    * 2.4.11
        * Made the INFO command more tolerant of Redis changes formatting. Fix for #217.
Doh! Off by a single patch release. Thanks for the help.

See, folks? Celery is amazing in multiple ways.


No, I am not, sorry.


Out of curiosity, why do you use Redis instead of RabbitMQ?


Because it's much easier to configure, in my opinion, and I don't really care if messages get lost. I'm also using Redis in other parts of my stack, so instead of bringing in another piece of technology, I decided to use what I already had and keep things less complex.


Looks like Redis is getting improved support in this release. Anyone using Redis instead of RabbitMQ?


I do, for the Errormator task queue. Since 2.5, no issues at all.


Is it compatible with Hummus 2.0?


Only peanut butter 1.0, and there may be legacy support for ranch3, but you'll have to check the docs.


You say that now, but tomorrow Hummus.py 0.5 will be announced.


chain is amazing.



