I have Celery tasks that are received but will not execute. I am using Python 2.7 and Celery 4.0.2. My message broker is Amazon SQS.
This is the output of the Celery worker:

$ celery worker -A myapp.celeryapp --loglevel=INFO

[tasks]
  . myapp.tasks.trigger_build

[2017-01-12 23:34:25,206: INFO/MainProcess] Connected to sqs://13245:**@localhost//
[2017-01-12 23:34:25,391: INFO/MainProcess] celery@ip-111-11-11-11 ready.
[2017-01-12 23:34:27,700: INFO/MainProcess] Received task: myapp.tasks.trigger_build[b248771c-6dd5-469d-bc53-eaf63c4f6b60]

I have tried adding -Ofair when running celery worker, but that did not help. Some other info that might be helpful:
- Celery always receives 8 tasks, although there are about 100 messages waiting to be picked up.
- About once in every 4 or 5 times a task actually will run and complete, but then it gets stuck again.
- This is the result of ps aux. Notice that it is running celery in 3 different processes (not sure why) and one of them has 99.6% CPU utilization, even though it's not completing any tasks or anything.
Processes:
$ ps aux | grep celery
nobody  7034 99.6  1.8 382688 74048 ?  R  05:22  18:19 python2.7 celery worker -A myapp.celeryapp --loglevel=INFO
nobody  7039  0.0  1.3 246672 55664 ?  S  05:22   0:00 python2.7 celery worker -A myapp.celeryapp --loglevel=INFO
nobody  7040  0.0  1.3 246672 55632 ?  S  05:22   0:00 python2.7 celery worker -A myapp.celeryapp --loglevel=INFO

Settings:
CELERY_BROKER_URL = 'sqs://%s:%s@' % (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY.replace('/', '%2F'))
CELERY_BROKER_TRANSPORT = 'sqs'
CELERY_BROKER_TRANSPORT_OPTIONS = {
    'region': 'us-east-1',
    'visibility_timeout': 60 * 30,
    'polling_interval': 0.3,
    'queue_name_prefix': 'myapp-',
}
CELERY_BROKER_HEARTBEAT = 0
CELERY_BROKER_POOL_LIMIT = 1
CELERY_BROKER_CONNECTION_TIMEOUT = 10
CELERY_DEFAULT_QUEUE = 'myapp'
CELERY_QUEUES = (
    Queue('myapp', Exchange('default'), routing_key='default'),
)
CELERY_ALWAYS_EAGER = False
CELERY_ACKS_LATE = True
CELERY_TASK_PUBLISH_RETRY = True
CELERY_DISABLE_RATE_LIMITS = False
CELERY_IGNORE_RESULT = True
CELERY_SEND_TASK_ERROR_EMAILS = False
CELERY_TASK_RESULT_EXPIRES = 600
CELERY_RESULT_BACKEND = 'django-db'
CELERY_TIMEZONE = TIME_ZONE
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERYD_PID_FILE = "/var/celery_%N.pid"
CELERYD_HIJACK_ROOT_LOGGER = False
CELERYD_PREFETCH_MULTIPLIER = 1
CELERYD_MAX_TASKS_PER_CHILD = 1000

Report:
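As an aside, Celery 4 prefers the new lowercase setting names over the old CELERY_* ones. A minimal sketch of the same SQS broker configuration in that style (the credential values below are placeholders, not real keys) might look like:

```python
# Sketch: the SQS broker settings above, in Celery 4's lowercase style.
# Placeholder credentials only; substitute your own AWS keys.
aws_access_key_id = "AKIAEXAMPLE"
aws_secret_access_key = "example/secret"  # '/' must be URL-escaped in the URL

broker_url = "sqs://%s:%s@" % (
    aws_access_key_id,
    aws_secret_access_key.replace("/", "%2F"),
)

broker_transport_options = {
    "region": "us-east-1",
    "visibility_timeout": 60 * 30,  # seconds; should exceed the longest task
    "polling_interval": 0.3,
    "queue_name_prefix": "myapp-",
}

worker_prefetch_multiplier = 1  # was CELERYD_PREFETCH_MULTIPLIER
task_acks_late = True           # was CELERY_ACKS_LATE
```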
$ celery report -A myapp.celeryapp
software -> celery:4.0.2 (latentcall) kombu:4.0.2 py:2.7.12
            billiard:3.5.0.2 sqs:N/A
platform -> system:Linux arch:64bit, ELF imp:CPython
loader   -> celery.loaders.app.AppLoader
settings -> transport:sqs results:django-db

Solution
I was also getting the same issue. After a bit of searching, I found the solution: add --without-gossip --without-mingle --without-heartbeat -Ofair to the Celery worker command line. So in your case, your worker command should be: celery worker -A myapp.celeryapp --loglevel=INFO --without-gossip --without-mingle --without-heartbeat -Ofair
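For reference, --without-gossip, --without-mingle, and --without-heartbeat turn off worker-to-worker synchronization and broker heartbeats (features the SQS transport handles poorly), and -Ofair stops the pool from pre-assigning reserved tasks to busy child processes. A sketch of the full invocation as an argv list (e.g. for a supervisor or systemd unit):

```python
# Sketch: the recommended worker command from the answer, as an argv list.
worker_argv = [
    "celery", "worker",
    "-A", "myapp.celeryapp",
    "--loglevel=INFO",
    "--without-gossip",     # no worker<->worker event gossip
    "--without-mingle",     # no startup sync with other workers
    "--without-heartbeat",  # no broker heartbeats (not useful over SQS)
    "-Ofair",               # hand tasks only to idle pool processes
]
print(" ".join(worker_argv))
```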