Celery tasks received but not executing

Problem Description

I have Celery tasks that are received but will not execute. I am using Python 2.7 and Celery 4.0.2. My message broker is Amazon SQS.

This is the output of celery worker:

$ celery worker -A myapp.celeryapp --loglevel=INFO

[tasks]
  . myapp.tasks.trigger_build

[2017-01-12 23:34:25,206: INFO/MainProcess] Connected to sqs://13245:**@localhost//
[2017-01-12 23:34:25,391: INFO/MainProcess] celery@ip-111-11-11-11 ready.
[2017-01-12 23:34:27,700: INFO/MainProcess] Received task: myapp.tasks.trigger_build[b248771c-6dd5-469d-bc53-eaf63c4f6b60]

I have tried adding -Ofair when running celery worker but that did not help. Some other info that might be helpful:

  • Celery always receives 8 tasks, although there are about 100 messages waiting to be picked up.
  • About once in every 4 or 5 times a task actually will run and complete, but then it gets stuck again.
  • This is the result of ps aux. Notice that it is running celery in 3 different processes (not sure why) and one of them has 99.6% CPU utilization, even though it's not completing any tasks or anything.

Processes:

$ ps aux | grep celery
nobody   7034 99.6  1.8 382688 74048 ?  R    05:22  18:19 python2.7 celery worker -A myapp.celeryapp --loglevel=INFO
nobody   7039  0.0  1.3 246672 55664 ?  S    05:22   0:00 python2.7 celery worker -A myapp.celeryapp --loglevel=INFO
nobody   7040  0.0  1.3 246672 55632 ?  S    05:22   0:00 python2.7 celery worker -A myapp.celeryapp --loglevel=INFO

Settings:

CELERY_BROKER_URL = 'sqs://%s:%s@' % (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY.replace('/', '%2F'))
CELERY_BROKER_TRANSPORT = 'sqs'
CELERY_BROKER_TRANSPORT_OPTIONS = {
    'region': 'us-east-1',
    'visibility_timeout': 60 * 30,
    'polling_interval': 0.3,
    'queue_name_prefix': 'myapp-',
}
CELERY_BROKER_HEARTBEAT = 0
CELERY_BROKER_POOL_LIMIT = 1
CELERY_BROKER_CONNECTION_TIMEOUT = 10
CELERY_DEFAULT_QUEUE = 'myapp'
CELERY_QUEUES = (
    Queue('myapp', Exchange('default'), routing_key='default'),
)
CELERY_ALWAYS_EAGER = False
CELERY_ACKS_LATE = True
CELERY_TASK_PUBLISH_RETRY = True
CELERY_DISABLE_RATE_LIMITS = False
CELERY_IGNORE_RESULT = True
CELERY_SEND_TASK_ERROR_EMAILS = False
CELERY_TASK_RESULT_EXPIRES = 600
CELERY_RESULT_BACKEND = 'django-db'
CELERY_TIMEZONE = TIME_ZONE
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERYD_PID_FILE = "/var/celery_%N.pid"
CELERYD_HIJACK_ROOT_LOGGER = False
CELERYD_PREFETCH_MULTIPLIER = 1
CELERYD_MAX_TASKS_PER_CHILD = 1000
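As an aside, these are the pre-4.0 uppercase setting names; Celery 4 renamed its settings to lowercase (the old names are still accepted). For reference only, the broker and worker options most relevant to this problem could also be applied directly to the app object with the new names. The following is a partial, illustrative sketch, assuming app is the Celery instance defined in myapp.celeryapp and that the AWS credentials live in the Django settings module, as in the question:

# Partial sketch only: the key broker/worker settings above, written with
# Celery 4's lowercase names and applied directly to the app object.
# Assumes app is defined in myapp.celeryapp and that AWS_ACCESS_KEY_ID /
# AWS_SECRET_ACCESS_KEY are available from the Django settings module.
from django.conf import settings
from myapp.celeryapp import app

app.conf.update(
    broker_url='sqs://%s:%s@' % (
        settings.AWS_ACCESS_KEY_ID,
        settings.AWS_SECRET_ACCESS_KEY.replace('/', '%2F'),
    ),
    broker_transport_options={
        'region': 'us-east-1',
        'visibility_timeout': 60 * 30,  # seconds before SQS re-delivers an unacked message
        'polling_interval': 0.3,
        'queue_name_prefix': 'myapp-',
    },
    worker_prefetch_multiplier=1,       # was CELERYD_PREFETCH_MULTIPLIER
    task_acks_late=True,                # was CELERY_ACKS_LATE
)

Whichever naming style is used, worker_prefetch_multiplier=1 together with task_acks_late=True keeps each worker process from reserving more messages than it is actually working on.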

Report:

$ celery report -A myapp.celeryapp

software -> celery:4.0.2 (latentcall) kombu:4.0.2 py:2.7.12
            billiard:3.5.0.2 sqs:N/A
platform -> system:Linux arch:64bit, ELF imp:CPython
loader   -> celery.loaders.app.AppLoader
settings -> transport:sqs results:django-db

Solution

I was also getting the same issue. After a bit of searching, I found the solution is to add --without-gossip --without-mingle --without-heartbeat -Ofair to the Celery worker command line. So in your case your worker command should be:

celery worker -A myapp.celeryapp --loglevel=INFO --without-gossip --without-mingle --without-heartbeat -Ofair
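For completeness, if the worker is launched from a small Python wrapper (for example under a process manager) rather than straight from the shell, the same flags can be passed through the app's worker entry point. The wrapper below is only a sketch under that assumption; myapp.celeryapp is assumed to expose its Celery instance as app, and the flags are exactly the ones from the answer above:

# run_worker.py -- hypothetical wrapper, equivalent to the CLI command above.
from myapp.celeryapp import app

if __name__ == '__main__':
    app.worker_main([
        'worker',               # mirrors the CLI invocation
        '--loglevel=INFO',
        '--without-gossip',     # do not subscribe to other workers' events
        '--without-mingle',     # skip the startup synchronisation with other workers
        '--without-heartbeat',  # do not send event heartbeats
        '-Ofair',               # only hand tasks to child processes that are ready for them
    ])

Gossip, mingle and event heartbeats all rely on broadcast-style messaging that the SQS transport handles poorly, which appears to be why turning them off (together with -Ofair) gets stuck workers like this one moving again.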
