{"id":926,"hash":"2c80cebb4584654114ce667fc40b2166043ead183327623ff87341f8e9ab366b","pattern":"Celery: WorkerLostError: Worker exited prematurely: signal 9 (SIGKILL)","full_message":"I use Celery with RabbitMQ in my Django app (on Elastic Beanstalk) to manage background tasks, and I daemonized it using Supervisor.\nThe problem now is that one of the periodic tasks I defined is failing (after a week in which it worked properly); the error I get is:\n\n[01/Apr/2014 23:04:03] [ERROR] [celery.worker.job:272] Task clean-dead-sessions[1bfb5a0a-7914-4623-8b5b-35fc68443d2e] raised unexpected: WorkerLostError('Worker exited prematurely: signal 9 (SIGKILL).',)\nTraceback (most recent call last):\n  File \"/opt/python/run/venv/lib/python2.7/site-packages/billiard/pool.py\", line 1168, in mark_as_worker_lost\n    human_status(exitcode)),\nWorkerLostError: Worker exited prematurely: signal 9 (SIGKILL).\n\nAll the processes managed by Supervisor are up and running properly (supervisorctl status says RUNNING).\n\nI tried reading several logs on my EC2 instance, but none of them helped me find the cause of the SIGKILL. What should I do? How can I investigate?\n\nThese are my Celery settings:\n\nCELERY_TIMEZONE = 'UTC'\nCELERY_TASK_SERIALIZER = 'json'\nCELERY_ACCEPT_CONTENT = ['json']\nBROKER_URL = os.environ['RABBITMQ_URL']\nCELERY_IGNORE_RESULT = True\nCELERY_DISABLE_RATE_LIMITS = False\nCELERYD_HIJACK_ROOT_LOGGER = False\n\nAnd this is my supervisord.conf:\n\n[program:celery_worker]\nenvironment=$env_variables\ndirectory=/opt/python/current/app\ncommand=/opt/python/run/venv/bin/celery worker -A com.cygora -l info --pidfile=/opt/python/run/celery_worker.pid\nstartsecs=10\nstopwaitsecs=60\nstopasgroup=true\nkillasgroup=true\nautostart=true\nautorestart=true\nstdout_logfile=/opt/python/log/celery_worker.stdout.log\nstdout_logfile_maxbytes=5MB\nstdout_logfile_backups=10\nstderr_logfile=/opt/python/log/celery_worker.stderr.log\nstderr_logfile_maxbytes=5MB\nstderr_logfile_backups=10\nnumprocs=1\n\n[program:celery_beat]\nenvironment=$env_variables\ndirectory=/opt/python/current/app\ncommand=/opt/python/run/venv/bin/celery beat -A com.cygora -l info --pidfile=/opt/python/run/celery_beat.pid --schedule=/opt/python/run/celery_beat_schedule\nstartsecs=10\nstopwaitsecs=300\nstopasgroup=true\nkillasgroup=true\nautostart=false\nautorestart=true\nstdout_logfile=/opt/python/log/celery_beat.stdout.log\nstdout_logfile_maxbytes=5MB\nstdout_logfile_backups=10\nstderr_logfile=/opt/python/log/celery_beat.stderr.log\nstderr_logfile_maxbytes=5MB\nstderr_logfile_backups=10\nnumprocs=1\n\nEdit 1\n\nAfter restarting celery beat the problem remains.\n\nEdit 2\n\nChanged killasgroup=true to killasgroup=false and the problem remains.","ecosystem":"pypi","package_name":"django","package_version":null,"solution":"The SIGKILL your worker received was initiated by another process. Your supervisord config looks fine, and killasgroup would only affect a supervisor-initiated kill (e.g. via supervisorctl or a plugin) - and without that setting Supervisor would have sent the signal to the dispatcher anyway, not the child.\n\nMost likely you have a memory leak, and the OS's OOM killer is assassinating your process for bad behavior.\n\nRun grep oom /var/log/messages. If you see messages, that's your problem.\n\nIf you don't find anything, try running the periodic task manually in a shell:\n\nMyPeriodicTask().run()\n\nand see what happens. I'd monitor system and process metrics with top in another terminal, if you don't have good instrumentation like Cacti, Ganglia, etc. for this host.","confidence":0.95,"source":"stackoverflow","source_url":"https://stackoverflow.com/questions/22805079/celery-workerlosterror-worker-exited-prematurely-signal-9-sigkill","votes":79,"created_at":"2026-04-19T04:52:02.697218+00:00","updated_at":"2026-04-19T04:52:02.697218+00:00"}