
Unable to run tasks under Windows #4081

Closed
SPKorhonen opened this issue Jun 8, 2017 · 15 comments

Comments

@SPKorhonen

Celery 4.x starts (with the fixes from #4078), but all tasks crash.

Steps to reproduce

Follow the First Steps tutorial (http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html), then run:

celery -A tasks worker --loglevel=info
add.delay(2,2)
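
For reference, the tasks.py from that tutorial looks roughly like this (a minimal sketch; the broker URL depends on the local setup and is an assumption here):

# tasks.py -- minimal app module, following the First Steps tutorial
from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y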

Expected behavior

Task is executed and a result of 4 is produced

Actual behavior

Celery crashes.

"C:\Program Files\Python36\Scripts\celery.exe" -A perse.celery worker -l info

 -------------- celery@PETRUS v4.0.2 (latentcall)
---- **** -----
--- * ***  * -- Windows-10-10.0.14393-SP0 2017-06-08 15:31:22
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         perse:0x24eecc088d0
- ** ---------- .> transport:   amqp://guest:**@localhost:5672//
- ** ---------- .> results:     rpc://
- *** --- * --- .> concurrency: 12 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery

[tasks]
. perse.tasks.celery_add

[2017-06-08 15:31:22,685: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2017-06-08 15:31:22,703: INFO/MainProcess] mingle: searching for neighbors
[2017-06-08 15:31:23,202: INFO/SpawnPoolWorker-5] child process 5124 calling self.run()
[2017-06-08 15:31:23,207: INFO/SpawnPoolWorker-4] child process 10848 calling self.run()
[2017-06-08 15:31:23,208: INFO/SpawnPoolWorker-10] child process 5296 calling self.run()
[2017-06-08 15:31:23,214: INFO/SpawnPoolWorker-1] child process 5752 calling self.run()
[2017-06-08 15:31:23,218: INFO/SpawnPoolWorker-3] child process 11868 calling self.run()
[2017-06-08 15:31:23,226: INFO/SpawnPoolWorker-11] child process 9544 calling self.run()
[2017-06-08 15:31:23,227: INFO/SpawnPoolWorker-6] child process 16332 calling self.run()
[2017-06-08 15:31:23,229: INFO/SpawnPoolWorker-8] child process 3384 calling self.run()
[2017-06-08 15:31:23,234: INFO/SpawnPoolWorker-12] child process 8020 calling self.run()
[2017-06-08 15:31:23,241: INFO/SpawnPoolWorker-9] child process 15612 calling self.run()
[2017-06-08 15:31:23,243: INFO/SpawnPoolWorker-7] child process 9896 calling self.run()
[2017-06-08 15:31:23,245: INFO/SpawnPoolWorker-2] child process 260 calling self.run()
[2017-06-08 15:31:23,730: INFO/MainProcess] mingle: all alone
[2017-06-08 15:31:23,747: INFO/MainProcess] celery@PETRUS ready.
[2017-06-08 15:31:49,412: INFO/MainProcess] Received task: perse.tasks.celery_add[524d788e-e024-493d-9ed9-4b009315fea3]
[2017-06-08 15:31:49,416: ERROR/MainProcess] Task handler raised error: ValueError('not enough values to unpack (expected 3, got 0)',)
Traceback (most recent call last):
  File "c:\program files\python36\lib\site-packages\billiard\pool.py", line 359, in workloop
    result = (True, prepare_result(fun(*args, **kwargs)))
  File "c:\program files\python36\lib\site-packages\celery\app\trace.py", line 518, in _fast_trace_task
    tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)

Fix

See pull request #4078

@drewdogg

FWIW I worked around this by using the eventlet pool implementation ("-P eventlet" command line option).

@felixhao28

@drewdogg's solution should be mentioned in the tutorial.

@fohrloop

I can confirm: this bug appears on

Celery 4.1.0
Windows 10 Enterprise 64 bit

when running the command celery -A <mymodule> worker -l info

and the following workaround works:

pip install eventlet
celery -A <mymodule> worker -l info -P eventlet

@auvipy
Member

auvipy commented Dec 6, 2017

It's enough to define the FORKED_BY_MULTIPROCESSING=1 environment variable for the worker instance.
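
For example, the variable can be set in the shell session before starting the worker (a sketch, assuming cmd.exe and the tutorial's tasks module):

set FORKED_BY_MULTIPROCESSING=1
celery -A tasks worker --loglevel=info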

@vejei

vejei commented Apr 17, 2018

@auvipy Works for me, thanks.

@wonderfulsuccess

wonderfulsuccess commented Jul 28, 2018

@auvipy it really solves the problem :) 👍
Adding:
import os
os.environ.setdefault('FORKED_BY_MULTIPROCESSING', '1')
before defining the Celery instance is enough.
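
A minimal sketch of that placement, assuming an app module like the tutorial's tasks.py (the point is that the variable is set before the Celery app object is created):

# tasks.py -- set the flag before the Celery() instance is created,
# so it is already in the environment when the worker pool is started
import os
os.environ.setdefault('FORKED_BY_MULTIPROCESSING', '1')

from celery import Celery
app = Celery('tasks', broker='amqp://guest@localhost//')
# ... task definitions as before ...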

@auvipy
Member

auvipy commented Aug 1, 2018

Maybe this should be mentioned in the docs? @wonderfulsuccess, care to send a pull request?

@ajosecueto

@wonderfulsuccess

Thanks So Much

@jesteban19

@auvipy it really solves the problem :)
Adding:
import os
os.environ.setdefault('FORKED_BY_MULTIPROCESSING', '1')
before defining the Celery instance is enough.

Thanks, it worked!

@tristanbrown

@auvipy if this is only one line of code to fix, then why not just fix it within Celery instead of using the docs to recommend that users implement a workaround? Why is a completely platform-breaking bug with such a simple fix still a problem after nearly two years?

@auvipy
Member

auvipy commented Apr 12, 2019

Where do you want Celery to put this code? I believe this is well suited to a Windows-specific instruction. If you want it fixed at the code level, come with an appropriate PR.

@auvipy auvipy modified the milestones: v5.0.0, 4.7 May 10, 2019
@auvipy auvipy self-assigned this Jul 6, 2019
@auvipy auvipy modified the milestones: 4.7, 4.5 Jul 6, 2019
@venu13

venu13 commented Sep 17, 2020

@auvipy it really solves the problem :) 👍
Adding:
import os
os.environ.setdefault('FORKED_BY_MULTIPROCESSING', '1')
before defining the Celery instance is enough.

You are awesome, thanks a ton!

@Juanes2499

@auvipy I have been searching for an answer to this problem and spent a lot of time trying to fix it, thank you so much.

@auvipy auvipy modified the milestones: 4.5, 5.3 Feb 17, 2021
@auvipy auvipy removed this from the 5.3 milestone Aug 26, 2021
@auvipy auvipy closed this as completed Aug 26, 2021
@prashansag62

prashansag62 commented Jan 26, 2022

It's enough to define the FORKED_BY_MULTIPROCESSING=1 environment variable for the worker instance.

Will this not disable concurrency? Since I plan to use Celery only for concurrency, as a replacement for threads, should I go for this solution?

@auvipy
Member

auvipy commented Jan 27, 2022

You should, but in practice I would suggest moving to a Unix-like system.
