I have three Celery workers running on the same server, each daemonized with supervisord, with RabbitMQ as the message broker.
The workers run tasks for a multi-tenant Django project. The supervisord conf file for each worker sets an environment variable that instructs Django which settings file to load for the appropriate tenant. Each of the three Django settings files defines a different BROKER_URL, specifying that tenant's broker vhost.
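To make the setup concrete, here is a sketch of the layout (illustrative only — the module path, credentials, and vhost names below are stand-ins, not my exact files):

```python
# The supervisord conf for worker A contains something like:
#   environment=DJANGO_SETTINGS_MODULE="myproject.settings.tenant_a"
#
# myproject/settings/tenant_a.py then pins that tenant's vhost:
BROKER_URL = "amqp://celery:secret@localhost:5672/tenant_a"  # vhost: tenant_a
# The tenant_b and tenant_c settings files define the same line
# with their own vhost in the path component.
```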
I'm testing by queueing an email-sending Celery task from each of the separate Django front-ends. I'm not getting any explicit errors, but on each Django site the task only executes successfully about a third of the time. I can't find any evidence in the /var/log/celery-* log files, but it looks suspiciously like each Celery worker is randomly grabbing a task not assigned to its tenant, seeing that the vhost doesn't match, and dropping it. Since there are only three workers, there's a 1-in-3 chance the task will be grabbed by the correct one.
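As a sanity check on my own theory: the vhost a worker connects to is just the path component of its BROKER_URL, so I wrote a small helper (hypothetical URLs below) to confirm that the three settings files really do point at three distinct vhosts:

```python
from urllib.parse import urlparse

def broker_vhost(broker_url):
    """Return the vhost part of an AMQP BROKER_URL (the path after host:port)."""
    path = urlparse(broker_url).path
    # an empty path or a bare "/" means the default vhost "/"
    return path.lstrip("/") or "/"

# hypothetical per-tenant URLs matching my three settings files
urls = [
    "amqp://celery:secret@localhost:5672/tenant_a",
    "amqp://celery:secret@localhost:5672/tenant_b",
    "amqp://celery:secret@localhost:5672/tenant_c",
]
vhosts = [broker_vhost(u) for u in urls]
# if the workers were isolated correctly, all three vhosts would be distinct
```

All three come back distinct, which is what makes the cross-tenant behavior so confusing.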
How do I fix this? Why would a Celery worker grab tasks from a vhost other than the one in its own BROKER_URL?