
arq

Package: `z4j-arq` (supports arq 0.25+).

```shell
pip install z4j-arq
```

arq doesn’t expose signals, so z4j instead uses the worker’s `on_job_start` / `on_job_end` hooks and wraps the `ArqRedis.enqueue_job` method.

| Source | z4j event |
| --- | --- |
| `ArqRedis.enqueue_job` | `task_sent` |
| `on_job_start` hook | `task_started` |
| `on_job_end` (success=True) | `task_succeeded` |
| `on_job_end` (success=False) | `task_failed` |
| retries (tried > 1) | `task_retry` (synthetic) |
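The enqueue-side wrapping can be sketched roughly as follows. This is an illustrative assumption about the mechanism, not z4j’s actual internals; the `emit` callback stands in for whatever event sink z4j uses:

```python
import functools


def instrument_enqueue(arq_redis, emit):
    """Sketch: wrap ArqRedis.enqueue_job so every successful enqueue
    also emits a task_sent event via the `emit` callback (hypothetical)."""
    original = arq_redis.enqueue_job

    @functools.wraps(original)
    async def enqueue_job(function, *args, **kwargs):
        job = await original(function, *args, **kwargs)
        # arq returns None when a job with the same id already exists,
        # so only emit for enqueues that actually happened.
        if job is not None:
            emit({"event": "task_sent", "task": function, "job_id": job.job_id})
        return job

    arq_redis.enqueue_job = enqueue_job
    return arq_redis
```

The wrapper preserves `enqueue_job`’s return value, so caller code that inspects the returned `Job` keeps working.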

arq’s configuration is a `WorkerSettings` class; z4j injects its hooks into it:

```python
from z4j_arq import z4j_worker_settings

class WorkerSettings(z4j_worker_settings(MyBaseSettings)):
    functions = [my_task, ...]
    # z4j's on_startup / on_shutdown / on_job_start / on_job_end are merged
```

Or wire the hooks up manually:

```python
from z4j_arq import on_startup, on_shutdown, on_job_start, on_job_end

class WorkerSettings:
    functions = [...]
    on_startup = on_startup
    on_shutdown = on_shutdown
    on_job_start = on_job_start
    on_job_end = on_job_end
```
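If your `WorkerSettings` already defines its own hooks, each arq hook slot holds a single coroutine, so you need to run yours and z4j’s in sequence. A small generic helper (my sketch, not a z4j API) does this:

```python
def chain_hooks(*hooks):
    """Combine several arq-style async hooks (each taking the worker ctx)
    into one coroutine that runs them in order. Generic helper, not part
    of z4j; useful when you already have your own on_job_start/on_job_end."""
    async def combined(ctx):
        for hook in hooks:
            await hook(ctx)
    return combined
```

Usage would look like `on_job_start = chain_hooks(my_on_job_start, on_job_start)`, keeping your hook and z4j’s hook both active.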
| Verb | How |
| --- | --- |
| retry | polyfill: `arq_redis.enqueue_job(function_name, *args)` with the original payload |
| cancel | `arq_redis.abort_job(job_id)`; works before the job starts, mid-run it relies on your function cooperating with cancellation |
| purge_queue | `arq_redis.flushdb()` (scoped to the arq key prefix) |
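The retry polyfill amounts to re-enqueueing the original call. A minimal sketch, assuming the captured event carries the function name and arguments (the field names `"function"`, `"args"`, and `"kwargs"` here are assumptions about the payload shape, not z4j’s actual schema):

```python
async def retry_via_enqueue(arq_redis, event):
    """Sketch of the retry polyfill: re-enqueue the job with the payload
    that was captured when it was first sent. Field names are illustrative."""
    return await arq_redis.enqueue_job(
        event["function"], *event["args"], **event["kwargs"]
    )
```

Note this creates a fresh job rather than resuming the old one, which is why it is labeled a polyfill rather than a native retry.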

arq’s cron jobs are defined in `WorkerSettings.cron_jobs`. As with Huey, discovery is code-only and the schedule is read-only in v1.0. See scheduler: arq-cron.

Limitations:

- No chord/group primitives.
- Max job size is limited by the Redis payload; large pickled args may be truncated in events (see redaction).
- Worker pool size appears as `metadata.max_jobs` in the agent drawer.
Connection configuration is environment-based (e.g. in a FastAPI app’s settings):

```python
# env-based configuration (e.g. FastAPI settings)
Z4J_ARQ_REDIS = "redis://localhost:6379/0"
```

Or pass the `ArqRedis` instance explicitly via the adapter’s `redis=` kwarg if you need it.
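The precedence you would expect is: an explicitly passed value wins, otherwise the environment variable applies. A tiny sketch of that resolution (the `resolve_redis_dsn` helper is illustrative; only the `Z4J_ARQ_REDIS` variable name comes from above):

```python
import os


def resolve_redis_dsn(explicit=None):
    """Illustrative precedence: an explicitly passed DSN wins,
    else fall back to the Z4J_ARQ_REDIS environment variable."""
    dsn = explicit or os.environ.get("Z4J_ARQ_REDIS")
    if dsn is None:
        raise RuntimeError("set Z4J_ARQ_REDIS or pass redis= explicitly")
    return dsn
```

In practice the adapter would accept either a DSN string or an already-constructed `ArqRedis` pool, but the explicit-over-environment ordering is the part worth noting.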