We migrated three production projects from Celery to Django 6.0's new built-in Tasks framework in the last two months. Two went smoothly -- genuinely smooth, the kind where you finish early and feel suspicious. One was a disaster that cost us a weekend of debugging and a very awkward Slack message to a client explaining why their nightly report emails had stopped for 48 hours.

This is not the changelog blog post that tells you what the Tasks framework is and why it is exciting. You can read the Django docs for that. This is the post I wish I had found before we started migrating -- the one that tells you exactly what breaks, what surprises you, and when you should absolutely not rip out Celery.

If you are running Celery in production and wondering whether Django 6.0's Tasks framework means you can finally ditch the broker, the answer is: probably yes, but not the way you think.

What the Tasks Framework Actually Is (And What It Is Not)

I stared at the Django 6.0 release notes for a solid twenty minutes when they dropped, trying to figure out if this was actually what I thought it was. It felt too good to be true -- Django shipping a built-in async task system after years of the community asking for one.

Here is the core of it. You define a task:

# tasks.py
from django.contrib.auth.models import User  # or your custom user model
from django.tasks import task

@task()
def send_welcome_email(user_id: int):
    user = User.objects.get(id=user_id)
    # send_templated_email is your project's own email helper
    send_templated_email(
        to=user.email,
        template="welcome",
        context={"name": user.first_name},
    )

And you enqueue it:

# views.py
from .tasks import send_welcome_email

def register_view(request):
    user = User.objects.create_user(...)
    send_welcome_email.enqueue(user.id)  # Returns immediately
    return redirect("dashboard")

That is it. No broker configuration. No CELERY_BROKER_URL. No celery -A proj worker -l info in another terminal tab that you inevitably close by accident. (Most production backends still need a worker process -- more on that below -- but it is one manage.py command, not a separate ecosystem.)

But here is what it is not: It is not a drop-in Celery replacement. The Tasks framework is deliberately simple. Django's philosophy has always been "batteries included, but the right batteries." They built the 80% case and left escape hatches for the rest.

The key concepts:

  • @task() decorator -- marks a function as a task. It can still be called normally (synchronous), or enqueued for background execution.
  • enqueue() -- pushes the task to whatever backend you have configured. Returns a TaskResult you can check later.
  • Backends -- pluggable execution engines. Django ships with ImmediateBackend (runs synchronously, great for testing) and DummyBackend (discards, great for CI). Third-party backends provide the real workers.
  • TaskResult -- a handle to track status, retrieve return values, or check for errors.

The thing that tripped me up initially: there is no built-in production backend in Django itself. Django provides the framework and the API. You need a third-party backend for actual background execution. This is intentional -- the Django team did not want to ship and maintain a message broker or worker pool.

The Third-Party Backend Ecosystem

This is where things get interesting. Within weeks of the Django 6.0 release, the community shipped several backends:

| Backend | Broker | Best for | Maturity |
|---|---|---|---|
| django-tasks-database | PostgreSQL (your existing DB) | Small-to-medium projects, simple queues | Stable |
| django-tasks-threads | In-process thread pool | Dev/test, very light background work | Stable |
| django-tasks-celery | Redis/RabbitMQ (via Celery) | Projects already running Celery infrastructure | Beta |
| django-tasks-huey | Redis/SQLite (via Huey) | Lightweight Redis-based queuing | Beta |

The one most people actually want is django-tasks-database. It uses your existing PostgreSQL database as the task queue. Yes, I know -- using your database as a message broker sounds like a terrible idea, and five years ago it was. But PostgreSQL's SKIP LOCKED and LISTEN/NOTIFY have matured to the point where this is genuinely viable for most workloads.

# settings.py
INSTALLED_APPS = [
    # ...
    "django_tasks",
    "django_tasks_database",
]

TASKS = {
    "default": {
        "BACKEND": "django_tasks_database.DatabaseBackend",
        "QUEUES": ["default", "email", "reports"],
    }
}

Then you run the worker:

python manage.py db_worker --queue default --queue email

This is the "aha" moment. One manage.py command instead of a whole Celery ecosystem. No Redis. No RabbitMQ. No celerybeat. Just your Django project and your PostgreSQL database.

For the django-tasks-celery adapter -- this is the bridge for teams that cannot migrate overnight. It routes Django Tasks API calls to your existing Celery infrastructure. You get the new API without changing your deployment:

# settings.py -- gradual migration path
TASKS = {
    "default": {
        "BACKEND": "django_tasks_celery.CeleryBackend",
    }
}

The Migration Story: 47 Tasks, 3 Projects, 1 Weekend

Let me walk you through what actually happened when we migrated. Our three projects:

  1. Project A -- SaaS dashboard. 12 Celery tasks, mostly email and PDF generation. Standard stuff.
  2. Project B -- E-commerce platform. 8 Celery tasks, order processing and inventory sync.
  3. Project C -- Data pipeline. 47 Celery tasks, heavy use of chords, chains, periodic tasks, and retry logic.

Project A: The Easy Win

Twelve tasks, no chains, no chords, no periodic scheduling. The migration was close to a mechanical find-and-replace:

Before (Celery):

from celery import shared_task

@shared_task(bind=True, max_retries=3)
def generate_monthly_report(self, org_id: int, month: str):
    try:
        report = build_report(org_id, month)
        upload_to_s3(report)
        notify_admin(org_id, report.url)
    except S3Error as exc:
        raise self.retry(exc=exc, countdown=60)

After (Django Tasks):

from django.tasks import task

@task(queue="reports")
def generate_monthly_report(org_id: int, month: str):
    report = build_report(org_id, month)
    upload_to_s3(report)
    notify_admin(org_id, report.url)

Wait -- where did the retry logic go? I will get to that. This is one of the things that bit us.

Migration time: 4 hours, including testing. We removed Redis from the Docker Compose, dropped the Celery worker service, and saved about $15/month on the Redis instance. Not life-changing money, but simplifying the infrastructure felt great.

Project B: Minor Surprises

The e-commerce project was mostly straightforward, but we hit two issues:

Issue 1: Task serialization differences. Celery uses pickle or JSON serialization with its own type handling. Django Tasks uses JSON by default, and it is stricter about what you can pass. We had one task that was passing a Decimal object directly:

# This worked with Celery's pickle serializer
process_refund.delay(order_id=123, amount=Decimal("49.99"))

# Django Tasks with JSON backend -- Decimal is not JSON serializable
process_refund.enqueue(order_id=123, amount=Decimal("49.99"))  # TypeError!

The fix was simple -- pass strings and convert inside the task -- but it was the kind of thing that only shows up when you actually run the code with production data.
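The failure mode is plain JSON behavior, nothing Django-specific, so you can reproduce it -- and verify the fix -- with nothing but the standard library:

```python
import json
from decimal import Decimal

# Simulating what a JSON-based task backend does with your kwargs:
payload = {"order_id": 123, "amount": Decimal("49.99")}

try:
    json.dumps(payload)
except TypeError as exc:
    # Decimal is not JSON serializable -- this is the enqueue-time error
    print(f"enqueue would fail: {exc}")

# The fix we shipped: pass a string over the wire, convert inside the task
wire_payload = {"order_id": 123, "amount": str(payload["amount"])}
encoded = json.dumps(wire_payload)

# Inside the task body:
decoded = json.loads(encoded)
amount = Decimal(decoded["amount"])
assert amount == Decimal("49.99")
```

Passing strings loses nothing: `Decimal(str(d)) == d` holds exactly, which is precisely why strings (not floats) are the safe wire format for money.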

Issue 2: No countdown parameter. Celery lets you do task.apply_async(countdown=300) to delay execution by 5 minutes. Django Tasks does not have this out of the box. The django-tasks-database backend added an eta parameter, but it works slightly differently:

from datetime import timedelta
from django.utils import timezone

# Schedule for 5 minutes from now
send_shipping_notification.enqueue(
    order_id=123,
    eta=timezone.now() + timedelta(minutes=5)
)

Migration time: 6 hours. Still straightforward, but required more careful testing.

Project C: The Disaster

This is the one with 47 tasks. And this is where I learned that "migrate to Django Tasks" is not a one-size-fits-all proposition.

What broke:

  1. Chords and chains do not exist. Celery's chord(tasks, callback) and chain(task1, task2, task3) have no equivalent in Django Tasks. Our data pipeline used chords extensively -- "run these 8 data fetch tasks in parallel, then when all complete, run the aggregation task." We had to rewrite this logic manually using task status polling and a coordinator task.

  2. Periodic tasks need a different solution. Celery Beat handled our 15 periodic tasks (hourly syncs, daily reports, weekly cleanups). Django Tasks has no built-in scheduler. We ended up using django-tasks-scheduler (a third-party extension) but it was brand new and we hit two bugs in the first week.

  3. Priority queues work differently. We relied on Celery's priority routing to ensure payment processing tasks jumped ahead of report generation. The database backend supports basic queue separation but not intra-queue priority in the same way.

  4. The nightly report failure. This was the worst one. We had a task that used Celery's self.request.retries to implement exponential backoff. When we migrated it, we forgot to re-implement the retry logic entirely. The task would fail on a transient database timeout, and instead of retrying, it just... stopped. For 48 hours. On a Friday night. The client noticed before we did on Monday morning.
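The chord replacement in item 1 is easier to see in code. Here is a framework-free sketch of the fan-out/fan-in shape we rebuilt by hand (function names are hypothetical; in production the fan-in was a coordinator task that polled task statuses rather than a thread pool, but the control flow is the same):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_source(source_id: int) -> dict:
    # Stand-in for one of the 8 parallel data-fetch tasks
    return {"source": source_id, "rows": source_id * 10}

def aggregate(results: list[dict]) -> int:
    # Stand-in for the chord callback: must run only after ALL fetches finish
    return sum(r["rows"] for r in results)

def run_pipeline(source_ids: list[int]) -> int:
    # Fan out: run the fetches concurrently
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(fetch_source, sid) for sid in source_ids]
        # Fan in: block until every branch completes before aggregating
        results = [f.result() for f in futures]
    return aggregate(results)

print(run_pipeline([1, 2, 3]))  # 60
```

What Celery's chord gave us for free is the "wait for all, then call back" coordination; rebuilding it means owning the failure handling for every branch yourself.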

Migration time: 3 weeks (estimated 1 week). We ended up using the django-tasks-celery bridge for the complex pipeline tasks and only migrating the simple fire-and-forget tasks to the database backend.

When Django Tasks Replaces Celery (And When It Does Not)

After going through this three times, here is my honest decision framework:

Use Django Tasks (with database backend) when:

  • Your tasks are fire-and-forget (emails, notifications, webhooks)
  • You have fewer than ~50 tasks executing per minute
  • You do not need chords, chains, or complex workflows
  • You want to eliminate Redis/RabbitMQ from your stack
  • Your team is small and values simplicity over flexibility

Stay on Celery when:

  • You use chords, chains, or groups extensively
  • You process hundreds of tasks per minute (database-as-queue has limits)
  • You need priority queues with sophisticated routing
  • You have complex retry patterns with dead letter queues
  • Your Celery setup is already stable and well-understood

Use the Celery bridge when:

  • You want the new API but cannot change infrastructure
  • You are migrating gradually over months
  • Some tasks are simple (move to database backend) and some are complex (keep on Celery)

Here is the decision as a table, because tables are easier to argue about in code reviews:

| Factor | Django Tasks (DB) | Celery |
|---|---|---|
| Setup complexity | 5 minutes | 30-60 minutes |
| Infrastructure needed | PostgreSQL (you already have it) | Redis or RabbitMQ + worker process |
| Task throughput ceiling | ~50/min comfortably | Thousands/min |
| Chords/chains | Not built-in | First-class support |
| Periodic tasks | Third-party extension | Celery Beat (mature) |
| Retry with backoff | Manual (see below) | Built-in decorator |
| Monitoring | Django admin (basic) | Flower (excellent) |
| Learning curve | 15 minutes | 2-4 hours |
| Community maturity | Months old | 12+ years |

Production Patterns That Actually Work

Here are the patterns we settled on after the migration. These are not theoretical -- they are running in production right now.

Pattern 1: Retry with Exponential Backoff

Django Tasks does not have Celery's @shared_task(max_retries=3, default_retry_delay=60). You build it yourself:

import logging
from datetime import timedelta

from django.tasks import task
from django.utils import timezone
from sentry_sdk import capture_message  # or whatever error tracking you use

logger = logging.getLogger(__name__)

@task(queue="email")
def send_transactional_email(user_id: int, template: str, attempt: int = 0):
    max_attempts = 5
    if attempt >= max_attempts:
        # Log to error tracking, notify ops
        logger.error(f"Email task failed after {max_attempts} attempts: {user_id}")
        capture_message(f"Email delivery failed for user {user_id}")
        return

    try:
        user = User.objects.get(id=user_id)
        # deliver_email and SMTPError come from your mail layer
        deliver_email(user.email, template)
    except (SMTPError, ConnectionError) as exc:
        # Exponential backoff: 30s, 60s, 120s, 240s, 480s
        delay = timedelta(seconds=30 * (2 ** attempt))
        logger.warning(f"Email attempt {attempt + 1} failed, retrying in {delay}: {exc}")
        send_transactional_email.enqueue(
            user_id=user_id,
            template=template,
            attempt=attempt + 1,
            eta=timezone.now() + delay,
        )

Is this more code than Celery's decorator? Yes. Is it more explicit about what happens on failure? Also yes. I have grown to prefer this -- when a task's retry logic is spelled out, new team members understand it immediately instead of having to look up what default_retry_delay and retry_backoff do.
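A habit that would have caught our nightly-report failure: table-test the backoff schedule in isolation before wiring it into a task. The helper below is ours, not a framework API, and the arithmetic mirrors the comment in the task above:

```python
from datetime import timedelta

def backoff_delay(attempt: int, base_seconds: int = 30) -> timedelta:
    # attempt 0 -> 30s, 1 -> 60s, 2 -> 120s, 3 -> 240s, 4 -> 480s
    return timedelta(seconds=base_seconds * (2 ** attempt))

schedule = [int(backoff_delay(a).total_seconds()) for a in range(5)]
print(schedule)  # [30, 60, 120, 240, 480]
```

If many tasks can fail at once (a provider outage, say), you may also want to add random jitter to these delays so the retries do not all land at the same instant.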

Pattern 2: Transaction-Safe Enqueueing

This one burned us in a subtle way. If you enqueue a task inside a database transaction, and the transaction rolls back, the task still runs -- because the task queue is a separate system from your application database.

Unless you use the database backend. Then your task enqueue is part of the same database transaction:

from django.db import transaction

def create_order(request):
    with transaction.atomic():
        order = Order.objects.create(
            user=request.user,
            items=cart_items,
            total=calculate_total(cart_items),
        )
        # This enqueue is part of the same transaction!
        # If the transaction rolls back, the task is never enqueued.
        send_order_confirmation.enqueue(order_id=order.id)
        charge_payment.enqueue(order_id=order.id)

    return redirect("order-success", order_id=order.id)

This is genuinely better than Celery. With Celery + Redis, you either enqueue before the commit (risky -- task runs but DB change might not exist yet) or enqueue after the commit (using transaction.on_commit, which is fragile and easy to forget). With the database backend, it just works.
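The guarantee boils down to one fact: the order row and the task row live in the same database transaction. Here is a toy demonstration of the rollback behavior using sqlite3 (the table names are hypothetical -- a real backend manages its own schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);
    CREATE TABLE task_queue (id INTEGER PRIMARY KEY, payload TEXT);
""")

def create_order(total: float, fail: bool = False):
    try:
        with conn:  # one transaction: order row + task row commit together
            conn.execute("INSERT INTO orders (total) VALUES (?)", (total,))
            conn.execute("INSERT INTO task_queue (payload) VALUES (?)",
                         (f"confirm order {total}",))
            if fail:
                raise RuntimeError("payment validation blew up")
    except RuntimeError:
        pass  # the rollback removed BOTH rows -- no orphaned task

create_order(49.99)             # commits the order AND the enqueued task
create_order(10.00, fail=True)  # rolls back both

print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])      # 1
print(conn.execute("SELECT COUNT(*) FROM task_queue").fetchone()[0])  # 1
```

With a broker-backed queue there is no transaction spanning both systems, which is exactly why the before-commit/after-commit dilemma exists at all.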

Pattern 3: Graceful Degradation

What happens when your task worker goes down? With Celery, tasks pile up in Redis until memory runs out. With the database backend, tasks pile up in PostgreSQL, which handles disk-backed queuing gracefully:

from django.tasks import task

@task(queue="notifications")
def send_push_notification(user_id: int, message: str):
    try:
        deliver_push(user_id, message)
    except PushServiceUnavailable:
        # Notification service is down -- degrade gracefully
        # Fall back to email instead of retrying push indefinitely
        send_email_notification.enqueue(user_id=user_id, message=message)

Pattern 4: Testing Is Actually Pleasant

This is the sleeper hit of the Tasks framework. Testing Celery tasks always felt hacky -- you either mock the .delay() call or flip on CELERY_TASK_ALWAYS_EAGER (which hides async-specific bugs). Django Tasks has the ImmediateBackend that runs tasks synchronously in your tests:

# settings/test.py
TASKS = {
    "default": {
        "BACKEND": "django.tasks.backends.ImmediateBackend",
    }
}

# test_tasks.py
from django.core import mail
from django.test import TestCase, override_settings

from .tasks import send_order_confirmation

class OrderTasksTest(TestCase):
    def test_order_confirmation_sent(self):
        order = OrderFactory.create()
        # This runs synchronously in tests -- no mocking needed
        send_order_confirmation.enqueue(order_id=order.id)
        # Assert the email was actually sent
        self.assertEqual(len(mail.outbox), 1)
        self.assertIn(str(order.id), mail.outbox[0].body)

    @override_settings(TASKS={"default": {"BACKEND": "django.tasks.backends.DummyBackend"}})
    def test_view_enqueues_task(self):
        # DummyBackend discards -- just verify the view does not crash
        response = self.client.post("/orders/create/", data={...})
        self.assertEqual(response.status_code, 302)

Compare this to the Celery testing dance of @patch('myapp.tasks.send_email.delay') everywhere. Night and day.
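For contrast, here is the Celery-era patch dance distilled to its essence (all names hypothetical): every test that touches a view has to know exactly which task attribute to stub, and forgetting one means a test that hits the real broker.

```python
from unittest import mock

class SendEmailTask:
    """Stand-in for a Celery task object exposing .delay()."""
    @staticmethod
    def delay(user_id: int):
        raise RuntimeError("would hit the real broker in tests")

def register_user(user_id: int, email_task=SendEmailTask):
    # View-ish code under test: enqueues via .delay()
    email_task.delay(user_id)

# The dance: stub out .delay so nothing reaches the broker,
# then assert on the stub instead of on an observable effect.
with mock.patch.object(SendEmailTask, "delay") as fake_delay:
    register_user(42)
    fake_delay.assert_called_once_with(42)
print("patched test passed")
```

The ImmediateBackend approach inverts this: you assert on the real effect (mail.outbox) instead of on the plumbing.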

The Verdict After Three Months

Three months into running Django Tasks in production, here is where we landed:

Project A (simple SaaS): Running entirely on django-tasks-database. Zero issues. Removed Redis completely. The junior developer who joined last month set up a new task in 10 minutes without reading any documentation beyond the Django docs. That last point is the biggest win -- onboarding. Celery's learning curve is real. "What is a broker? Why do I need Redis? What is this celery -A command?" None of that exists anymore.

Project B (e-commerce): Running on django-tasks-database for most tasks, with a manual coordinator pattern replacing the one place we used a Celery chain. Working well.

Project C (data pipeline): Running a hybrid -- simple tasks on the database backend, complex pipeline tasks still on Celery via the bridge adapter. This was the right call. We will probably migrate the pipeline tasks once the third-party ecosystem matures, but there is no rush.

Infrastructure savings across all three: We removed Redis from two projects. One fewer thing to monitor, patch, and pay for. The cost savings were modest (~$30/month combined), but the operational simplicity gain was significant.

My prediction: Within two years, most new Django projects will use the Tasks framework instead of Celery. Celery will remain important for high-throughput and complex workflow use cases, but for the vast majority of "run this thing in the background" needs, Django Tasks with the database backend is simply better. It is simpler, it is transactionally safe, and it is one less thing to explain to new developers.

The Django team got this right. They did not try to replace Celery -- they built the thing that 80% of projects actually needed.


This is the first post in our Django in 2026 series. Next up: Django + HTMX -- building interactive apps without the JavaScript bloat.


Running a Django project and wondering whether the Tasks framework makes sense for your workload? At CODERCOPS we have been migrating production Django applications for years -- we know which patterns survive contact with real traffic. Get in touch for an honest assessment of your stack, or check out our other engineering deep dives for more battle-tested advice.
