"Is async Django production-ready?"

I get asked this question at least twice a month. The answer in 2026 is: mostly yes, but there are landmines, and you need to know exactly where they are before you step on one.

We have been running async Django views in production since Django 4.1. We have four projects currently using async views, and the results range from "3x throughput improvement" to "literally made things worse." The difference between these outcomes is not skill or code quality -- it is whether async was the right tool for that specific workload.

This post is the honest assessment I wish someone had written for me two years ago. Not the "async is the future!" hype piece, and not the "async Python is broken" hot take. Just the measured reality of what works, what does not, and how to decide for your project.

The Async Django Timeline

If you have been following async Django's development, you know it has been a long road. Here is where things stand as of Django 6.0:

| Django version | Year | Async addition |
| --- | --- | --- |
| 3.0 | 2019 | ASGI support |
| 3.1 | 2020 | Async views (basic), async middleware, async tests |
| 4.0 | 2021 | sync_to_async / async_to_sync utilities |
| 4.1 | 2022 | Async ORM interface (QuerySet methods) |
| 4.2 | 2023 | Async signals, async cache framework |
| 5.0 | 2023 | Async form validation, async template rendering (partial) |
| 5.1 | 2024 | Async file handling, more ORM async methods |
| 6.0 | 2025 | Full async ORM (including aggregates, raw queries), async auth backends, async Tasks framework |

The story of async Django is incremental progress over six years. Django 6.0 is the first version where you can write a fully async request cycle -- from middleware through view to ORM to response -- without hitting a synchronous bottleneck.

But "you can" is different from "you should." Let me show you when it actually matters.

What Actually Works Well

Async Views for I/O-Bound Work

The canonical async win: your view calls multiple external services, and you want them to run concurrently instead of sequentially.

Synchronous version (sequential):

# Total time: ~900ms (300ms + 300ms + 300ms)
def dashboard_view(request):
    user_stats = fetch_analytics_api(request.user.id)      # ~300ms
    notifications = fetch_notification_service(request.user) # ~300ms
    weather = fetch_weather_api(request.user.city)           # ~300ms

    return render(request, "dashboard.html", {
        "stats": user_stats,
        "notifications": notifications,
        "weather": weather,
    })

Async version (concurrent):

# Total time: ~320ms (all three run concurrently)
import asyncio

async def dashboard_view(request):
    # request.user does a sync DB lookup under the hood;
    # async views must use await request.auser() instead
    user = await request.auser()
    user_stats, notifications, weather = await asyncio.gather(
        fetch_analytics_api(user.id),
        fetch_notification_service(user),
        fetch_weather_api(user.city),
    )

    return render(request, "dashboard.html", {
        "stats": user_stats,
        "notifications": notifications,
        "weather": weather,
    })

From ~900ms to ~320ms with almost no code change. This is the async sweet spot, and it is real. If your view makes three or more external API calls that can run in parallel, async views will measurably improve response times.

Async ORM with asyncio.gather

Django 6.0's async ORM finally makes this pattern viable with database queries too:

async def alist(qs):
    # Materialize a queryset via async iteration
    # (querysets have aget/acount/afirst, but no .alist() method)
    return [obj async for obj in qs]

async def project_detail(request, project_id):
    # These three queries are dispatched concurrently
    project, recent_activity, team_members = await asyncio.gather(
        Project.objects.select_related("client").aget(id=project_id),
        alist(Activity.objects.filter(project_id=project_id).order_by("-created_at")[:10]),
        alist(TeamMember.objects.filter(project_id=project_id).select_related("user")),
    )

    return render(request, "projects/detail.html", {
        "project": project,
        "activity": recent_activity,
        "team": team_members,
    })

Important caveat: these queries still run on the same database connection pool. You are not magically getting more database connections. What you are getting is better utilization of your Python process -- while one query is waiting for PostgreSQL to respond, another query can be sent. This matters most when your database is on a separate server with non-trivial network latency.
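If connection pressure is the concern rather than latency, note that Django 5.1+ can pool psycopg connections natively. A settings sketch -- the pool sizes are illustrative, and it assumes psycopg 3 installed with the pool extra ("psycopg[pool]"):

```python
# settings.py -- built-in connection pooling (Django 5.1+, psycopg 3)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "myapp",
        "OPTIONS": {
            # Passed through to psycopg_pool.ConnectionPool;
            # sizes here are illustrative, not a recommendation
            "pool": {"min_size": 2, "max_size": 10},
        },
    }
}
```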

Async Middleware

Async middleware is straightforward and works as expected:

import time

from asgiref.sync import iscoroutinefunction, markcoroutinefunction

class TimingMiddleware:
    async_capable = True
    sync_capable = True

    def __init__(self, get_response):
        self.get_response = get_response
        # Mark this instance as a coroutine function when the chain is async
        if iscoroutinefunction(self.get_response):
            markcoroutinefunction(self)

    def __call__(self, request):
        # Django always calls this; dispatch to the async path when needed
        if iscoroutinefunction(self):
            return self.__acall__(request)
        start = time.monotonic()
        response = self.get_response(request)
        response["X-Request-Duration"] = f"{time.monotonic() - start:.3f}s"
        return response

    async def __acall__(self, request):
        start = time.monotonic()
        response = await self.get_response(request)
        response["X-Request-Duration"] = f"{time.monotonic() - start:.3f}s"
        return response

Django decides which chain to build from the async_capable / sync_capable flags, and markcoroutinefunction tells it this instance is awaitable when the chain is async. Note that __acall__ is not a magic method Django invokes for you -- the explicit dispatch inside __call__ is what lets both paths coexist.

What Does Not Work (Or Makes Things Worse)

CPU-Bound Work

This sounds obvious, but it catches people constantly. Async does not make computation faster. It makes waiting faster. If your view is slow because it is crunching numbers, async will not help:

# This is NOT faster as an async view
async def generate_report(request):
    data = [row async for row in Report.objects.filter(...)]
    # This CPU-bound work blocks the event loop:
    # no other requests can be processed while it runs
    report = compute_complex_aggregations(data)  # 2 seconds of pure CPU
    pdf = render_pdf(report)  # 500ms of CPU
    return FileResponse(pdf)

In fact, this is worse as an async view because you are blocking the event loop. With sync views and gunicorn workers, at least other workers can handle requests while one is busy computing. With an async server and a blocked event loop, everything stalls.

The fix: use the Tasks framework for CPU-heavy work, or run it in a thread pool:

async def generate_report(request):
    data = [row async for row in Report.objects.filter(...)]
    # Run CPU work in a thread pool so we do not block the event loop
    report = await asyncio.to_thread(compute_complex_aggregations, data)
    pdf = await asyncio.to_thread(render_pdf, report)
    return FileResponse(pdf)

The sync_to_async Trap

This is the landmine that cost us two days of debugging on a real project. Here is the scenario:

You have a sync third-party library that you need to call from an async view. Django provides sync_to_async for this:

from asgiref.sync import sync_to_async

async def process_payment(request):
    # stripe-python is synchronous
    charge = await sync_to_async(stripe.Charge.create)(
        amount=1000,
        currency="usd",
        source=request.POST["token"],
    )
    return JsonResponse({"charge_id": charge.id})

Looks fine, right? It works. But here is what sync_to_async actually does: it runs the sync function in a worker thread. By default (thread_sensitive=True), every call is queued onto a single shared thread, so 100 concurrent requests each calling sync_to_async serialize behind one another instead of overlapping. With thread_sensitive=False, each call borrows a thread from a pool, which restores concurrency but adds per-call thread and context-switching overhead on top of your event loop.

We had an async view that called three different sync libraries via sync_to_async. Under load testing, this view was 3x slower than the equivalent sync view. It took us two days to figure out why: the thread pool overhead plus the context switching between the event loop and the thread pool was more expensive than just running everything synchronously.

The rule of thumb: if most of your view's work is calling sync libraries via sync_to_async, you are paying the thread overhead without getting the concurrency benefit. Just use a sync view.

Sync Third-Party Libraries

As of 2026, many popular Django packages still have sync-only code paths:

  • django-allauth -- auth flows are synchronous
  • django-filter -- queryset filtering is synchronous (but uses the ORM, which has async methods)
  • Older versions of django-rest-framework -- serializers are sync (DRF 4.x has async support)
  • Most payment SDKs -- Stripe, PayPal, etc. are sync Python libraries

When you call a sync library directly in an async view, it blocks the event loop for the duration of the call -- Django does not wrap arbitrary library calls for you. You have to wrap them in sync_to_async (or asyncio.to_thread) yourself, which works but adds the thread overhead mentioned above. The practical impact depends on how many sync calls you make per request.

ASGI vs. WSGI: Do Not Feel Bad About Staying on WSGI

One of the biggest misconceptions I see: "If I want async views, I need to switch to ASGI."

Technically true. But the decision is more nuanced than that.

| Aspect | WSGI (gunicorn) | ASGI (uvicorn/daphne) |
| --- | --- | --- |
| Async views | No | Yes |
| Sync views | Yes (native) | Yes (via thread pool) |
| Maturity | Battle-tested, 15+ years | Production-ready, but younger |
| Worker model | Pre-fork (simple, predictable) | Event loop (higher ceiling, more complexity) |
| Memory usage | Predictable per-worker | Lower baseline, but can spike with many connections |
| Deployment | gunicorn myapp.wsgi | uvicorn myapp.asgi:application |
| WebSocket support | No | Yes |
| SSE support | Hacky | Native |
| Monitoring | Mature tooling (New Relic, Datadog) | Good but slightly less mature |
| Failure mode | Worker crashes, others continue | Event loop stall affects all connections |

My honest recommendation: if you are not using async views, WebSockets, or SSE, stay on WSGI. Gunicorn with sync workers is simpler to operate, simpler to debug, and has failure modes that are easier to reason about. A gunicorn worker that crashes takes down one request. An event loop stall in uvicorn affects every connection on that worker.

If you are using async views for the I/O-concurrency benefit, then yes, switch to ASGI. But do not switch just because it feels more modern.

# WSGI -- still perfectly fine for most Django projects
gunicorn myproject.wsgi:application \
    --workers 4 \
    --threads 2 \
    --timeout 30

# ASGI -- when you actually need async
uvicorn myproject.asgi:application \
    --workers 4 \
    --loop uvloop \
    --timeout-keep-alive 30
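For completeness, the module uvicorn imports above is the stock asgi.py that startproject generates -- shown here assuming a project named myproject, matching the commands:

```python
# myproject/asgi.py -- the module uvicorn loads as myproject.asgi:application
import os

from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

# Standard Django ASGI entrypoint; wrap this callable if you later
# add Channels routing or lifespan handling
application = get_asgi_application()
```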

Real Benchmarks From Real Projects

Theoretical benchmarks are useless. Here are results from four actual projects we run, measured over a week of production traffic.

Project 1: API Gateway (Async Win)

An API aggregation service that fetches data from 3-5 upstream APIs per request and combines the results.

| Metric | Sync (gunicorn) | Async (uvicorn) |
| --- | --- | --- |
| p50 response time | 890ms | 310ms |
| p99 response time | 2,400ms | 820ms |
| Requests/sec (4 workers) | 45 | 142 |
| CPU usage | 35% | 28% |
| Memory usage | 480MB | 320MB |

Verdict: Async is the clear winner. The view does almost nothing but wait for external APIs. asyncio.gather turns sequential waiting into parallel waiting. This is the textbook async use case.

Project 2: CRUD Application (No Difference)

A standard admin dashboard -- forms, lists, detail views. One database query per view on average.

| Metric | Sync (gunicorn) | Async (uvicorn) |
| --- | --- | --- |
| p50 response time | 45ms | 48ms |
| p99 response time | 120ms | 125ms |
| Requests/sec (4 workers) | 580 | 565 |
| CPU usage | 22% | 24% |
| Memory usage | 390MB | 370MB |

Verdict: Zero meaningful difference. When each request makes one database query that takes 5-15ms, there is nothing to parallelize. The async overhead (event loop scheduling, async context managers) adds a few milliseconds. Not worth switching.

Project 3: Dashboard with External Integrations (Justified)

A project management dashboard that pulls data from GitHub, Jira, and Slack APIs alongside database queries.

| Metric | Sync (gunicorn) | Async (uvicorn) |
| --- | --- | --- |
| p50 response time | 1,200ms | 450ms |
| p99 response time | 3,800ms | 1,100ms |
| Requests/sec (4 workers) | 28 | 85 |

Verdict: Async justified. Multiple external API calls per page load. The 2.7x improvement in p50 response time directly improves user experience.

Project 4: ML Inference API (Async Made Things Worse)

A Django API that loads a PyTorch model and runs inference. We thought async would help because we could process multiple inference requests concurrently.

| Metric | Sync (gunicorn) | Async (uvicorn) |
| --- | --- | --- |
| p50 response time | 180ms | 240ms |
| p99 response time | 350ms | 890ms |
| Requests/sec (4 workers) | 120 | 85 |

Verdict: Async made things worse. Model inference is CPU-bound. Running it in an async view either blocks the event loop or requires sync_to_async/asyncio.to_thread, both of which add overhead. The sync worker model (one request per worker, OS handles scheduling) is simply better for CPU-bound work.

We ended up keeping this as a sync Django view and addressing throughput with more gunicorn workers. Simple, boring, effective. (For more on this pattern, see our post on Django as your AI backend.)

The Decision Framework

After four projects and a lot of benchmarking, here is our rule of thumb:

Count the external I/O calls per request.

  • 0-1 external calls: Stay synchronous. There is nothing to parallelize.
  • 2-3 external calls: Async is worth considering if those calls are slow (>100ms each).
  • 4+ external calls: Async will almost certainly help. The improvement scales with the number of concurrent waits.

"External I/O" means anything that leaves your process and waits for a response: HTTP API calls, database queries to a remote server, cache lookups to a remote Redis, file system reads on networked storage.

A database query to PostgreSQL on the same machine? That is technically I/O, but it completes in 1-5ms. Parallelizing ten 3ms queries saves you 27ms. Not nothing, but probably not worth the added complexity.

A database query to PostgreSQL on a separate server with 10ms network latency? Now each query takes 15-25ms, and parallelizing ten of them saves you 135-225ms. That is noticeable.
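The arithmetic above generalizes into a back-of-envelope model. latency_estimate is a hypothetical helper (not part of any library), and the 5ms async overhead is an assumption for illustration, not a measurement:

```python
def latency_estimate(call_times_ms, async_overhead_ms=5):
    """Sequential vs concurrent latency for independent external I/O.

    Assumes the calls are fully parallelizable; ignores connection
    limits and server-side contention. Overhead is an assumed figure.
    """
    sequential = sum(call_times_ms)
    concurrent = max(call_times_ms) + async_overhead_ms
    return sequential, concurrent, sequential - concurrent

# Ten 3ms local queries: parallelism saves little
print(latency_estimate([3] * 10))   # (30, 8, 22)
# Ten 15ms remote queries: the saving becomes noticeable
print(latency_estimate([15] * 10))  # (150, 20, 130)
```

Run it against your own per-request call profile before committing to a migration -- if the "savings" column is under ~50ms, the operational complexity of ASGI probably is not paying for itself.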

The bottom line: async Django in 2026 is a precision tool, not a default choice. Use it where the workload justifies it. For everything else, synchronous Django with gunicorn remains the right answer. There is no shame in that -- it is the architecture that powers most of Django's success stories, and it will continue to do so.


This is the third post in our Django in 2026 series. Previously: Django + HTMX. Next up: Django as your AI backend.


Trying to figure out whether async Django makes sense for your workload? At CODERCOPS we have benchmarked Django applications across every deployment pattern -- WSGI, ASGI, hybrid, and everything in between. Let us know what you are building and we will give you a straight answer, or explore more of our engineering deep dives for production-tested Django patterns.
