With the impending release of Django 3.1, we have access to async web views in a widely deployed web framework with a very rich ecosystem.
What kinds of optimizations can we eke out of the system?
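For reference, an async view in Django 3.1 is just a view declared with async def; Django detects the coroutine and runs it in an async context. A minimal sketch (the view name and response text are mine, not from the release notes):

from django.http import HttpResponse

# Django 3.1+ detects the `async def` and runs this view
# on the event loop instead of as a plain synchronous call.
async def hello(request):
    return HttpResponse("hello from an async view")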
Here’s a common pattern that I think allows a really easy speedup: make a fallible network request, and if it fails, make another request to a fallback.
You may have seen this before, such as when you have a cache in front of a database.
def main():
    result, is_cached = get_from_cache()
    if not is_cached:
        result = get_from_db()
    return result
But there’s a problem: this code doesn’t start the database request until the cache request has returned, whatever the result. With a bad caching strategy, a cache miss can itself be a long, expensive wait, so the worst case costs the full cache lookup plus the full database round trip instead of roughly the slower of the two.
Here’s what’s now possible.
from asyncio import CancelledError, create_task, sleep
from random import SystemRandom

from asgiref.sync import async_to_sync
@async_to_sync
async def main():
    # Call two functions at the same time
    from_cache = create_task(get_from_cache())
    from_db = create_task(get_from_db())

    # Read the first result
    result, is_cached = await from_cache
    if not is_cached:
        # Cache miss: the db query is already in flight, just await it
        result = await from_db
        return result

    # Cache was successful, stop the db query
    from_db.cancel()
    try:
        # not using the db response, even if it has finished
        _ = await from_db
        print("db response completed. do you even need a cache?")
    except CancelledError:
        print("db query is cancelled")
    return result
async def get_from_cache():
    if random_bool():
        await sleep(1)
    # Randomly simulate a cache hit or a miss
    cache_hit = random_bool()
    return "from_cache", cache_hit
async def get_from_db():
    if random_bool():
        await sleep(2)
    return "from_db"


def random_bool():
    return bool(SystemRandom().getrandbits(1))
if __name__ == '__main__':
    # `main()` can be called synchronously
    # because of the magic that is `async_to_sync`
    print(main())
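In an actual Django 3.1 async view you don’t need async_to_sync at all, because the view itself already runs inside the event loop. Here’s a hypothetical sketch reusing the get_from_cache and get_from_db coroutines from above (the view name and JSON shape are my own, not part of any API):

from asyncio import CancelledError, create_task

from django.http import JsonResponse


async def item_view(request):
    # Same race as in main(), minus the async_to_sync wrapper.
    # get_from_cache / get_from_db are the demo coroutines defined above.
    from_cache = create_task(get_from_cache())
    from_db = create_task(get_from_db())

    result, is_cached = await from_cache
    if not is_cached:
        result = await from_db
    else:
        from_db.cancel()
        try:
            await from_db
        except CancelledError:
            pass

    return JsonResponse({"result": result, "cached": is_cached})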
Okay, this entire entry was so that I could play with async_to_sync.