nov 30

accept and reject code part 1

lesson 1 for coders getting better at accepting & rejecting code from AI models: models love to blurt out "async" functions all the time. you look at one, think "well yeah, it's supposed to be non-blocking", and accept it. here's why that's often wrong: the functions called *inside* the async function are synchronous. the whole point of non-blocking code is to keep the event loop free, but that's not happening when every operation inside the apparently-async function blocks the loop for its entire duration.

what do you do then? make the inner functions asynchronous? that's not something you can do all the time. imagine a function like:

```python
async def foo():
    data = blocking_http_get(...)
    parsed = heavy_numpy_stuff(data)
    blocking_db_write(parsed)
```

all three calls are non-yielding. the event loop is stuck inside them until they finish, practically making the "async" useless.

that's exactly why `run_in_executor` exists. it shoves the blocking work onto a thread pool so your event loop stays free, i.e. "run this blocking function in another thread and tell me when it's done."

so when can you truly use async? when the operations inside the async function are also async: an aiohttp `session.get(...)`, `supabase.query(...)`, `websocket.recv()`, `asyncio.sleep()`, etc. for everything else, heavy CPU work or libraries with no async version, you must wrap the call:

```python
loop = asyncio.get_running_loop()
result = await loop.run_in_executor(None, my_blocking_func)
```

but don't misunderstand `run_in_executor` as making python parallel. you are just preventing the event loop from blocking. true parallelism can only be achieved by multiprocessing, offloading to rust/c++ kernels, or pushing work to a remote queue such as celery.

let's get better at accepting code :)
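to see the difference concretely, here is a minimal runnable sketch (the names `blocking_work` and `heartbeat` are mine, not from any real library): the blocking call goes through `run_in_executor`, so a concurrent coroutine keeps making progress while it runs. had `blocking_work` been called directly inside a coroutine, the heartbeat would have stalled and the whole thing would take the sum of both durations instead of the max.

```python
import asyncio
import time

def blocking_work(n):
    # stand-in for a sync call like blocking_http_get: it holds its
    # thread for 0.2s and then returns a value
    time.sleep(0.2)
    return n * n

async def heartbeat(ticks):
    # five quick awaits; these only make progress while the event
    # loop is free
    for _ in range(5):
        await asyncio.sleep(0.05)
        ticks.append(time.monotonic())

async def main():
    loop = asyncio.get_running_loop()
    ticks = []
    start = time.monotonic()
    # the blocking call runs in the default thread pool, so the
    # heartbeat keeps ticking concurrently instead of waiting 0.2s
    result, _ = await asyncio.gather(
        loop.run_in_executor(None, blocking_work, 4),
        heartbeat(ticks),
    )
    return result, len(ticks), time.monotonic() - start

result, n_ticks, elapsed = asyncio.run(main())
```

running sequentially would take roughly 0.2s + 0.25s; overlapped, the total is about max(0.2s, 0.25s).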
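and a sketch of the happy path, where everything inside the async function actually yields. here `asyncio.sleep` stands in for a real awaitable like an aiohttp request (`fake_fetch` and the example urls are made up); the three fetches overlap instead of running back to back:

```python
import asyncio
import time

async def fake_fetch(url, delay=0.1):
    # asyncio.sleep yields control back to the event loop, just like
    # a real async http request would while waiting on the network
    await asyncio.sleep(delay)
    return f"body of {url}"

async def main():
    start = time.monotonic()
    # three 0.1s waits that overlap: total time is ~0.1s, not ~0.3s
    bodies = await asyncio.gather(
        fake_fetch("https://a.example"),
        fake_fetch("https://b.example"),
        fake_fetch("https://c.example"),
    )
    return bodies, time.monotonic() - start

bodies, elapsed = asyncio.run(main())
```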
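finally, a sketch of the multiprocessing route for true parallelism, using the stdlib `ProcessPoolExecutor` (`cpu_heavy` is a made-up stand-in for real CPU-bound work, and the "fork" start method assumed here is POSIX-only; on windows you'd use "spawn" plus a `__main__` guard):

```python
import asyncio
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def cpu_heavy(n):
    # pure-CPU loop: a thread pool would not speed this up because
    # of the GIL, but separate processes can run it on separate cores
    total = 0
    for i in range(n):
        total += i * i
    return total

async def main():
    loop = asyncio.get_running_loop()
    # "fork" keeps this example guard-free; POSIX-only assumption
    ctx = multiprocessing.get_context("fork")
    # same run_in_executor call as before, but with a process pool
    # instead of None, so the two jobs run in parallel
    with ProcessPoolExecutor(mp_context=ctx) as pool:
        return await asyncio.gather(
            loop.run_in_executor(pool, cpu_heavy, 10_000),
            loop.run_in_executor(pool, cpu_heavy, 20_000),
        )

results = asyncio.run(main())
```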