I recently saw a benchmark comparing requests per second between Node.js and FastAPI in two different scenarios.
The first scenario is simpler and more synthetic. Node.js can handle 20x more requests, with better latency and lower CPU load.
The second scenario uses a DB and memcached. Node.js still comes out on top with 4x more requests and, again, better latencies and lower CPU load.
What I’m interested in knowing is: why is that so?
I’ve read that it’s because of the JavaScript JIT. But Python now also has a JIT.
Another argument I’ve seen is that “it’s because of the JS event loop, which is very optimized”. But in both Node.js and asyncio the executed code is single-threaded, with concurrency handled by the event loop.
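To make concrete what I mean by that, here’s a minimal asyncio sketch (my own, not related to the benchmark code) where 100 simulated requests overlap on a single thread:

```python
import asyncio
import threading

async def handle(i: int) -> str:
    # Simulate a non-blocking I/O wait (e.g. a DB or memcached call).
    # While this coroutine awaits, the event loop runs the others.
    await asyncio.sleep(0.1)
    return f"request {i} served on thread {threading.current_thread().name}"

async def main() -> None:
    # 100 concurrent "requests" finish in roughly 0.1s total, all on one
    # thread: the concurrency comes from the event loop, not extra threads.
    results = await asyncio.gather(*(handle(i) for i in range(100)))
    print(results[0])
    print(f"handled {len(results)} requests")

asyncio.run(main())
```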
Does it really boil down to a faster JIT and event loop? Or is there something related to the GIL or multiprocessing too?
The benchmark was done with Python 3.13, running gunicorn with uvicorn workers. For the details of the benchmark see https://github.com/antonputra/tutorials/tree/236/lessons/236
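For reference, this is roughly the shape of setup I’m describing: a trivial FastAPI app run under gunicorn with uvicorn workers. It’s just my own illustration of that stack, not the benchmark’s actual code (that’s in the repo above):

```python
# app.py -- my own illustrative sketch, not the benchmark's code
from fastapi import FastAPI

app = FastAPI()

@app.get("/healthz")
async def healthz() -> dict:
    # A trivial endpoint, roughly matching the simple/synthetic scenario.
    return {"status": "ok"}

# Run with gunicorn managing several uvicorn worker processes,
# one event loop per process (the benchmark's exact flags may differ):
#   gunicorn -k uvicorn.workers.UvicornWorker -w 4 app:app
```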
Thanks
submitted by /u/hideo_kuze_