diff --git a/README.md b/README.md
index 70b44cc..a8ebe86 100644
--- a/README.md
+++ b/README.md
@@ -55,7 +55,7 @@ Tested with oha -c 40 -z 5s http://localhost:8000/ (1 run for warmup and 3 runs
 Source code in [bench](https://github.com/cirospaciari/socketify.py/tree/main/bench)
 Machine OS: Debian GNU/Linux bookworm/sid x86_64 Kernel: 6.0.0-2-amd64 CPU: Intel i7-7700HQ (8) @ 3.800GHz Memory: 32066MiB
 
-> Today socketify have about 30% performance hit due to workarounds between asyncio + libuv, so we will got even faster! See more info in [this issue](https://github.com/cirospaciari/socketify.py/issues/18), in PyPy3 we can get about 186,760 req/s and 77,774 req/s in Python3 without these workarounds and almost double of the scores with PyPy3 in TechEmPower plaintext, Python3 and PyPy3 performance will improve when we migrate to [HPy](https://github.com/cirospaciari/socketify.py/issues/16). In TechEmPower benchmarks we are faster than japronto in plaintext (about 1,300k req/s using PyPy3 without workaround and about 770k req/s with the current state vs 582k from japronto you can follow details in [this discussion](https://github.com/cirospaciari/socketify.py/discussions/10)
+> Today socketify has about a 30% performance hit due to workarounds between asyncio + libuv, so we will get even faster! See more info in [this issue](https://github.com/cirospaciari/socketify.py/issues/18). Python3 and PyPy3 performance will improve when we migrate to [HPy](https://github.com/cirospaciari/socketify.py/issues/16). In the TechEmPower plaintext benchmark we are faster than japronto (about 1,300k req/s using PyPy3 without the workaround, and about 770k req/s in the current state, vs 582k from japronto); you can follow the details in [this discussion](https://github.com/cirospaciari/socketify.py/discussions/10).
 
 ## Install
 For macOS x64 & Silicon, Linux x64, Windows
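
The oha run above drives a plaintext endpoint at http://localhost:8000/. For context, a minimal sketch of the kind of server under test, based on socketify.py's documented `App` API (not part of this diff; the handler and messages below are illustrative):

```python
# Illustrative plaintext server for the oha benchmark (sketch, not the repo's bench code).
from socketify import App

app = App()

# Respond to GET / with a small plaintext body, mirroring the benchmarked endpoint.
app.get("/", lambda res, req: res.end("Hello, World!"))

# Bind to port 8000 so `oha -c 40 -z 5s http://localhost:8000/` can hit it.
app.listen(8000, lambda config: print(f"Listening on http://localhost:{config.port}"))
app.run()
```

Running the same script under CPython and PyPy3 is how the two interpreter columns above are compared.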