diff --git a/README.md b/README.md
index 896be5e..7a90a92 100644
--- a/README.md
+++ b/README.md
@@ -21,7 +21,7 @@
 ## 💡 Features

 - WebSocket with pub/sub support
-- Fast and realiable Http/Https
+- Fast and reliable Http/Https
 - Support for Windows, Linux and macOS Silicon & x64
 - Support for [`PyPy3`](https://www.pypy.org/) and [`CPython`](https://github.com/python/cpython)
 - Dynamic URL Routing with Wildcard & Parameter support
@@ -68,7 +68,7 @@ Socketify got almost 900k messages/s with PyPy3 and 860k with Python3 the same p
 Runtime versions: PyPy3 7.3.9, Python 3.10.7, node v16.17.0, bun v0.2.2
 Framework versions: gunicorn 20.1.0 + uvicorn 0.19.0, socketify alpha, gunicorn 20.1.0 + falcon 3.1.0, robyn 0.18.3
-Http tested with oha -c 40 -z 5s http://localhost:8000/ (1 run for warmup and 3 runs average for testing)
+Http tested with oha -c 40 -z 5s http://localhost:8000/ (1 warm-up run and the average of 3 runs for testing)
 WebSocket tested with [Bun.sh](https://bun.sh) bench chat-client
 Source code in [bench](https://github.com/cirospaciari/socketify.py/tree/main/bench)
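For context, a socketify plaintext endpoint like the one measured above fits in a few lines. This is a minimal sketch, assuming the `App` API used throughout this repo (`app.get`, `res.end`, `app.listen`, `app.run`); the port and message are illustrative:

```python
from socketify import App

app = App()

# plaintext route equivalent to the endpoint benchmarked with oha above
app.get("/", lambda res, req: res.end("Hello, World!"))

# bind the benchmark port, then start the event loop
app.listen(8000, lambda config: print("Listening on port %d" % config.port))
app.run()
```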
@@ -205,7 +205,7 @@ And yes, we can be faster than japronto when all our features and goals are achi
 We don't use uvloop, because uvloop don't support Windows and PyPy3 at this moment, this can change in the future, but right now we want to implement our own libuv + asyncio solution, and a lot more.

 ## :dizzy: CFFI vs Cython vs HPy
-Cython performs really well on Python3 but really bad on PyPy3, CFFI are choosen for better support PyPy3 until we got our hands on an stable [`HPy`](https://hpyproject.org/) integration.
+Cython performs really well on Python3 but really bad on PyPy3; CFFI was chosen for better PyPy3 support until we get our hands on a stable [`HPy`](https://hpyproject.org/) integration.

 ## :bookmark_tabs: Documentation
 See the full docs in [docs.socketify.dev](https://docs.socketify.dev) or in [/docs/README.md](docs/README.md)
diff --git a/SSGI.md b/SSGI.md
index a79b5b7..0f21b05 100644
--- a/SSGI.md
+++ b/SSGI.md
@@ -13,7 +13,7 @@ class SSGIHttpResponse:
         pass

     # send chunk of data, can be used to perform with less backpressure than using send
-    # total_size is the sum of all lenghts in bytes of all chunks to be sended
+    # total_size is the sum of all lengths in bytes of all chunks to be sent
     # connection will end when total_size is met
     # returns tuple(bool, bool) first bool represents if the chunk is succefully sended, the second if the connection has ended
     def send_chunk(self, chunk: Union[str, bytes, bytearray, memoryview], total_size: int = False) -> Awaitable:
         pass
@@ -32,7 +32,7 @@ class SSGIHttpResponse:
         pass

     # get an all data
-    # returns an BytesIO() or None if no payload is availabl
+    # returns a BytesIO() or None if no payload is available
     def get_data(self) -> Awaitable:
         pass

@@ -41,7 +41,7 @@ class SSGIHttpResponse:
     def get_chunk(self) -> Awaitable:
         pass

-    # on aborted event, calle when the connection abort
+    # on aborted event, called when the connection aborts
     def on_aborted(self, handler: Union[Awaitable, Callable], *arguments):
         pass

diff --git a/bench/falcon_plaintext.py b/bench/falcon_plaintext.py
index 062c43b..4cbc994 100644
--- a/bench/falcon_plaintext.py
+++ b/bench/falcon_plaintext.py
@@ -22,7 +22,7 @@ if __name__ == "__main__":
     # Serve until process is killed
     httpd.serve_forever()

-# pypy3 -m gunicorn falcon_plaintext:app -w 4 --worker-class=gevent #recomended for pypy3
+# pypy3 -m gunicorn falcon_plaintext:app -w 4 --worker-class=gevent #recommended for pypy3
 # python3 -m gunicorn falcon_plaintext:app -w 4 #without Cython
 # pypy3 -m gunicorn falcon_plaintext:app -w 4 #without gevent
 # python3 -m gunicorn falcon_plaintext:app -w 4 --worker-class="egg:meinheld#gunicorn_worker" #with Cython
diff --git a/docs/basics.md b/docs/basics.md
index 8699b9c..22d96a8 100644
--- a/docs/basics.md
+++ b/docs/basics.md
@@ -110,7 +110,7 @@ We also have an `req.get_cookie(cookie_name)` to get a cookie value as String an
 ```python
 def cookies(res, req):
-    # cookies are writen after end
+    # cookies are written after end
     res.set_cookie(
         "session_id",
         "1234567890",
@@ -192,6 +192,6 @@ If you need to access the raw pointer of `libuv` you can use `app.get_native_han
 ## Preserve data for use after await
 HttpRequest object being stack-allocated and only valid in one single callback invocation so only valid in the first "segment" before the first await.
-If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, but will be some performance penality.
+If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, but there will be some performance penalty.

 ### Next [Upload and Post](upload-post.md)
\ No newline at end of file
diff --git a/docs/corking.md b/docs/corking.md
index 5c77bc5..e6a87e1 100644
--- a/docs/corking.md
+++ b/docs/corking.md
@@ -10,7 +10,7 @@ If you have callbacks registered to some other library, say libhiredis, those ca
 Only one single socket can be corked at any point in time (isolated per thread, of course). It is efficient to cork-and-uncork.

 Whenever your callback is a coroutine, such as the async/await, automatic corking can only happen in the very first portion of the coroutine (consider await a separator which essentially cuts the coroutine into smaller segments). Only the first "segment" of the coroutine will be called from socketify, the following async segments will be called by the asyncio event loop at a later point in time and will thus not be under our control with default corking enabled, HttpRequest object being stack-allocated and only valid in one single callback invocation so only valid in the first "segment" before the first await.
-If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, but will be some performance penality.
+If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, but there will be some performance penalty.

 Corking is important even for calls which seem to be "atomic" and only send one chunk. res.end, res.try_end, res.write_status, res.write_header will most likely send multiple chunks of data and is very important to properly cork.
@@ -36,6 +36,6 @@ async def home(res, req):
 ```
 > You cannot use async inside cork but, you can cork only when you need to send the response after all the async happens

-For convinience we have `res.cork_end()`, `ws.cork_send()` that will cork and call end for you, and also `res.render()` that will always response using `res.cork_end()` to send your HTML / Data
+For convenience we have `res.cork_end()`, `ws.cork_send()` that will cork and call end for you, and also `res.render()` that will always respond using `res.cork_end()` to send your HTML / Data

 ### Next [Routes](routes.md)
\ No newline at end of file
diff --git a/docs/graphiql.md b/docs/graphiql.md
index 16cd1ca..3bf485f 100644
--- a/docs/graphiql.md
+++ b/docs/graphiql.md
@@ -54,7 +54,7 @@ def graphiql_from(Query, Mutation=None):
         # we can pass whatever we want to context, query, headers or params, cookies etc
         context_value = req.preserve()

-        # get all incomming data and parses as json
+        # get all incoming data and parse it as JSON
         body = await res.get_json()

         query = body["query"]
diff --git a/docs/routes.md b/docs/routes.md
index a19ddeb..0f0521d 100644
--- a/docs/routes.md
+++ b/docs/routes.md
@@ -22,7 +22,7 @@ async def home(res, req):

 app.post("/", home)
 ```
-> Whenever your callback is a coroutine, such as the async/await, automatic corking can only happen in the very first portion of the coroutine (consider await a separator which essentially cuts the coroutine into smaller segments). Only the first "segment" of the coroutine will be called from socketify, the following async segments will be called by the asyncio event loop at a later point in time and will thus not be under our control with default corking enabled, HttpRequest object being stack-allocated and only valid in one single callback invocation so only valid in the first "segment" before the first await. If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, but will be some performance penality. Take a look in [Corking](corking.md) for get a more in deph information
+> Whenever your callback is a coroutine, such as the async/await, automatic corking can only happen in the very first portion of the coroutine (consider await a separator which essentially cuts the coroutine into smaller segments). Only the first "segment" of the coroutine will be called from socketify, the following async segments will be called by the asyncio event loop at a later point in time and will thus not be under our control with default corking enabled, HttpRequest object being stack-allocated and only valid in one single callback invocation so only valid in the first "segment" before the first await. If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, but there will be some performance penalty. Take a look at [Corking](corking.md) to get more in-depth information

 ## Pattern matching
 Routes are matched in order of specificity, not by the order you register them:
@@ -58,7 +58,7 @@ app.any("/*", not_found)
 ```

 ## Error handler
-In case of some uncaught exceptions we will always try our best to call the error handler, you can set the handler using `app.set_error_handler(hanlder)`
+In case of some uncaught exceptions we will always try our best to call the error handler; you can set the handler using `app.set_error_handler(handler)`

 ```python
diff --git a/docs/ssl.md b/docs/ssl.md
index f8d397c..d1a859e 100644
--- a/docs/ssl.md
+++ b/docs/ssl.md
@@ -1,3 +1,3 @@
-Support is already there, docs Comming soon...
+Support is already there, docs coming soon...
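Until those docs land, here is a minimal sketch of enabling SSL, assuming the `AppOptions` fields (`key_file_name`, `cert_file_name`, `passphrase`) accepted by the `App` constructor; the certificate paths and passphrase are illustrative:

```python
from socketify import App, AppOptions

# pass the certificate and key files to the App constructor (paths are illustrative)
app = App(
    AppOptions(
        key_file_name="./misc/key.pem",
        cert_file_name="./misc/cert.pem",
        passphrase="1234",
    )
)

app.get("/", lambda res, req: res.end("Hello over HTTPS!"))
app.listen(443, lambda config: print("Listening on port %d" % config.port))
app.run()
```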
 ### Next [API Reference](api.md)
\ No newline at end of file
diff --git a/examples/async.py b/examples/async.py
index e366415..fc4f11d 100644
--- a/examples/async.py
+++ b/examples/async.py
@@ -15,7 +15,7 @@ def home(res, req):
     delay = req.get_query("delay")
     delay = 0 if delay == None else float(delay)
     # tell response to run this in the event loop
-    # abort handler is grabed here, so responses only will be send if res.aborted == False
+    # abort handler is grabbed here, so responses will only be sent if res.aborted == False
     res.run_async(delayed_hello(delay, res))

diff --git a/examples/file_stream.py b/examples/file_stream.py
index 36d6f06..da3a5a1 100644
--- a/examples/file_stream.py
+++ b/examples/file_stream.py
@@ -43,7 +43,7 @@ async def home(res, req):
     # check if modified since is provided
     if if_modified_since == last_modified:
         return res.write_status(304).end_without_body()
-    # tells the broswer the last modified date
+    # tells the browser the last modified date
     res.write_header(b"Last-Modified", last_modified)

     # add content type
diff --git a/examples/graphiql_raw.py b/examples/graphiql_raw.py
index 768bd8e..3347dd4 100644
--- a/examples/graphiql_raw.py
+++ b/examples/graphiql_raw.py
@@ -25,7 +25,7 @@ async def graphiql_post(res, req):
    # we can pass whatever we want to context, query, headers or params, cookies etc
    context_value = req.preserve()

-    # get all incomming data and parses as json
+    # get all incoming data and parse it as JSON
    body = await res.get_json()

    query = body["query"]
diff --git a/examples/helpers/graphiql.py b/examples/helpers/graphiql.py
index 1104050..453a507 100644
--- a/examples/helpers/graphiql.py
+++ b/examples/helpers/graphiql.py
@@ -12,7 +12,7 @@ def graphiql_from(Query, Mutation=None):
         # we can pass whatever we want to context, query, headers or params, cookies etc
         context_value = req.preserve()

-        # get all incomming data and parses as json
+        # get all incoming data and parse it as JSON
         body = await res.get_json()

         query = body["query"]
diff --git a/examples/helpers/static_aiofile.py b/examples/helpers/static_aiofile.py
index 4038696..85da822 100644
--- a/examples/helpers/static_aiofile.py
+++ b/examples/helpers/static_aiofile.py
@@ -5,7 +5,7 @@ import mimetypes
 from os import path

 mimetypes.init()
-# In production we highly recomend to use CDN like CloudFlare or/and NGINX or similar for static files
+# In production we highly recommend using a CDN like CloudFlare and/or NGINX or similar for static files
 async def sendfile(res, req, filename):
     # read headers before the first await
     if_modified_since = req.get_header("if-modified-since")
@@ -36,7 +36,7 @@ async def sendfile(res, req, filename):
     # check if modified since is provided
     if if_modified_since == last_modified:
         return res.write_status(304).end_without_body()
-    # tells the broswer the last modified date
+    # tells the browser the last modified date
     res.write_header(b"Last-Modified", last_modified)

     # add content type
diff --git a/examples/helpers/static_aiofiles.py b/examples/helpers/static_aiofiles.py
index ea75d15..975bf7a 100644
--- a/examples/helpers/static_aiofiles.py
+++ b/examples/helpers/static_aiofiles.py
@@ -5,7 +5,7 @@ import mimetypes
 from os import path

 mimetypes.init()
-# In production we highly recomend to use CDN like CloudFlare or/and NGINX or similar for static files
+# In production we highly recommend using a CDN like CloudFlare and/or NGINX or similar for static files
 async def sendfile(res, req, filename):
     # read headers before the first await
     if_modified_since = req.get_header("if-modified-since")
@@ -36,7 +36,7 @@ async def sendfile(res, req, filename):
     # check if modified since is provided
     if if_modified_since == last_modified:
         return res.write_status(304).end_without_body()
-    # tells the broswer the last modified date
+    # tells the browser the last modified date
     res.write_header(b"Last-Modified", last_modified)

     # add content type
diff --git a/examples/helpers/twolevel_cache.py b/examples/helpers/twolevel_cache.py
index da89007..21819e1 100644
--- a/examples/helpers/twolevel_cache.py
+++ b/examples/helpers/twolevel_cache.py
@@ -1,13 +1,13 @@
 import asyncio
 from .memory_cache import MemoryCache

-# 2 LEVEL CACHE (Redis to share amoung worker, Memory to be much faster)
+# 2 LEVEL CACHE (Redis to share among workers, Memory to be much faster)
 class TwoLevelCache:
     def __init__(
-        self, redis_conection, memory_expiration_time=3, redis_expiration_time=10
+        self, redis_connection, memory_expiration_time=3, redis_expiration_time=10
     ):
         self.memory_cache = MemoryCache()
-        self.redis_conection = redis_conection
+        self.redis_connection = redis_connection
         self.memory_expiration_time = memory_expiration_time
         self.redis_expiration_time = redis_expiration_time

@@ -17,7 +17,7 @@ class TwoLevelCache:
             # never cache invalid data
             if data == None:
                 return False
-            self.redis_conection.setex(key, self.redis_expiration_time, data)
+            self.redis_connection.setex(key, self.redis_expiration_time, data)
             self.memory_cache.setex(key, self.memory_expiration_time, data)
             return True
         except Exception as err:
@@ -30,7 +30,7 @@ class TwoLevelCache:
         if value != None:
             return value
         # no memory cache so, got to redis
-        value = self.redis_conection.get(key)
+        value = self.redis_connection.get(key)
         if value != None:
             # refresh memory cache to speed up
             self.memory_cache.setex(key, self.memory_expiration_time, data)
@@ -42,7 +42,7 @@ class TwoLevelCache:
     async def run_once(self, key, timeout, executor, *args):
         result = None
         try:
-            lock = self.redis_conection.lock(f"lock-{key}", blocking_timeout=timeout)
+            lock = self.redis_connection.lock(f"lock-{key}", blocking_timeout=timeout)
             # wait lock (some request is yeat not finish)
             while lock.locked():
                 await asyncio.sleep(0)
diff --git a/examples/http_request_cache.py b/examples/http_request_cache.py
index 050a1dc..62b43cc 100644
--- a/examples/http_request_cache.py
+++ b/examples/http_request_cache.py
@@ -6,10 +6,10 @@ from helpers.twolevel_cache import TwoLevelCache

 # create redis poll + connections
 redis_pool = redis.ConnectionPool(host="localhost", port=6379, db=0)
-redis_conection = redis.Redis(connection_pool=redis_pool)
-# 2 LEVEL CACHE (Redis to share amoung workers, Memory to be much faster)
+redis_connection = redis.Redis(connection_pool=redis_pool)
+# 2 LEVEL CACHE (Redis to share among workers, Memory to be much faster)
 # cache in memory is 30s, cache in redis is 60s duration
-cache = TwoLevelCache(redis_conection, 30, 60)
+cache = TwoLevelCache(redis_connection, 30, 60)

 ###
 # Model
diff --git a/examples/requeriments.txt b/examples/requirements.txt
similarity index 100%
rename from examples/requeriments.txt
rename to examples/requirements.txt
diff --git a/examples/router_and_basics.py b/examples/router_and_basics.py
index 90ebc19..19b462e 100644
--- a/examples/router_and_basics.py
+++ b/examples/router_and_basics.py
@@ -14,7 +14,7 @@ def anything(res, req):


 def cookies(res, req):
-    # cookies are writen after end
+    # cookies are written after end
     res.set_cookie(
         "spaciari",
         "1234567890",
@@ -72,7 +72,7 @@ def delayed(res, req):
     # queries = req.get_queries()
     # tell response to run this in the event loop
-    # abort handler is grabed here, so responses only will be send if res.aborted == False
+    # abort handler is grabbed here, so responses will only be sent if res.aborted == False
     res.run_async(delayed_hello(delay, res))

diff --git a/examples/static_files.py b/examples/static_files.py
index 11ad95a..65ba66a 100644
--- a/examples/static_files.py
+++ b/examples/static_files.py
@@ -1,6 +1,6 @@
 # We have an version of this using aiofile and aiofiles
 # This is an sync version without any dependencies is normally much faster in CPython and PyPy3
-# In production we highly recomend to use CDN like CloudFlare or/and NGINX or similar for static files (in any language/framework)
+# In production we highly recommend using a CDN like CloudFlare and/or NGINX or similar for static files (in any language/framework)

 # Some performance data from my personal machine (Debian 12/testing, i7-7700HQ, 32GB RAM, Samsung 970 PRO NVME)
 # using oha -c 400 -z 5s http://localhost:3000/
@@ -20,7 +20,7 @@
 # pypy3 - scarlette static uvicorn - 279.45 req/s

 # Conclusions:
-# With PyPy3 only static is really usable gunicorn/uvicorn, aiofiles and aiofile are realy slow on PyPy3 maybe this changes with HPy
+# With PyPy3 only static is really usable; gunicorn/uvicorn, aiofiles and aiofile are really slow on PyPy3, maybe this changes with HPy
 # Python3 with any option will be faster than gunicorn/uvicorn but with PyPy3 with static we got 2x (or almost this in case of fastify) performance of node.js
 # But even PyPy3 + socketify static is 7x+ slower than NGINX
diff --git a/examples/upload_or_post.py b/examples/upload_or_post.py
index b168b84..8b391b2 100644
--- a/examples/upload_or_post.py
+++ b/examples/upload_or_post.py
@@ -1,7 +1,7 @@
 from socketify import App

 ###
-# We always recomend check res.aborted in async operations
+# We always recommend checking res.aborted in async operations
 ###

diff --git a/pyproject.toml b/pyproject.toml
index f3ef049..d98ae29 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -8,7 +8,7 @@ version = "0.0.1"
 authors = [
     { name="Ciro Spaciari", email="ciro.spaciari@gmail.com" },
 ]
-description = "Bringing WebSockets, Http/Https High Peformance servers for PyPy3 and Python3"
+description = "Bringing WebSockets, Http/Https High Performance servers for PyPy3 and Python3"
 readme = "README.md"
 requires-python = ">=3.7"
 classifiers = [
diff --git a/setup.py b/setup.py
index 27cf1f3..c64eb7a 100644
--- a/setup.py
+++ b/setup.py
@@ -62,7 +62,7 @@ setuptools.setup(
     platforms=["any"],
     author="Ciro Spaciari",
     author_email="ciro.spaciari@gmail.com",
-    description="Bringing WebSockets, Http/Https High Peformance servers for PyPy3 and Python3",
+    description="Bringing WebSockets, Http/Https High Performance servers for PyPy3 and Python3",
     long_description=long_description,
     long_description_content_type="text/markdown",
     url="https://github.com/cirospaciari/socketify.py",
diff --git a/src/socketify/helpers.py b/src/socketify/helpers.py
index a68ae4a..9d82a42 100644
--- a/src/socketify/helpers.py
+++ b/src/socketify/helpers.py
@@ -8,7 +8,7 @@ mimetypes.init()

 # We have an version of this using aiofile and aiofiles
 # This is an sync version without any dependencies is normally much faster in CPython and PyPy3
-# In production we highly recomend to use CDN like CloudFlare or/and NGINX or similar for static files
+# In production we highly recommend using a CDN like CloudFlare and/or NGINX or similar for static files
 async def sendfile(res, req, filename):
     # read headers before the first await
     if_modified_since = req.get_header("if-modified-since")
@@ -39,7 +39,7 @@ async def sendfile(res, req, filename):
     # check if modified since is provided
     if if_modified_since == last_modified:
         return res.write_status(304).end_without_body()
-    # tells the broswer the last modified date
+    # tells the browser the last modified date
     res.write_header(b"Last-Modified", last_modified)

     # add content type
diff --git a/src/socketify/socketify.py b/src/socketify/socketify.py
index d2ffece..85fcb70 100644
--- a/src/socketify/socketify.py
+++ b/src/socketify/socketify.py
@@ -1014,7 +1014,7 @@ class RequestResponseFactory:
         res._aborted_handler = None
         res._writable_handler = None
         res._data_handler = None
-        res._grabed_abort_handler_once = False
+        res._grabbed_abort_handler_once = False
         res._write_jar = None
         res._cork_handler = None
         res._lastChunkOffset = 0
@@ -1266,7 +1266,7 @@ class AppResponse:
         self._writable_handler = None
         self._data_handler = None
         self._ptr = ffi.new_handle(self)
-        self._grabed_abort_handler_once = False
+        self._grabbed_abort_handler_once = False
         self._write_jar = None
         self._cork_handler = None
         self._lastChunkOffset = 0
@@ -1443,8 +1443,8 @@ class AppResponse:

     def grab_aborted_handler(self):
         # only needed if is async
-        if not self.aborted and not self._grabed_abort_handler_once:
-            self._grabed_abort_handler_once = True
+        if not self.aborted and not self._grabbed_abort_handler_once:
+            self._grabbed_abort_handler_once = True
             lib.uws_res_on_aborted(
                 self.SSL, self.res, uws_generic_aborted_handler, self._ptr
             )
@@ -1744,7 +1744,7 @@ class AppResponse:


 class App:
-    def __init__(self, options=None, request_response_factory_max_itens=0, websocket_factory_max_itens=0):
+    def __init__(self, options=None, request_response_factory_max_items=0, websocket_factory_max_items=0):
         socket_options_ptr = ffi.new("struct us_socket_context_options_t *")
         socket_options = socket_options_ptr[0]
         self.options = options
@@ -1811,13 +1811,13 @@ class App:
         self.error_handler = None
         self._missing_server_handler = None

-        if request_response_factory_max_itens and request_response_factory_max_itens >= 1:
-            self._factory = RequestResponseFactory(self, request_response_factory_max_itens)
+        if request_response_factory_max_items and request_response_factory_max_items >= 1:
+            self._factory = RequestResponseFactory(self, request_response_factory_max_items)
         else:
             self._factory = None

-        if websocket_factory_max_itens and websocket_factory_max_itens >= 1:
-            self._ws_factory = WebSocketFactory(self, websocket_factory_max_itens)
+        if websocket_factory_max_items and websocket_factory_max_items >= 1:
+            self._ws_factory = WebSocketFactory(self, websocket_factory_max_items)
         else:
             self._ws_factory = None

diff --git a/src/socketify/ssgi.py b/src/socketify/ssgi.py
index 128a55c..693d0b8 100644
--- a/src/socketify/ssgi.py
+++ b/src/socketify/ssgi.py
@@ -23,7 +23,7 @@ class SSGIHttpResponse:
             self.res.end(payload)

     # send chunk of data, can be used to perform with less backpressure than using send
-    # total_size is the sum of all lenghts in bytes of all chunks to be sended
+    # total_size is the sum of all lengths in bytes of all chunks to be sent
     # connection will end when total_size is met
     # returns tuple(bool, bool) first bool represents if the chunk is succefully sended, the second if the connection has ended
     def send_chunk(self, chunk: Union[bytes, bytearray, memoryview], total_size: int) -> Awaitable:
@@ -86,7 +86,7 @@ class SSGIHttpResponse:
         self._received_queue.put(future, False)
         return future

-    # on aborted event, calle when the connection abort
+    # on aborted event, called when the connection aborts
     def on_aborted(self, handler: Union[Awaitable, Callable], *arguments):
         def on_aborted(res):
             res.aborted = True
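Several hunks above touch abort handling (`grab_aborted_handler`, `res.aborted`, the "check res.aborted in async operations" comment). A short sketch of the pattern they describe: register abort tracking before the first await, then check `res.aborted` before responding; `fetch_data` is a hypothetical coroutine standing in for real I/O:

```python
import asyncio

from socketify import App

app = App()

async def fetch_data():
    # hypothetical async work standing in for a real I/O call
    await asyncio.sleep(1)
    return "done"

async def home(res, req):
    res.grab_aborted_handler()  # register abort tracking before the first await
    data = await fetch_data()
    if not res.aborted:  # the client may have disconnected while we awaited
        res.cork_end(data)

app.get("/", home)
app.listen(8000, lambda config: print("Listening on port %d" % config.port))
app.run()
```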