Found via `codespell -S .mypy_cache -L falsy`
pull/39/head
Kian-Meng Ang 2022-11-28 13:44:38 +08:00
parent ee5ced290e
commit fb5f9e683f
25 changed files with 52 additions and 52 deletions

View file

@@ -21,7 +21,7 @@
## 💡 Features
- WebSocket with pub/sub support
-- Fast and realiable Http/Https
+- Fast and reliable Http/Https
- Support for Windows, Linux and macOS Silicon & x64
- Support for [`PyPy3`](https://www.pypy.org/) and [`CPython`](https://github.com/python/cpython)
- Dynamic URL Routing with Wildcard & Parameter support
@@ -68,7 +68,7 @@ Socketify got almost 900k messages/s with PyPy3 and 860k with Python3 the same p
Runtime versions: PyPy3 7.3.9, Python 3.10.7, node v16.17.0, bun v0.2.2<br/>
Framework versions: gunicorn 20.1.0 + uvicorn 0.19.0, socketify alpha, gunicorn 20.1.0 + falcon 3.1.0, robyn 0.18.3<br/>
-Http tested with oha -c 40 -z 5s http://localhost:8000/ (1 run for warmup and 3 runs average for testing)<br/>
+Http tested with oha -c 40 -z 5s http://localhost:8000/ (1 run for warm-up and 3 runs average for testing)<br/>
WebSocket tested with [Bun.sh](https://bun.sh) bench chat-client <br/>
Source code in [bench](https://github.com/cirospaciari/socketify.py/tree/main/bench)<br/>
@@ -205,7 +205,7 @@ And yes, we can be faster than japronto when all our features and goals are achi
We don't use uvloop, because uvloop don't support Windows and PyPy3 at this moment, this can change in the future, but right now we want to implement our own libuv + asyncio solution, and a lot more.
## :dizzy: CFFI vs Cython vs HPy
-Cython performs really well on Python3 but really bad on PyPy3, CFFI are choosen for better support PyPy3 until we got our hands on an stable [`HPy`](https://hpyproject.org/) integration.
+Cython performs really well on Python3 but really bad on PyPy3, CFFI are chosen for better support PyPy3 until we got our hands on an stable [`HPy`](https://hpyproject.org/) integration.
## :bookmark_tabs: Documentation
See the full docs in [docs.socketify.dev](https://docs.socketify.dev) or in [/docs/README.md](docs/README.md)

View file

@@ -13,7 +13,7 @@ class SSGIHttpResponse:
pass
# send chunk of data, can be used to perform with less backpressure than using send
-# total_size is the sum of all lenghts in bytes of all chunks to be sended
+# total_size is the sum of all lengths in bytes of all chunks to be sended
# connection will end when total_size is met
# returns tuple(bool, bool) first bool represents if the chunk is succefully sended, the second if the connection has ended
def send_chunk(self, chunk: Union[str, bytes, bytearray, memoryview], total_size: int = False) -> Awaitable:
@@ -32,7 +32,7 @@ class SSGIHttpResponse:
pass
# get an all data
-# returns an BytesIO() or None if no payload is availabl
+# returns an BytesIO() or None if no payload is available
def get_data(self) -> Awaitable:
pass
@@ -41,7 +41,7 @@ class SSGIHttpResponse:
def get_chunk(self) -> Awaitable:
pass
-# on aborted event, calle when the connection abort
+# on aborted event, called when the connection abort
def on_aborted(self, handler: Union[Awaitable, Callable], *arguments):
pass
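# A hedged usage sketch of this interface: the handler wiring and payload
# below are hypothetical, and the send_chunk semantics follow the comments
# above (returned tuple, connection ending when total_size is met).
async def stream(res):
    res.on_aborted(lambda r: print("connection aborted"))
    chunks = [b"hello ", b"world"]
    total_size = sum(len(c) for c in chunks)  # sum of all chunk lengths in bytes
    for chunk in chunks:
        sent, ended = await res.send_chunk(chunk, total_size)
        if not sent or ended:  # the connection ends once total_size is met
            break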

View file

@@ -22,7 +22,7 @@ if __name__ == "__main__":
# Serve until process is killed
httpd.serve_forever()
-# pypy3 -m gunicorn falcon_plaintext:app -w 4 --worker-class=gevent #recomended for pypy3
+# pypy3 -m gunicorn falcon_plaintext:app -w 4 --worker-class=gevent #recommended for pypy3
# python3 -m gunicorn falcon_plaintext:app -w 4 #without Cython
# pypy3 -m gunicorn falcon_plaintext:app -w 4 #without gevent
# python3 -m gunicorn falcon_plaintext:app -w 4 --worker-class="egg:meinheld#gunicorn_worker" #with Cython

View file

@@ -110,7 +110,7 @@ We also have an `req.get_cookie(cookie_name)` to get a cookie value as String an
```python
def cookies(res, req):
-# cookies are writen after end
+# cookies are written after end
res.set_cookie(
"session_id",
"1234567890",
@@ -192,6 +192,6 @@ If you need to access the raw pointer of `libuv` you can use `app.get_native_han
## Preserve data for use after await
HttpRequest object being stack-allocated and only valid in one single callback invocation so only valid in the first "segment" before the first await.
-If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, but will be some performance penality.
+If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, but will be some performance penalty.
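A minimal sketch of this pattern (the route and the accessors used are illustrative):
```python
async def home(res, req):
    req.preserve()  # copy url, method, headers, cookies and query string now
    body = await res.get_json()  # first await: the raw request is gone past here
    # the preserved copies are still readable after the await
    res.cork_end(f"{req.get_method()} {req.get_url()} got {body}")
```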
### Next [Upload and Post](upload-post.md)

View file

@@ -10,7 +10,7 @@ If you have callbacks registered to some other library, say libhiredis, those ca
Only one single socket can be corked at any point in time (isolated per thread, of course). It is efficient to cork-and-uncork.
Whenever your callback is a coroutine, such as the async/await, automatic corking can only happen in the very first portion of the coroutine (consider await a separator which essentially cuts the coroutine into smaller segments). Only the first "segment" of the coroutine will be called from socketify, the following async segments will be called by the asyncio event loop at a later point in time and will thus not be under our control with default corking enabled, HttpRequest object being stack-allocated and only valid in one single callback invocation so only valid in the first "segment" before the first await.
-If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, but will be some performance penality.
+If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, but will be some performance penalty.
Corking is important even for calls which seem to be "atomic" and only send one chunk. res.end, res.try_end, res.write_status, res.write_header will most likely send multiple chunks of data and is very important to properly cork.
@@ -36,6 +36,6 @@ async def home(res, req):
```
> You cannot use async inside cork but, you can cork only when you need to send the response after all the async happens
-For convinience we have `res.cork_end()`, `ws.cork_send()` that will cork and call end for you, and also `res.render()` that will always response using `res.cork_end()` to send your HTML / Data
+For convenience we have `res.cork_end()`, `ws.cork_send()` that will cork and call end for you, and also `res.render()` that will always response using `res.cork_end()` to send your HTML / Data
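A minimal sketch of `res.cork_end()` in an async route (the awaited sleep stands in for real async work):
```python
import asyncio
from socketify import App

app = App()

async def home(res, req):
    await asyncio.sleep(0.1)  # stand-in for real async work
    # past the first await we are outside automatic corking,
    # so cork and end in a single call
    res.cork_end("Hello, World!")

app.get("/", home)
app.listen(3000, lambda config: print("Listening on port", config.port))
app.run()
```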
### Next [Routes](routes.md)

View file

@@ -54,7 +54,7 @@ def graphiql_from(Query, Mutation=None):
# we can pass whatever we want to context, query, headers or params, cookies etc
context_value = req.preserve()
-# get all incomming data and parses as json
+# get all incoming data and parses as json
body = await res.get_json()
query = body["query"]

View file

@@ -22,7 +22,7 @@ async def home(res, req):
app.post("/", home)
```
-> Whenever your callback is a coroutine, such as the async/await, automatic corking can only happen in the very first portion of the coroutine (consider await a separator which essentially cuts the coroutine into smaller segments). Only the first "segment" of the coroutine will be called from socketify, the following async segments will be called by the asyncio event loop at a later point in time and will thus not be under our control with default corking enabled, HttpRequest object being stack-allocated and only valid in one single callback invocation so only valid in the first "segment" before the first await. If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, but will be some performance penality. Take a look in [Corking](corking.md) for get a more in deph information
+> Whenever your callback is a coroutine, such as the async/await, automatic corking can only happen in the very first portion of the coroutine (consider await a separator which essentially cuts the coroutine into smaller segments). Only the first "segment" of the coroutine will be called from socketify, the following async segments will be called by the asyncio event loop at a later point in time and will thus not be under our control with default corking enabled, HttpRequest object being stack-allocated and only valid in one single callback invocation so only valid in the first "segment" before the first await. If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, but will be some performance penalty. Take a look in [Corking](corking.md) for get a more in deph information
## Pattern matching
Routes are matched in order of specificity, not by the order you register them:
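For instance, a minimal sketch (the handlers are hypothetical):
```python
def user_profile(res, req):
    res.end("me")

def any_user(res, req):
    res.end(req.get_parameter(0))

def not_found(res, req):
    res.write_status(404).end("Not Found")

# "/users/me" beats "/users/:id", and the wildcard matches last,
# regardless of registration order
app.any("/*", not_found)
app.get("/users/:id", any_user)
app.get("/users/me", user_profile)
```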
@@ -58,7 +58,7 @@ app.any("/*", not_found)
```
## Error handler
-In case of some uncaught exceptions we will always try our best to call the error handler, you can set the handler using `app.set_error_handler(hanlder)`
+In case of some uncaught exceptions we will always try our best to call the error handler, you can set the handler using `app.set_error_handler(handler)`
```python
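# A hedged sketch: the handler signature is assumed from the socketify docs,
# and res/req may be None when the error happens outside of a request.
async def on_error(error, res, req):
    if res is not None:
        res.write_status(500).end("Sorry, something went wrong")

app.set_error_handler(on_error)
```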

View file

@@ -1,3 +1,3 @@
-Support is already there, docs Comming soon...
+Support is already there, docs Coming soon...
### Next [API Reference](api.md)

View file

@@ -15,7 +15,7 @@ def home(res, req):
delay = req.get_query("delay")
delay = 0 if delay == None else float(delay)
# tell response to run this in the event loop
-# abort handler is grabed here, so responses only will be send if res.aborted == False
+# abort handler is grabbed here, so responses only will be send if res.aborted == False
res.run_async(delayed_hello(delay, res))

View file

@@ -43,7 +43,7 @@ async def home(res, req):
# check if modified since is provided
if if_modified_since == last_modified:
return res.write_status(304).end_without_body()
-# tells the broswer the last modified date
+# tells the browser the last modified date
res.write_header(b"Last-Modified", last_modified)
# add content type

View file

@@ -25,7 +25,7 @@ async def graphiql_post(res, req):
# we can pass whatever we want to context, query, headers or params, cookies etc
context_value = req.preserve()
-# get all incomming data and parses as json
+# get all incoming data and parses as json
body = await res.get_json()
query = body["query"]

View file

@@ -12,7 +12,7 @@ def graphiql_from(Query, Mutation=None):
# we can pass whatever we want to context, query, headers or params, cookies etc
context_value = req.preserve()
-# get all incomming data and parses as json
+# get all incoming data and parses as json
body = await res.get_json()
query = body["query"]

View file

@@ -5,7 +5,7 @@ import mimetypes
from os import path
mimetypes.init()
-# In production we highly recomend to use CDN like CloudFlare or/and NGINX or similar for static files
+# In production we highly recommend to use CDN like CloudFlare or/and NGINX or similar for static files
async def sendfile(res, req, filename):
# read headers before the first await
if_modified_since = req.get_header("if-modified-since")
@@ -36,7 +36,7 @@ async def sendfile(res, req, filename):
# check if modified since is provided
if if_modified_since == last_modified:
return res.write_status(304).end_without_body()
-# tells the broswer the last modified date
+# tells the browser the last modified date
res.write_header(b"Last-Modified", last_modified)
# add content type

View file

@@ -5,7 +5,7 @@ import mimetypes
from os import path
mimetypes.init()
-# In production we highly recomend to use CDN like CloudFlare or/and NGINX or similar for static files
+# In production we highly recommend to use CDN like CloudFlare or/and NGINX or similar for static files
async def sendfile(res, req, filename):
# read headers before the first await
if_modified_since = req.get_header("if-modified-since")
@@ -36,7 +36,7 @@ async def sendfile(res, req, filename):
# check if modified since is provided
if if_modified_since == last_modified:
return res.write_status(304).end_without_body()
-# tells the broswer the last modified date
+# tells the browser the last modified date
res.write_header(b"Last-Modified", last_modified)
# add content type

View file

@@ -1,13 +1,13 @@
import asyncio
from .memory_cache import MemoryCache
-# 2 LEVEL CACHE (Redis to share amoung worker, Memory to be much faster)
+# 2 LEVEL CACHE (Redis to share among worker, Memory to be much faster)
class TwoLevelCache:
def __init__(
-self, redis_conection, memory_expiration_time=3, redis_expiration_time=10
+self, redis_connection, memory_expiration_time=3, redis_expiration_time=10
):
self.memory_cache = MemoryCache()
-self.redis_conection = redis_conection
+self.redis_connection = redis_connection
self.memory_expiration_time = memory_expiration_time
self.redis_expiration_time = redis_expiration_time
@@ -17,7 +17,7 @@ class TwoLevelCache:
# never cache invalid data
if data == None:
return False
-self.redis_conection.setex(key, self.redis_expiration_time, data)
+self.redis_connection.setex(key, self.redis_expiration_time, data)
self.memory_cache.setex(key, self.memory_expiration_time, data)
return True
except Exception as err:
@@ -30,7 +30,7 @@ class TwoLevelCache:
if value != None:
return value
# no memory cache so, got to redis
-value = self.redis_conection.get(key)
+value = self.redis_connection.get(key)
if value != None:
# refresh memory cache to speed up
self.memory_cache.setex(key, self.memory_expiration_time, data)
@@ -42,7 +42,7 @@ class TwoLevelCache:
async def run_once(self, key, timeout, executor, *args):
result = None
try:
-lock = self.redis_conection.lock(f"lock-{key}", blocking_timeout=timeout)
+lock = self.redis_connection.lock(f"lock-{key}", blocking_timeout=timeout)
# wait lock (some request is yeat not finish)
while lock.locked():
await asyncio.sleep(0)
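# A hedged usage sketch of run_once: fetch_user is hypothetical, and a
# get(key) accessor on the cache is assumed alongside run_once.
def fetch_user(user_id):
    # hypothetical expensive lookup; the result must be cacheable
    return b'{"id": "1", "name": "test"}'

async def get_user(cache, user_id):
    key = f"user-{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached
    # run_once takes a redis lock so only one request recomputes the value
    return await cache.run_once(key, 5, fetch_user, user_id)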

View file

@@ -6,10 +6,10 @@ from helpers.twolevel_cache import TwoLevelCache
# create redis poll + connections
redis_pool = redis.ConnectionPool(host="localhost", port=6379, db=0)
-redis_conection = redis.Redis(connection_pool=redis_pool)
-# 2 LEVEL CACHE (Redis to share amoung workers, Memory to be much faster)
+redis_connection = redis.Redis(connection_pool=redis_pool)
+# 2 LEVEL CACHE (Redis to share among workers, Memory to be much faster)
# cache in memory is 30s, cache in redis is 60s duration
-cache = TwoLevelCache(redis_conection, 30, 60)
+cache = TwoLevelCache(redis_connection, 30, 60)
###
# Model

View file

@@ -14,7 +14,7 @@ def anything(res, req):
def cookies(res, req):
-# cookies are writen after end
+# cookies are written after end
res.set_cookie(
"spaciari",
"1234567890",
@@ -72,7 +72,7 @@ def delayed(res, req):
# queries = req.get_queries()
# tell response to run this in the event loop
-# abort handler is grabed here, so responses only will be send if res.aborted == False
+# abort handler is grabbed here, so responses only will be send if res.aborted == False
res.run_async(delayed_hello(delay, res))

View file

@@ -1,6 +1,6 @@
# We have an version of this using aiofile and aiofiles
# This is an sync version without any dependencies is normally much faster in CPython and PyPy3
-# In production we highly recomend to use CDN like CloudFlare or/and NGINX or similar for static files (in any language/framework)
+# In production we highly recommend to use CDN like CloudFlare or/and NGINX or similar for static files (in any language/framework)
# Some performance data from my personal machine (Debian 12/testing, i7-7700HQ, 32GB RAM, Samsung 970 PRO NVME)
# using oha -c 400 -z 5s http://localhost:3000/
@@ -20,7 +20,7 @@
# pypy3 - scarlette static uvicorn - 279.45 req/s
# Conclusions:
-# With PyPy3 only static is really usable gunicorn/uvicorn, aiofiles and aiofile are realy slow on PyPy3 maybe this changes with HPy
+# With PyPy3 only static is really usable gunicorn/uvicorn, aiofiles and aiofile are really slow on PyPy3 maybe this changes with HPy
# Python3 with any option will be faster than gunicorn/uvicorn but with PyPy3 with static we got 2x (or almost this in case of fastify) performance of node.js
# But even PyPy3 + socketify static is 7x+ slower than NGINX

View file

@@ -1,7 +1,7 @@
from socketify import App
###
-# We always recomend check res.aborted in async operations
+# We always recommend check res.aborted in async operations
###

View file

@@ -8,7 +8,7 @@ version = "0.0.1"
authors = [
{ name="Ciro Spaciari", email="ciro.spaciari@gmail.com" },
]
description = "Bringing WebSockets, Http/Https High Peformance servers for PyPy3 and Python3"
description = "Bringing WebSockets, Http/Https High Performance servers for PyPy3 and Python3"
readme = "README.md"
requires-python = ">=3.7"
classifiers = [

View file

@@ -62,7 +62,7 @@ setuptools.setup(
platforms=["any"],
author="Ciro Spaciari",
author_email="ciro.spaciari@gmail.com",
description="Bringing WebSockets, Http/Https High Peformance servers for PyPy3 and Python3",
description="Bringing WebSockets, Http/Https High Performance servers for PyPy3 and Python3",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/cirospaciari/socketify.py",

View file

@@ -8,7 +8,7 @@ mimetypes.init()
# We have an version of this using aiofile and aiofiles
# This is an sync version without any dependencies is normally much faster in CPython and PyPy3
-# In production we highly recomend to use CDN like CloudFlare or/and NGINX or similar for static files
+# In production we highly recommend to use CDN like CloudFlare or/and NGINX or similar for static files
async def sendfile(res, req, filename):
# read headers before the first await
if_modified_since = req.get_header("if-modified-since")
@@ -39,7 +39,7 @@ async def sendfile(res, req, filename):
# check if modified since is provided
if if_modified_since == last_modified:
return res.write_status(304).end_without_body()
-# tells the broswer the last modified date
+# tells the browser the last modified date
res.write_header(b"Last-Modified", last_modified)
# add content type

View file

@@ -1014,7 +1014,7 @@ class RequestResponseFactory:
res._aborted_handler = None
res._writable_handler = None
res._data_handler = None
-res._grabed_abort_handler_once = False
+res._grabbed_abort_handler_once = False
res._write_jar = None
res._cork_handler = None
res._lastChunkOffset = 0
@@ -1266,7 +1266,7 @@ class AppResponse:
self._writable_handler = None
self._data_handler = None
self._ptr = ffi.new_handle(self)
-self._grabed_abort_handler_once = False
+self._grabbed_abort_handler_once = False
self._write_jar = None
self._cork_handler = None
self._lastChunkOffset = 0
@@ -1443,8 +1443,8 @@ class AppResponse:
def grab_aborted_handler(self):
# only needed if is async
-if not self.aborted and not self._grabed_abort_handler_once:
-self._grabed_abort_handler_once = True
+if not self.aborted and not self._grabbed_abort_handler_once:
+self._grabbed_abort_handler_once = True
lib.uws_res_on_aborted(
self.SSL, self.res, uws_generic_aborted_handler, self._ptr
)
@@ -1744,7 +1744,7 @@ class AppResponse:
class App:
-def __init__(self, options=None, request_response_factory_max_itens=0, websocket_factory_max_itens=0):
+def __init__(self, options=None, request_response_factory_max_items=0, websocket_factory_max_items=0):
socket_options_ptr = ffi.new("struct us_socket_context_options_t *")
socket_options = socket_options_ptr[0]
self.options = options
@@ -1811,13 +1811,13 @@ class App:
self.error_handler = None
self._missing_server_handler = None
-if request_response_factory_max_itens and request_response_factory_max_itens >= 1:
-self._factory = RequestResponseFactory(self, request_response_factory_max_itens)
+if request_response_factory_max_items and request_response_factory_max_items >= 1:
+self._factory = RequestResponseFactory(self, request_response_factory_max_items)
else:
self._factory = None
-if websocket_factory_max_itens and websocket_factory_max_itens >= 1:
-self._ws_factory = WebSocketFactory(self, websocket_factory_max_itens)
+if websocket_factory_max_items and websocket_factory_max_items >= 1:
+self._ws_factory = WebSocketFactory(self, websocket_factory_max_items)
else:
self._ws_factory = None
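# With the rename applied, enabling the pools reads as this minimal
# sketch (the pool sizes are illustrative; 0, the default, disables pooling):
# app = App(
#     request_response_factory_max_items=200_000,
#     websocket_factory_max_items=20_000,
# )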

View file

@@ -23,7 +23,7 @@ class SSGIHttpResponse:
self.res.end(payload)
# send chunk of data, can be used to perform with less backpressure than using send
-# total_size is the sum of all lenghts in bytes of all chunks to be sended
+# total_size is the sum of all lengths in bytes of all chunks to be sended
# connection will end when total_size is met
# returns tuple(bool, bool) first bool represents if the chunk is succefully sended, the second if the connection has ended
def send_chunk(self, chunk: Union[bytes, bytearray, memoryview], total_size: int) -> Awaitable:
@@ -86,7 +86,7 @@ class SSGIHttpResponse:
self._received_queue.put(future, False)
return future
-# on aborted event, calle when the connection abort
+# on aborted event, called when the connection abort
def on_aborted(self, handler: Union[Awaitable, Callable], *arguments):
def on_aborted(res):
res.aborted = True