initial docs, still incomplete but is a lot

pull/39/head
Ciro 2022-11-17 18:36:28 -03:00
parent a41b45d677
commit cd65043250
16 changed files with 976 additions and 0 deletions

docs/README.md
# socketify.py
<p align="center">
<a href="https://github.com/cirospaciari/socketify.py"><img src="https://raw.githubusercontent.com/cirospaciari/socketify.py/main/misc/logo.png" alt="Logo" height=170></a>
<br />
<br />
<a href="https://github.com/cirospaciari/socketify.py/actions/workflows/macos.yml" target="_blank"><img src="https://github.com/cirospaciari/socketify.py/actions/workflows/macos.yml/badge.svg" /></a>
<a href="https://github.com/cirospaciari/socketify.py/actions/workflows/linux.yml" target="_blank"><img src="https://github.com/cirospaciari/socketify.py/actions/workflows/linux.yml/badge.svg" /></a>
<a href="https://github.com/cirospaciari/socketify.py/actions/workflows/windows.yml" target="_blank"><img src="https://github.com/cirospaciari/socketify.py/actions/workflows/windows.yml/badge.svg" /></a>
<a href="https://github.com/sponsors/cirospaciari/" target="_blank"><img src="https://img.shields.io/static/v1?label=Sponsor&message=%E2%9D%A4&logo=GitHub&link=https://github.com/sponsors/cirospaciari"/></a>
</p>
<br/>
Socketify.py is a reliable, high-performance Python web framework for building large-scale app backends and microservices.
With unprecedented WebSocket performance and a really fast HTTP server that can deliver encrypted TLS 1.3 traffic faster than most alternative servers can deliver unencrypted, cleartext messages.
## Summary
- [Installation](installation.md)
- [Getting Started](getting-started.md)
- [Corking Concept](corking.md)
- [Routes](routes.md)
- [Middlewares](middlewares.md)
- [Basics](basics.md)
- [Upload and Post](upload-post.md)
- [Streaming Data](streaming-data.md)
- [Send File and Static Files](static-files.md)
- [Templates](templates.md)
- [GraphiQL](graphiQL.md)
- [WebSockets and Backpressure](websockets-backpressure.md)
- [SSL](ssl.md)
- [API Reference](api.md)

docs/_sidebar.md
<!-- docs/_sidebar.md -->
- [Installation](installation.md)
- [Getting Started](getting-started.md)
- [Corking Concept](corking.md)
- [Routes](routes.md)
- [Middlewares](middlewares.md)
- [Basics](basics.md)
- [Upload and Post](upload-post.md)
- [Streaming Data](streaming-data.md)
- [Send File and Static Files](static-files.md)
- [Templates](templates.md)
- [GraphiQL](graphiQL.md)
- [WebSockets and Backpressure](websockets-backpressure.md)
- [SSL](ssl.md)
- [API Reference](api.md)

docs/api.md
Docs coming soon...

docs/basics.md
## All Basic Stuff
This section shows the basics of `AppResponse` and `AppRequest`.
### Writing data
`res.write(message)`, where message can be a String, bytes, or an Object that can be converted to JSON, sends the message to the response without ending it.
`res.cork_end(message, end_connection=False)` or `res.end(message, end_connection=False)`, where message can be a String, bytes, or an Object that can be converted to JSON, sends the message to the response and ends the response.
A call to `res.end()` or `res.cork_end()` will actually call three separate send functions: `res.write_status`, `res.write_header`, and whatever it does itself. By wrapping the call in `res.cork` or `res.cork_end` you make sure these three send functions are efficient, resulting in one single send syscall and one single SSL block when using SSL.
`res.write_continue()` writes HTTP/1.1 100 Continue as a response.
`res.write_offset(offset)` sets the offset for writing data.
`res.get_write_offset()` gets the current write offset.
`res.pause()` and `res.resume()` pause and resume the response.
```python
def send_in_parts(res, req):
    # write and end accept bytes and str, or try to dump the value to JSON
    res.write("I can")
    res.write(" send ")
    res.write("messages")
    res.end(" in parts!")
```
### Ending without body
```python
def empty(res, req):
    res.end_without_body()
```
### Check if already responded
`res.has_responded()` returns True if the response is already done.
### Redirecting
```python
def redirect(res, req):
    # the status code is optional, default is 302
    res.redirect("/redirected", 302)
```
### Writing Status
```python
def not_found(res, req):
    res.write_status(404).end("Not Found")

def ok(res, req):
    res.write_status("200 OK").end("OK")
```
### Check the URL or Method
`req.get_full_url()` will return the path with the query string.
`req.get_url()` will return the path without the query string.
`req.get_method()` will return the HTTP method (case sensitive).
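The difference between the full URL and the path can be illustrated with the stdlib (a sketch using a hypothetical URL value, not socketify internals):

```python
from urllib.parse import urlsplit

# hypothetical value that req.get_full_url() might return
full_url = "/search?query=socketify&page=2"

parts = urlsplit(full_url)
path = parts.path           # what req.get_url() would return
query_string = parts.query  # the raw query string

print(path)          # /search
print(query_string)  # query=socketify&page=2
```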
### Parameters
You can use `req.get_parameter(index)` to get a parameter value as String or use `req.get_parameters()` to get a list of all parameters.
```python
def user(res, req):
    if int(req.get_parameter(0)) == 1:
        return res.end("Hello user with id 1!")
    params = req.get_parameters()
    print("All params", params)

app.get("/user/:id", user)
```
### Headers
You can use `req.get_header(lowercase_header_name)` to get the header value as String, `req.get_headers()` to get all headers as a dict, or `req.for_each_header()` if you just want to iterate over the headers.
You can also set the header using `res.write_header(name, value)`.
```python
def home(res, req):
    auth = req.get_header("authorization")
    headers = req.get_headers()
    print("All headers", headers)

def custom_header(res, req):
    res.write_header("Content-Type", "application/octet-stream")
    res.write_header("Content-Disposition", 'attachment; filename="message.txt"')
    res.end("Downloaded this ;)")

def list_headers(res, req):
    req.for_each_header(lambda key, value: print("Header %s: %s" % (key, value)))
```
### Query String
You can use `req.get_query(parameter_name)` to get a query string value as String or use `req.get_queries()` to get all of them as a dict.
```python
def home(res, req):
    search = req.get_query("search")
    queries = req.get_queries()
    print("All queries", queries)
```
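Conceptually, `req.get_queries()` is the parsed form of the raw query string; the stdlib `parse_qs` produces a similar shape (a sketch of the parsing, not socketify's actual parser):

```python
from urllib.parse import parse_qs

# hypothetical raw query string from a request
raw = "search=socketify&tag=python&tag=web"
queries = parse_qs(raw)
print(queries)  # {'search': ['socketify'], 'tag': ['python', 'web']}
```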
### Cookies
We also have `req.get_cookie(cookie_name)` to get a cookie value as String and `res.set_cookie(name, value, options=None)` to set a cookie.
```python
from datetime import datetime, timedelta

def cookies(res, req):
    # cookies are written after end
    res.set_cookie(
        "session_id",
        "1234567890",
        {
            # supported options: expires, path, comment, domain,
            # max-age, secure, version, httponly, samesite
            "path": "/",
            # "domain": "*.test.com",
            "httponly": True,
            "samesite": "None",
            "secure": True,
            "expires": datetime.utcnow() + timedelta(minutes=30),
        },
    )
    res.end("Your session_id cookie is: %s" % req.get_cookie("session_id"))
```
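The options dict maps onto standard `Set-Cookie` attributes; as a rough illustration of the resulting header, using only the stdlib (not socketify's own serializer):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "1234567890"
cookie["session_id"]["path"] = "/"
cookie["session_id"]["httponly"] = True
cookie["session_id"]["samesite"] = "None"
cookie["session_id"]["secure"] = True

# the value that would go in the Set-Cookie response header
header = cookie["session_id"].OutputString()
print(header)
```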
## Getting the remote address
You can get the remote address using `res.get_remote_address_bytes()` or `res.get_remote_address()`, and the proxied address using `res.get_proxied_remote_address_bytes()` or `res.get_proxied_remote_address()`.
```python
def home(res, req):
    res.write("<html><h1>")
    res.write("Your proxied IP is: %s" % res.get_proxied_remote_address())
    res.write("</h1><h1>")
    res.write("Your IP as seen by the origin server is: %s" % res.get_remote_address())
    res.end("</h1></html>")
```
> The difference between the _bytes() versions and the non-bytes versions is that one returns a String and the other the raw bytes.
## App Pub/Sub
`app.num_subscribers(topic)` will return the number of subscribers of the topic.
`app.publish(topic, message, opcode=OpCode.BINARY, compress=False)` will send a message to everyone subscribed to the topic.
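The semantics can be modeled with a plain-Python sketch (a conceptual model only; the real implementation delivers messages over WebSockets):

```python
class PubSub:
    def __init__(self):
        self.topics = {}  # topic -> set of subscriber callbacks

    def subscribe(self, topic, subscriber):
        self.topics.setdefault(topic, set()).add(subscriber)

    def num_subscribers(self, topic):
        # mirrors app.num_subscribers(topic)
        return len(self.topics.get(topic, ()))

    def publish(self, topic, message):
        # mirrors app.publish(topic, message): deliver to every subscriber
        for subscriber in self.topics.get(topic, ()):
            subscriber(message)

broker = PubSub()
inbox = []
broker.subscribe("news", inbox.append)
broker.publish("news", "hello")
print(broker.num_subscribers("news"))  # 1
print(inbox)  # ['hello']
```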
## Check if aborted
If the connection was aborted you can check `res.aborted`, which will return True or False. You can also register an abort handler; when using an async route, socketify will always auto-grab the abort handler.
```python
def home(res, req):
    def on_abort(res):
        res.aborted = True
        print("aborted!")

    res.on_aborted(on_abort)
```
## Running async from a sync route
If you want to optimize and avoid using an async route when you don't need one, you can use `res.run_async()` or `app.run_async()` to execute a coroutine:
```python
from socketify import App, sendfile
def route_handler(res, req):
if in_memory_text:
res.end(in_memory_text)
else:
# grab the abort handler adding it to res.aborted if aborted
res.grab_aborted_handler()
res.run_async(sendfile(res, req, "my_text"))
```
## Raw socket pointer
If for some reason you need the raw socket pointer, you can use `res.get_native_handle()`, which returns a CFFI handle.
## Raw event loop pointer
If you need to access the raw pointer of `libuv`, you can use `app.get_native_handle()`, which returns a CFFI handle.
## Preserve data for use after await
The HttpRequest object is stack-allocated and only valid during one single callback invocation, so it is only valid in the first "segment" before the first await.
If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, at some performance penalty.
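The idea behind `req.preserve()` can be pictured as taking an eager snapshot before the first await (hypothetical field names, purely illustrative, not socketify internals):

```python
import dataclasses

@dataclasses.dataclass
class PreservedRequest:
    # hypothetical snapshot of the data kept valid after an await
    url: str
    method: str
    headers: dict
    queries: dict

def snapshot(url, method, headers, queries):
    # copy everything now, while the underlying request is still valid
    return PreservedRequest(url, method, dict(headers), dict(queries))

req_data = snapshot("/user/1", "GET", {"authorization": "token"}, {"page": "2"})
print(req_data.method, req_data.url)  # GET /user/1
```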

docs/corking.md
## Corking
It is very important to understand the corking mechanism, as it is responsible for efficiently formatting, packing and sending data. Without corking your app will still work reliably, but it can perform very badly and use excessive networking. In some cases the performance can be dreadful without proper corking.
That's why your sockets will be corked by default in most simple cases, including all of the examples provided. However, there are cases where default corking cannot happen automatically.
Whenever your registered business logic (your callbacks) is called from the library, such as when receiving a message or when a socket opens, you'll be corked by default. Whatever you do with the socket inside of that callback will be efficient and properly corked.
If you have callbacks registered to some other library, say libhiredis, those callbacks will not be called with corked sockets (how could we know when to cork the socket if we don't control the third-party library!?).
Only one single socket can be corked at any point in time (isolated per thread, of course). It is efficient to cork-and-uncork.
Whenever your callback is a coroutine, such as with async/await, automatic corking can only happen in the very first portion of the coroutine (consider await a separator which essentially cuts the coroutine into smaller segments). Only the first "segment" of the coroutine will be called from socketify; the following async segments will be called by the asyncio event loop at a later point in time and will thus not be under our control with default corking enabled. The HttpRequest object is stack-allocated and only valid during one single callback invocation, so it is only valid in the first "segment" before the first await.
If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, at some performance penalty.
Corking is important even for calls which seem to be "atomic" and only send one chunk. res.end, res.try_end, res.write_status and res.write_header will most likely send multiple chunks of data, so it is very important to properly cork.
You can make sure corking is enabled, even for cases where default corking would not happen, by wrapping whatever sending function calls in a lambda or function like so:
```python
async def home(res, req):
    auth = req.get_header("authorization")
    user = await do_auth(auth)
    res.cork(lambda res: res.end(f"Hello {user.name}"))
```
```python
async def home(res, req):
    auth = req.get_header("authorization")
    user = await do_auth(auth)

    def on_cork(res):
        res.write_header("session_id", user.session_id)
        res.end(f"Hello {user.name}")

    res.cork(on_cork)
```
> You cannot use async inside cork, but you can cork only when you need to send the response, after all the async work happens
For convenience we have `res.cork_end()` and `ws.cork_send()`, which will cork and call end/send for you, and also `res.render()`, which always responds using `res.cork_end()` to send your HTML / data.
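The effect corking has on syscalls can be simulated with a toy socket that counts send calls (purely illustrative, not the real mechanism):

```python
class ToySocket:
    def __init__(self):
        self.syscalls = 0
        self.corked = False
        self.buffer = b""

    def send(self, data):
        if self.corked:
            self.buffer += data  # batched, flushed when uncorking
        else:
            self.syscalls += 1   # one syscall per write

    def cork(self, callback):
        self.corked = True
        callback(self)
        self.corked = False
        if self.buffer:
            self.syscalls += 1   # single syscall for the whole batch
            self.buffer = b""

# uncorked: status, headers and body each cost a syscall
uncorked = ToySocket()
for part in (b"status", b"headers", b"body"):
    uncorked.send(part)
print(uncorked.syscalls)  # 3

# corked: the same three writes collapse into one send
corked = ToySocket()
corked.cork(lambda s: [s.send(b"status"), s.send(b"headers"), s.send(b"body")])
print(corked.syscalls)  # 1
```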

## Getting Started
First we need to install everything, following this [Installation Guide](installation.md).
Now that we have everything set up, let's take a look at some quick examples.
Hello world app:
```python
from socketify import App
app = App()
app.get("/", lambda res, req: res.end("Hello World socketify from Python!"))
app.listen(3000, lambda config: print("Listening on port http://localhost:%d now\n" % config.port))
app.run()
```
> This example just shows how intuitive it is to start a simple hello world app.
SSL version sample:
```python
from socketify import App, AppOptions
app = App(AppOptions(key_file_name="./misc/key.pem", cert_file_name="./misc/cert.pem", passphrase="1234"))
app.get("/", lambda res, req: res.end("Hello World socketify from Python!"))
app.listen(3000, lambda config: print("Listening on port http://localhost:%d now\n" % config.port))
app.run()
```
> We have a lot of SSL options, but this is the most common; you can see all the options in the [API Reference](api.md)
WebSockets
```python
from socketify import App, AppOptions, OpCode, CompressOptions
def ws_open(ws):
    print('A WebSocket got connected!')
    ws.send("Hello World!", OpCode.TEXT)

def ws_message(ws, message, opcode):
    # ok is False if backpressure was built up, wait for drain
    ok = ws.send(message, opcode)
app = App()
app.ws("/*", {
    'compression': CompressOptions.SHARED_COMPRESSOR,
    'max_payload_length': 16 * 1024 * 1024,
    'idle_timeout': 12,
    'open': ws_open,
    'message': ws_message,
    'drain': lambda ws: print('WebSocket backpressure: %i' % ws.get_buffered_amount()),
    'close': lambda ws, code, message: print('WebSocket closed')
})
app.any("/", lambda res, req: res.end("Nothing to see here!"))
app.listen(3000, lambda config: print("Listening on port http://localhost:%d now\n" % (config.port)))
app.run()
```
> We can have multiple routes for WebSockets, but in this example we just use one for everything, adding compression with SHARED_COMPRESSOR, a max_payload_length of 16mb and an idle_timeout of 12s, just to show some of the most commonly used features; you can see all the options in the [API Reference](api.md)
If you want to see some more examples, you can go to our [examples folder](https://github.com/cirospaciari/socketify.py/tree/main/examples) for more than 25 quick examples.

docs/graphiql.md
## GraphiQL Support
In /src/examples/helper/graphiql.py we implemented a helper for using GraphiQL with strawberry.
### Usage
```python
import dataclasses
import strawberry
import strawberry.utils.graphiql

from socketify import App
from typing import List, Optional
from helpers.graphiql import graphiql_from

@strawberry.type
class User:
    name: str

@strawberry.type
class Query:
    @strawberry.field
    def user(self) -> Optional[User]:
        # self.context is the AppRequest
        return User(name="Hello")

app = App()
app.get("/", lambda res, req: res.end(strawberry.utils.graphiql.get_graphiql_html()))
app.post("/", graphiql_from(Query))
# you can also pass a Mutation as second parameter
# app.post("/", graphiql_from(Query, Mutation))
app.listen(
    3000,
    lambda config: print("Listening on port http://localhost:%d now\n" % config.port),
)
app.run()
```
### Helper Implementation
```python
import strawberry
import strawberry.utils.graphiql

def graphiql_from(Query, Mutation=None):
    if Mutation:
        schema = strawberry.Schema(query=Query, mutation=Mutation)
    else:
        schema = strawberry.Schema(Query)

    async def post(res, req):
        # we can pass whatever we want to context: query, headers, params, cookies etc
        context_value = req.preserve()
        # get all incoming data and parse it as json
        body = await res.get_json()
        query = body["query"]
        variables = body.get("variables", None)
        root_value = body.get("root_value", None)
        operation_name = body.get("operation_name", None)
        data = await schema.execute(
            query,
            variables,
            context_value,
            root_value,
            operation_name,
        )
        res.cork_end(
            {
                "data": (data.data),
                **({"errors": data.errors} if data.errors else {}),
                **({"extensions": data.extensions} if data.extensions else {}),
            }
        )

    return post
```

## 📦 Installation
For macOS x64 & Silicon, Linux x64, Windows
```bash
pip install git+https://github.com/cirospaciari/socketify.py.git
#or specify PyPy3
pypy3 -m pip install git+https://github.com/cirospaciari/socketify.py.git
#or in editable mode
pypy3 -m pip install -e git+https://github.com/cirospaciari/socketify.py.git@main#egg=socketify
```
Or install via requirements.txt:
```text
git+https://github.com/cirospaciari/socketify.py.git@main#socketify
```
```bash
pip install -r ./requirements.txt
#or specify PyPy3
pypy3 -m pip install -r ./requirements.txt
```
If you are using Linux or macOS, you may need to install libuv and zlib on your system:
macOS
```bash
brew install libuv
brew install zlib
```
Linux
```bash
apt install libuv1 zlib1g
```

# Middlewares
We have support to middlewares using the `middleware` helper function and the `MiddlewareRouter`.
As mentioned in the [routes](routes.md) and [corking](corking.md) sections, whenever your callback is a coroutine, such as with async/await, automatic corking can only happen in the very first portion of the coroutine (consider await a separator which essentially cuts the coroutine into smaller segments). Only the first "segment" of the coroutine will be called from socketify; the following async segments will be called by the asyncio event loop at a later point in time and will thus not be under our control with default corking enabled. The HttpRequest object is stack-allocated and only valid during one single callback invocation, so it is only valid in the first "segment" before the first await.
In the case of middlewares, if one of them is a coroutine we automatically call `req.preserve()` to preserve request data between middlewares.
Middlewares are executed in series, so if the returned data is falsy (False/None etc) execution stops; if it is not falsy, the data is passed to the next middleware as a third parameter.
```python
from socketify import App, MiddlewareRouter, middleware

async def get_user(authorization):
    if authorization:
        # you can do something async here
        return {"greeting": "Hello, World"}
    return None

async def auth(res, req, data=None):
    user = await get_user(req.get_header("authorization"))
    if not user:
        res.write_status(403).end("not authorized")
        # returning Falsy in middlewares just stops the execution of the next middleware
        return False
    # returns extra data
    return user

def another_middie(res, req, data=None):
    # now we can mix sync and async and change the data here
    if isinstance(data, dict):
        greeting = data.get("greeting", "")
        data["greeting"] = f"{greeting} from another middie ;)"
    return data

def home(res, req, user=None):
    res.cork_end(user.get("greeting", None))

app = App()
# you can use middleware directly on the default router
app.get('/direct', middleware(auth, another_middie, home))

# you can use a MiddlewareRouter to add middlewares to every route you set
auth_router = MiddlewareRouter(app, auth)
auth_router.get("/", home)
# you can also mix middleware() with MiddlewareRouter
auth_router.get("/another", middleware(another_middie, home))

# you can also pass multiple middlewares on the MiddlewareRouter
other_router = MiddlewareRouter(app, auth, another_middie)
other_router.get("/another_way", home)

app.listen(
    3000,
    lambda config: print("Listening on port http://localhost:%d now\n" % config.port),
)
app.run()
```

docs/routes.md
# App.get, post, options, delete, patch, put, head, connect, trace and any routes
You attach behavior to "URL routes". A function/lambda is paired with a "method" (an HTTP method, that is) and a pattern (the URL matching pattern).
Methods are many, but the most common are probably get & post. They all have the same signature; let's look at one example:
```python
app.get("/", lambda res, req: res.end("Hello World!"))
```
```python
def home(res, req):
    res.end("Hello World!")

app.get("/", home)
```
```python
async def home(res, req):
    body = await res.get_json()
    user = await get_user(body)
    res.cork_end(f"Hello World! {user.name}")

app.post("/", home)
```
> Whenever your callback is a coroutine, such as with async/await, automatic corking can only happen in the very first portion of the coroutine (consider await a separator which essentially cuts the coroutine into smaller segments). Only the first "segment" of the coroutine will be called from socketify; the following async segments will be called by the asyncio event loop at a later point in time and will thus not be under our control with default corking enabled. The HttpRequest object is stack-allocated and only valid during one single callback invocation, so it is only valid in the first "segment" before the first await. If you just want to preserve headers, url, method, cookies and query string you can use `req.preserve()` to copy all data and keep it in the request object, at some performance penalty. Take a look at [Corking](corking.md) for more in-depth information.
## Pattern matching
Routes are matched in order of specificity, not by the order you register them:
- Highest priority - static routes, think "/hello/this/is/static".
- Middle priority - parameter routes, think "/candy/:kind", where value of :kind is retrieved by `req.get_parameter(0)`.
- Lowest priority - wildcard routes, think "/hello/*".
In other words, the more specific a route is, the earlier it will match. This allows you to define wildcard routes that match a wide range of URLs and then "carve" out more specific behavior from that.
"any" routes, those who match any HTTP method, will match with lower priority than routes which specify their specific HTTP method (such as GET) if and only if the two routes otherwise are equally specific.
## Skipping to the next Route
If you want to tell the router to go to the next route, you can call `req.set_yield(1)`.
Example:
```python
def user(res, req):
    try:
        if int(req.get_parameter(0)) == 1:
            return res.end("Hello user 1!")
    finally:
        # an invalid user tells the router to go to the next valid route (not found)
        req.set_yield(1)

def not_found(res, req):
    res.write_status(404).end("Not Found")

app.get("/", home)
app.get("/user/:user_id", user)
app.any("/*", not_found)
```
## Error handler
In case of some uncaught exceptions we will always try our best to call the error handler, you can set the handler using `app.set_error_handler(hanlder)`
```python
import asyncio

def xablau(res, req):
    raise RuntimeError("Xablau!")

async def async_xablau(res, req):
    await asyncio.sleep(1)
    raise RuntimeError("Async Xablau!")

# this handler can also be async, no problem
def on_error(error, res, req):
    # here you can log the error properly and send a pretty response to your clients
    print("Something went wrong: %s" % str(error))
    # response and request can be None if the error happened in an async function
    if res is not None:
        # if the response exists, try to send something
        res.write_status(500)
        res.end("Sorry, we did something wrong")

app.get("/", xablau)
app.get("/async", async_xablau)
app.set_error_handler(on_error)
```
## Proxies
We implement `Proxy Protocol v2` so you can use `res.get_proxied_remote_address()` to get the proxied IP.
```python
from socketify import App

def home(res, req):
    res.write("<html><h1>")
    res.write("Your proxied IP is: %s" % res.get_proxied_remote_address())
    res.write("</h1><h1>")
    res.write("Your IP as seen by the origin server is: %s" % res.get_remote_address())
    res.end("</h1></html>")

app = App()
app.get("/*", home)
app.listen(
    3000,
    lambda config: print("Listening on port http://localhost:%d now\n" % config.port),
)
app.run()
```
### Per-SNI HttpRouter Support
```python
def default(res, req):
    res.end("Hello from catch-all context!")

app.get("/*", default)

# Following is the context for the *.google.* domain
# PS: options are optional if you are not using SSL
app.add_server_name("*.google.*", AppOptions(key_file_name="./misc/key.pem", cert_file_name="./misc/cert.pem", passphrase="1234"))

def google(res, req):
    res.end("Hello from *.google.* context!")

app.domain("*.google.*").get("/*", google)

# you can also remove a server name
app.remove_server_name("*.google.*")
```

docs/ssl.md
Support is already there; docs coming soon...

## Sending Files and Serving Static Files
`app.static(route, path)` will serve all files in the directory as static files, with byte-range, 304 and 404 support.
If you want to send a single file, you can use the `sendfile` helper.
Example:
```python
from socketify import App, sendfile

app = App()

# send home page index.html
async def home(res, req):
    # sends the whole file with 304 and byte-range support
    await sendfile(res, req, "./public/index.html")

app.get("/", home)

# serve all files in the public folder under the /* route (you can use any route, like /assets)
app.static("/", "./public")

app.listen(
    3000,
    lambda config: print("Listening on port http://localhost:%d now\n" % config.port),
)
app.run()
```

# Streaming data
You should never call res.end(huge buffer); res.end guarantees sending, so backpressure will probably spike. Instead, you should use res.try_end to stream huge data part by part, in combination with the res.on_writable and res.on_aborted callbacks.
For simplicity, you can use `res.send_chunk`, which returns a Future and uses `res.on_writable` and `res.on_aborted` for you.
Using send_chunk:
```python
import os
import aiofiles

async def home(res, req):
    res.write_header("Content-Type", "audio/mpeg")
    filename = "./file_example_MP3_5MG.mp3"
    total = os.stat(filename).st_size
    async with aiofiles.open(filename, "rb") as fd:
        while not res.aborted:
            buffer = await fd.read(16384)  # 16kb buffer
            (ok, done) = await res.send_chunk(buffer, total)
            if not ok or done:  # if we cannot send, the request was probably aborted
                break
```
If you want to understand `res.send_chunk`, check out the implementation:
```python
def send_chunk(self, buffer, total_size):
    self._chunkFuture = self.loop.create_future()
    self._lastChunkOffset = 0

    def is_aborted(self):
        self.aborted = True
        try:
            if not self._chunkFuture.done():
                self._chunkFuture.set_result(
                    (False, True)
                )  # if aborted, set done to True and ok to False
        except:
            pass

    def on_writeble(self, offset):
        # Here the timeout is off, we can spend as much time as we want before calling try_end
        (ok, done) = self.try_end(
            buffer[offset - self._lastChunkOffset :], total_size
        )
        if ok:
            self._chunkFuture.set_result((ok, done))
        return ok

    self.on_writable(on_writeble)
    self.on_aborted(is_aborted)

    if self.aborted:
        self._chunkFuture.set_result(
            (False, True)
        )  # if aborted, set done to True and ok to False
        return self._chunkFuture

    (ok, done) = self.try_end(buffer, total_size)
    if ok:
        self._chunkFuture.set_result((ok, done))
        return self._chunkFuture
    # failed to send the whole chunk
    self._lastChunkOffset = self.get_write_offset()
    return self._chunkFuture
```
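The key piece of bookkeeping above is the slice `buffer[offset - self._lastChunkOffset:]`: write offsets are absolute across the whole response, so the chunk's own starting offset must be subtracted to find the unsent remainder. A tiny standalone sketch of that arithmetic:

```python
def remaining_slice(buffer, write_offset, last_chunk_offset):
    # write_offset is the absolute offset in the whole response;
    # last_chunk_offset is the absolute offset where this chunk started
    return buffer[write_offset - last_chunk_offset:]

chunk = b"0123456789"    # current chunk
last_chunk_offset = 100  # 100 bytes were already fully sent before this chunk
write_offset = 104       # 4 bytes of this chunk made it to the socket

print(remaining_slice(chunk, write_offset, last_chunk_offset))  # b'456789'
```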

docs/templates.md
## Template Engines
It is very easy to add support for template engines; we have already added `Mako` and `Jinja2` in /src/examples/helpers/templates.py.
### Implementation of Template extension:
```python
# Simple example of a mako and jinja2 template plugin for socketify.py
from mako.template import Template
from mako.lookup import TemplateLookup
from mako import exceptions

from jinja2 import Environment, FileSystemLoader

class Jinja2Template:
    def __init__(self, searchpath, encoding="utf-8", followlinks=False):
        self.env = Environment(
            loader=FileSystemLoader(searchpath, encoding, followlinks)
        )

    # You can also add a caching and logging strategy here if you want ;)
    def render(self, templatename, **kwargs):
        try:
            template = self.env.get_template(templatename)
            return template.render(**kwargs)
        except Exception as err:
            return str(err)

class MakoTemplate:
    def __init__(self, **options):
        self.lookup = TemplateLookup(**options)

    # You can also add a caching and logging strategy here if you want ;)
    def render(self, templatename, **kwargs):
        try:
            template = self.lookup.get_template(templatename)
            return template.render(**kwargs)
        except Exception as err:
            return exceptions.html_error_template().render()
```
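Any engine can be plugged in the same way, since `res.render(...)` only needs an object exposing a `render(templatename, **kwargs)` method. A minimal stdlib-only sketch of such an adapter (illustrative, not one of the shipped helpers):

```python
from string import Template

class StringTemplate:
    # minimal engine: templates are in-memory strings instead of files
    def __init__(self, templates):
        self.templates = templates

    def render(self, templatename, **kwargs):
        try:
            return Template(self.templates[templatename]).substitute(**kwargs)
        except Exception as err:
            return str(err)

engine = StringTemplate({"home.html": "<h1>$message</h1>"})
print(engine.render("home.html", message="Hello, World"))  # <h1>Hello, World</h1>
```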
### Using templates
`app.template(instance)` will register the Template extension and call it when you use `res.render(...)`
```python
from socketify import App

# see helper/templates.py for the plugin implementation
from helpers.templates import MakoTemplate

app = App()

# register the template engine
app.template(
    MakoTemplate(
        directories=["./templates"], output_encoding="utf-8", encoding_errors="replace"
    )
)

def home(res, req):
    res.render("mako_home.html", message="Hello, World")

app.get("/", home)
app.listen(
    3000,
    lambda config: print(
        "Listening on port http://localhost:%s now\n" % str(config.port)
    ),
)
app.run()
```

## Uploading or Getting the Posted data
### Manually getting the chunks
Using `res.on_data` you can grab any chunk sent to the request:
```python
def upload(res, req):
    def on_data(res, chunk, is_end):
        print(f"Got chunk of data with length {len(chunk)}, is_end: {is_end}")
        if is_end:
            res.cork_end("Thanks for the data!")

    res.on_data(on_data)
```
### Getting it in a single call
We created `res.get_data()` to get all the data at once; internally it will create a list of bytes chunks for you.
```python
async def upload_chunks(res, req):
    print(f"Posted to {req.get_url()}")
    # await all the data; returns the received chunks on failure (the most likely failure is an aborted request)
    data = await res.get_data()
    print(f"Got {len(data)} chunks of data!")
    # We respond when we are done
    res.cork_end("Thanks for the data!")
```
### Getting utf-8/encoded data
Similar to `res.get_data()`, `res.get_text(encoding="utf-8")` will decode as text with the encoding you want.
```python
async def upload_text(res, req):
    print(f"Posted to {req.get_url()}")
    # await all the data and decode it as text, returns None on failure
    text = await res.get_text()  # first parameter is the encoding (default utf-8)
    print(f"Your text is {text}")
    # We respond when we are done
    res.cork_end("Thanks for the data!")
```
### Getting JSON data
Similar to `res.get_data()`, `res.get_json()` will decode the json as dict.
```python
async def upload_json(res, req):
    print(f"Posted to {req.get_url()}")
    # await all the data and parse it as json, returns None on failure
    info = await res.get_json()
    print(info)
    # We respond when we are done
    res.cork_end("Thanks for the data!")
```
### Getting application/x-www-form-urlencoded data
Similar to `res.get_data()`, `res.get_form_urlencoded(encoding="utf-8")` will decode the application/x-www-form-urlencoded as dict.
```python
async def upload_urlencoded(res, req):
    print(f"Posted to {req.get_url()}")
    # await all the data and decode it as application/x-www-form-urlencoded, returns None on failure
    form = await res.get_form_urlencoded()  # first parameter is the encoding (default utf-8)
    print(f"Your form is {form}")
    # We respond when we are done
    res.cork_end("Thanks for the data!")
```
### Dynamically checking the Content-Type
You can always check the Content-Type header to dynamically detect and convert between multiple formats.
```python
async def upload_multiple(res, req):
    print(f"Posted to {req.get_url()}")
    content_type = req.get_header("content-type")
    # we can check the Content-Type to accept multiple formats
    if content_type == "application/json":
        data = await res.get_json()
    elif content_type == "application/x-www-form-urlencoded":
        data = await res.get_form_urlencoded()
    else:
        data = await res.get_text()
    print(f"Your data is {data}")
    # We respond when we are done
    res.cork_end("Thanks for the data!")
```
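The if/elif chain above can also be written as a dispatch table; a plain-Python sketch with stdlib stand-ins for the response methods (`json.loads` and `parse_qs` play the roles of `res.get_json` and `res.get_form_urlencoded` here):

```python
import json
from urllib.parse import parse_qs

# stand-ins for res.get_json / res.get_form_urlencoded / res.get_text
parsers = {
    "application/json": json.loads,
    "application/x-www-form-urlencoded": parse_qs,
}

def parse_body(content_type, raw_text):
    # fall back to plain text for any other Content-Type
    parser = parsers.get(content_type, lambda text: text)
    return parser(raw_text)

print(parse_body("application/json", '{"a": 1}'))              # {'a': 1}
print(parse_body("application/x-www-form-urlencoded", "a=1"))  # {'a': ['1']}
print(parse_body("text/plain", "hello"))                       # hello
```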

## The App.ws route
WebSocket "routes" are registered similarly, but not identically.
Every WebSocket route has the same pattern and pattern matching as Http, but instead of one single callback you have a whole set of them. Here's an example:
```python
app = App()
app.ws(
    "/*",
    {
        "compression": CompressOptions.SHARED_COMPRESSOR,
        "max_payload_length": 16 * 1024 * 1024,
        "idle_timeout": 12,
        "open": ws_open,
        "message": ws_message,
        "drain": lambda ws: print(
            "WebSocket backpressure: %s" % ws.get_buffered_amount()
        ),
        "close": lambda ws, code, message: print("WebSocket closed"),
    },
)
```
## Use the WebSocket.get_user_data() feature
Use the provided user data feature to attach any per-socket state. Going from user data back to the WebSocket is possible if you make your user data hold the WebSocket and hook things up in the open handler. Your user data is valid for as long as your WebSocket is.
If you want something more elaborate, the user data can hold a reference to a larger object that, for example, tracks whether the WebSocket is still valid. The sky is the limit here.
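As a minimal sketch (assuming user data was attached when the socket was upgraded and that it is a mutable dict; the handler names are placeholders):

```python
def ws_open(ws):
    # whatever was attached at upgrade time (None if nothing was attached)
    user = ws.get_user_data()
    if user is not None:
        # hook up a back-reference so we can go from user data to WebSocket
        user["ws"] = ws
        print(f"User {user.get('id')} connected")

def ws_close(ws, code, message):
    user = ws.get_user_data()
    if user is not None:
        # the WebSocket is no longer valid after close
        user["ws"] = None
```
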
## WebSockets are valid from open to close
Every WebSocket is guaranteed to live from the open event (where you received it) until the close event fires.
Message events are never emitted outside of open/close. Calling ws.close or ws.end immediately invokes the close handler.
## Backpressure in WebSockets
As with HTTP, methods such as ws.send(...) can cause backpressure. Make sure to check ws.get_buffered_amount() before sending, and check the return value of ws.send before sending any more data. WebSockets do not have an on_writable event, but instead use the .drain handler of the websocket route.
Inside the .drain event you should check ws.get_buffered_amount(); it might have drained, or even increased. Most likely it has drained, but don't assume so: the .drain event is only a hint that the buffered amount has changed.
## Backpressure
Sending on a WebSocket can build backpressure. ws.send returns an enum: BACKPRESSURE, SUCCESS or DROPPED. When send returns BACKPRESSURE you should stop sending data until the drain event fires and ws.get_buffered_amount() has returned to a reasonable number of bytes. If you specified a max_backpressure when registering the websocket route, that limit is enforced automatically: an attempt to send a message that would exceed it is canceled and send returns DROPPED, meaning the message was dropped and will not be queued. max_backpressure is an essential setting when using pub/sub, since a slow receiver could otherwise build up a lot of backpressure; by setting it, the library automatically manages and enforces a maximum allowed backpressure per socket for you.
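The send/drain dance can be sketched as a small flush helper. This is a hypothetical pattern, not socketify API: the SendStatus enum below is a local mirror of the statuses described above, and MAX_BUFFERED is an arbitrary threshold.

```python
import enum

class SendStatus(enum.Enum):
    # local mirror of the statuses described above; socketify exposes its own
    BACKPRESSURE = 0
    SUCCESS = 1
    DROPPED = 2

MAX_BUFFERED = 64 * 1024  # hypothetical threshold; tune per application

def flush(ws, queue):
    """Send queued messages until backpressure builds.

    Returns the messages still waiting; call again from the drain handler."""
    while queue:
        if ws.get_buffered_amount() > MAX_BUFFERED:
            break  # too much is already buffered; wait for the drain event
        status = ws.send(queue.pop(0))
        if status == SendStatus.BACKPRESSURE:
            break  # the message was buffered, but stop until drain fires
        # SUCCESS: keep going; DROPPED: rejected by max_backpressure, skip it
    return queue
```

From the route's drain handler you would call `flush(ws, queue)` again with whatever was left over.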
## Ping/pongs "heartbeats"
The library automatically sends pings to clients according to the idle_timeout specified. If you set idle_timeout = 120 seconds, a ping goes out a few seconds before this timeout unless the client has sent something to the server recently. If the client responds to the ping, the socket stays open; if it fails to respond in time, the socket is forcefully closed and the close event triggers. On disconnect all resources are freed, including subscriptions to topics and any backpressure. You can easily let the browser reconnect with three lines or so of JavaScript if you want to.
## Settings
Compression (permessage-deflate) has three main modes: CompressOptions.DISABLED, CompressOptions.SHARED_COMPRESSOR and the CompressOptions.DEDICATED_COMPRESSOR_xKB variants. The disabled and shared options require no extra memory, while a dedicated compressor requires the amount of memory you selected. For instance, CompressOptions.DEDICATED_COMPRESSOR_4KB adds an overhead of 4KB per WebSocket, while CompressOptions.DEDICATED_COMPRESSOR_256KB adds, you guessed it, 256KB!
Compressing with the shared option means every WebSocket message is an isolated compression stream; it does not keep a sliding compression window between multiple send calls like the dedicated variants do.
You probably want the shared compressor when dealing with larger JSON messages, the 4KB dedicated compressor for smaller JSON messages, and no compression at all for binary messaging.
idle_timeout is roughly the number of seconds that may pass between messages. If the connection stays idle longer than this, it is severed. This means your clients should send small ping messages every now and then to keep the connection alive; the server will also automatically send pings when it needs to.
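Pulling these settings together, a route registration might look like this (the handler names are placeholders and the specific values are illustrative, not recommendations):

```python
from socketify import App, CompressOptions

app = App()
app.ws(
    "/chat",
    {
        # 4KB sliding window per socket; a good fit for smaller JSON messages
        "compression": CompressOptions.DEDICATED_COMPRESSOR_4KB,
        # sever the connection after ~16 seconds of silence (pings are automatic)
        "idle_timeout": 16,
        # cap per-socket backpressure; sends beyond this return DROPPED
        "max_backpressure": 1024 * 1024,
        "max_payload_length": 16 * 1024,
        "open": ws_open,
        "message": ws_message,
        "drain": ws_drain,
        "close": ws_close,
    },
)
```
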