It performs the handshake in a blocking manner, hopes that writes
complete without short writes, and hopes that non-blocking read
is implemented properly by the ussl module (there are known issues
with the axTLS module, for example).
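A minimal sketch of that approach, assuming MicroPython's socket and
ussl modules (the wrap_ssl_blocking helper name is illustrative, not
part of the actual code):

    import ussl

    def wrap_ssl_blocking(sock):
        # Keep the socket blocking for the handshake, so wrap_socket()
        # finishes the full TLS negotiation before returning.
        sock.setblocking(True)
        ssl_sock = ussl.wrap_socket(sock)
        # Switch to non-blocking afterwards, relying on the ussl port to
        # handle non-blocking reads correctly (axTLS is known not to).
        sock.setblocking(False)
        return ssl_sock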
The currently executed task is a top-level coroutine scheduled in the event
loop (note that sub-coroutines aren't scheduled in the event loop and
are executed implicitly by yield from/await, driven by the top-level coroutine).
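To illustrate the distinction, a small sketch assuming the pre-3.x
uasyncio API:

    import uasyncio as asyncio

    async def sub():
        # Sub-coroutine: never enters the scheduler's queue itself; it is
        # driven inline by the await in its caller.
        await asyncio.sleep(0)

    async def main():
        # Top-level coroutine: this is the task the event loop schedules.
        await sub()

    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())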
The API is not stable and is guaranteed to change, likely completely.
Currently, this tries to follow the same idea as the TCP open_connection()
call, but that doesn't make much sense here, so this will likely change.
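For reference, the open_connection() shape being mimicked looks roughly
like this (a sketch assuming the pre-3.x uasyncio stream API):

    import uasyncio as asyncio

    async def fetch():
        # Returns a (reader, writer) pair, mirroring TCP open_connection().
        reader, writer = await asyncio.open_connection("example.com", 80)
        await writer.awrite(b"GET / HTTP/1.0\r\n\r\n")
        print(await reader.read())
        await writer.aclose()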
The idea behind this implementation is that getrandbits() is guaranteed
(required) to be uniformly distributed. Thus, we can just repeatedly
sample it until we get a suitable value; no bias accumulates, and
the process should be finite (and on average take a few iterations).
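A minimal sketch of this rejection-sampling idea (written against
CPython's random module for portability; MicroPython's equivalent
lives in urandom):

    from random import getrandbits

    def randbelow(n):
        # Smallest number of bits that can represent n - 1.
        k = (n - 1).bit_length()
        while True:
            r = getrandbits(k)
            # getrandbits(k) is uniform over [0, 2**k); rejecting r >= n
            # keeps accepted values uniform over [0, n) with no bias.
            if r < n:
                return r

Since 2**(k-1) <= n - 1, each iteration accepts with probability
n / 2**k > 1/2, so the expected number of iterations is below 2.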
The poll socket is what's passed to uselect.poll(), while the I/O socket is
what's used for .read(). This is a workaround for the fact that MicroPython
doesn't support proxying poll functionality for stream wrappers (like SSL,
websocket, etc.).
This issue is tracked as https://github.com/micropython/micropython/issues/3394
It may be that it's more efficient to apply such a workaround on the uasyncio
level rather than implementing a full solution on the uPy side.
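A sketch of the dual-socket idea (names like PollProxyStream are
illustrative, not the actual classes):

    import usocket, ussl

    class PollProxyStream:
        # Pairs the raw socket (registered with the poller) with the
        # wrapped stream (used for actual I/O), since the wrapper itself
        # can't be registered with uselect.poll().
        def __init__(self, poll_sock, io_stream):
            self.poll_sock = poll_sock  # passed to uselect.poll().register()
            self.io_stream = io_stream  # used for .read()/.write()

    def open_tls(host, port):
        sock = usocket.socket()
        sock.connect(usocket.getaddrinfo(host, port)[0][-1])
        return PollProxyStream(sock, ussl.wrap_socket(sock))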
POLLHUP/POLLERR may be returned at any time (per POSIX, these flags aren't
even valid as input flags, they only appear in output flags). A subsequent
I/O operation on the stream will lead to an exception. If an application
doesn't do proper exception handling, the stream won't be closed, and
subsequent calls will return POLLHUP/POLLERR status again (infinitely).
So, proactively unregister such a stream.
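A sketch of that proactive unregister inside the poll loop (the
_schedule callback name is hypothetical):

    import uselect

    def wait(poller, delay, _schedule):
        for res in poller.ipoll(delay):
            sock, ev = res[0], res[1]
            if ev & (uselect.POLLHUP | uselect.POLLERR):
                # Without this, an app that never closes the errored stream
                # would see the same POLLHUP/POLLERR on every wait() call.
                poller.unregister(sock)
            _schedule(sock, ev)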
This change is questionable, because apps should handle errors properly
and close the stream in such a case (or it will be leaked), and closing
would remove the stream from the poller anyway.
But again, if that's not done, it may lead to a cascade of adverse effects.
For example, after eef054d98, benchmark/test_http_server_heavy.py regressed
and started to throw utimeq queue overflow exceptions. The story behind it:
the Boom benchmarker makes an initial probe request to the app under test,
which the app apparently doesn't handle properly, leading to
EPIPE/ECONNRESET on the side of the test app. The app didn't close the
socket, so each invocation of .wait() resulted in that socket being
returned with POLLHUP again and again. Given that after eef054d98, .wait()
is called on each event loop iteration, this created a positive feedback
loop in the queue, leading it to grow until overflow.