This allows user code that inherits from uio.IOBase to return an errno
error code from the user readinto/write function, by returning a negative
value, e.g. returning -123 means an errno of 123. This is already how the
custom ioctl works.
A user class derived from IOBase and implementing readinto/write/ioctl can
now be used anywhere a native stream object is accepted.
The mapping from C to Python is:
stream_p->read --> readinto(buf)
stream_p->write --> write(buf)
stream_p->ioctl --> ioctl(request, arg)
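As a minimal sketch of the error convention, the readinto method below
signals "no data yet" by returning a negative errno (the class name and
the chosen errno are illustrative only):

    import io
    import errno

    class NonBlockingSource(io.IOBase):
        def readinto(self, buf):
            # Nothing to read yet: report it by returning the negated
            # errno value (same convention as the custom ioctl).
            return -errno.EAGAIN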
Among other things it allows the user to:
- create an object which can be passed as the file argument to print:
print(..., file=myobj), and then print will pass all the data to the
object via the object's write method (same as CPython)
- pass a user object to uio.BufferedWriter to buffer the writes (same as
CPython)
- use select.select on a user object
- register user objects with select.poll, in particular so user objects can
be used with uasyncio (see the poll sketch below)
- create user files that can be returned from user filesystems, and import
can import scripts from these user files
For example:
    class MyOut(io.IOBase):
        def write(self, buf):
            print('write', repr(buf))
            return len(buf)
    print('hello', file=MyOut())
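For the select.poll use case above, a sketch along the following lines
should work; it assumes the module is available as select (uselect on
some ports) and that the poll request code passed to ioctl is
MP_STREAM_POLL, value 3 in py/stream.h at the time of writing:

    import io
    import select

    class AlwaysReadable(io.IOBase):
        def ioctl(self, request, arg):
            # Request 3 is MP_STREAM_POLL; arg holds the flags being
            # polled for, and the subset that is ready is returned.
            if request == 3:
                return arg & select.POLLIN
            return 0

    p = select.poll()
    p.register(AlwaysReadable(), select.POLLIN)
    print(p.poll(0))  # roughly: [(<AlwaysReadable ...>, 1)]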
The feature is enabled via MICROPY_PY_IO_IOBASE, which is disabled by
default.
This patch simplifies the str creation API to favour the common case of
creating a str object that is not forced to be interned. To force
interning of a new str the new mp_obj_new_str_via_qstr function is added,
and should only be used if warranted.
Apart from simplifying the mp_obj_new_str function (and making it have the
same signature as mp_obj_new_bytes), this patch also reduces code size by a
bit (-16 bytes for bare-arm and roughly -40 bytes on the bare-metal archs).
The semantics of this function are close to the
pkg_resources.resource_stream() function from setuptools, which is the
canonical way to access non-source files belonging to a package
(resources), regardless of what medium the package uses (e.g. individual
source files vs a zip archive). In the case of MicroPython, this function
allows access to resources which are frozen into the executable, besides
accessing resources in the filesystem.
This is the initial stage of the implementation, which doesn't actually
implement the "package" part of the semantics: frozen resources are
accessed from the "root", and filesystem resources from the current
directory.
Both read and write operations support variants where either a) a single
call is made to the underlying stream implementation and the returned
buffer length may be less than requested, or b) calls are repeated until
the requested amount of data is collected, with a shorter amount returned
only in case of EOF or error.
These operations are available at every level, from C support functions
to be used by other C modules, up to implementations of Python methods to
be used in user-facing objects.
The rationale for these changes is to allow writing concise and robust
code to work with *blocking* streams of types prone to short reads, like
serial interfaces and sockets. Particular object types may select "exact"
vs "once" types of methods depending on their needs. E.g., for sockets,
recv() and send() methods continue to be "once", while read() and write()
are thus converted to "exact" versions.
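A sketch of the distinction for a blocking TCP socket (the host name is
illustrative, and the module may be named socket or usocket depending on
the port):

    import usocket as socket

    sock = socket.socket()
    sock.connect(socket.getaddrinfo("example.com", 80)[0][-1])
    sock.send(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")

    chunk = sock.recv(128)  # "once": may return fewer than 128 bytes
    rest = sock.read(128)   # "exact": repeats the underlying read until
                            # 128 bytes, EOF or error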
These changes don't affect non-blocking handling: e.g. calling an "exact"
method on a non-blocking socket will return as much data as is available
without blocking. No data available continues to be signaled by a None
return value from read() and write().
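Continuing the sketch above for the non-blocking case (setblocking() is
the usual socket API, not something added by this change):

    sock.setblocking(False)

    data = sock.read(128)  # returns whatever is currently available,
    if data is None:       # or None if there is no data at all yet
        print("no data available")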
From the point of view of CPython compatibility, this model is a cross
between its io.RawIOBase and io.BufferedIOBase abstract classes. For
blocking streams, it works as the io.BufferedIOBase model (guaranteeing
lack of short reads/writes), while for non-blocking streams it works as
io.RawIOBase, returning None in case of lack of data (instead of raising
an expensive exception, as required by io.BufferedIOBase). Such
cross-behavior should be optimal for MicroPython's needs.
The functionality we provide in the builtin io module is fairly minimal.
Some code, including the CPython stdlib, depends on more functionality.
So there's a choice: either implement it in C, or rename the builtin
module to _io and implement the remaining functionality in Python. The
second choice is pursued here. This setup matches CPython too (_io is
builtin, io is Python-level).
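A minimal sketch of the resulting layout, with the Python-level io
re-exporting the builtin _io and adding extras on top (the constants
shown are just an illustration of what can live in Python):

    # io.py -- Python-level part layered over the builtin _io module
    from _io import *

    # Pure-Python additions go here, e.g. the standard seek constants.
    SEEK_SET = 0
    SEEK_CUR = 1
    SEEK_END = 2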
io.FileIO is binary I/O, and is actually optional. The default file type
is io.TextIOWrapper, which provides str results. CPython3 explicitly
describes io.TextIOWrapper as buffered I/O, but we don't have buffering
support yet anyway.
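To illustrate the difference (the filename is illustrative, and FileIO
availability depends on the port configuration):

    f = open("main.py")        # io.TextIOWrapper: read() returns str
    b = open("main.py", "rb")  # io.FileIO (optional): read() returns bytes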
Blanket-wide to all .c and .h files. Some files originating from ST are
difficult to deal with (license-wise), so those were left out.
Also merged modpyb.h, modos.h, modstm.h and modtime.h in stmhal/.