Previous to this patch, for large chunks of bytecode that originated from
a single source-code line, the bytecode-line mapping would generate
something like (for 42 bytecode bytes and 1 line):
BC_SKIP=31 LINE_SKIP=1
BC_SKIP=11 LINE_SKIP=0
This would mean that any errors in the last 11 bytecode bytes would be
reported on the following line. This patch fixes it to generate instead:
BC_SKIP=31 LINE_SKIP=0
BC_SKIP=11 LINE_SKIP=1
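A minimal sketch of the corrected emission order (the helper name and the
31-byte chunk limit are taken from the example above; this is illustrative,
not the actual emitter code):

    #include <stdio.h>

    // Hypothetical helper: emit one (BC_SKIP, LINE_SKIP) pair.
    static void write_skip_entry(unsigned bc_skip, unsigned line_skip) {
        printf("BC_SKIP=%u LINE_SKIP=%u\n", bc_skip, line_skip);
    }

    // Split a large bytecode advance into chunks of at most 31 bytes; the
    // line advance goes with the *last* chunk so that errors in the trailing
    // bytes are still reported on the current line.
    static void emit_line_skip(unsigned bc_skip, unsigned line_skip) {
        while (bc_skip > 31) {
            write_skip_entry(31, 0);
            bc_skip -= 31;
        }
        write_skip_entry(bc_skip, line_skip);
    }

Calling emit_line_skip(42, 1) produces the two corrected entries shown above.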
When an exception is raised and is to be handled by the VM, it is stored
on the Python value stack so the bytecode can access it. CPython stores
3 objects on the stack for each exception: exc type, exc instance and
traceback. uPy followed this approach, but it turns out not to be
necessary. Instead, it is enough to store just the exception instance on
the Python value stack. The only place where the 3 values are needed
explicitly is for the __exit__ handler of a with-statement context, but
for these cases the 3 values can be extracted from the single exception
instance.
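A hedged sketch of how the VM could rebuild that triple from the single
stored instance (simplified, in particular around tracebacks; not the exact
VM code):

    #include "py/obj.h"

    // exc is the exception instance taken from the Python value stack; build
    // the (type, instance, traceback) arguments needed for __exit__.
    static void make_exit_args(mp_obj_t exc, mp_obj_t exit_args[3]) {
        exit_args[0] = MP_OBJ_FROM_PTR(mp_obj_get_type(exc)); // exc type
        exit_args[1] = exc;                                   // exc instance
        exit_args[2] = mp_const_none;                         // traceback (simplified)
    }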
This patch removes the need to store 3 values on the stack, and instead
just stores the exception instance.
Code size is reduced by about 50-100 bytes, the compiler and VM are
slightly simpler, generated bytecode is smaller (by 2 bytes for each try
block), and the Python value stack is reduced in size for functions that
handle exceptions.
With the previous patch combining 3 emit functions into 1, it now makes
sense to also combine the corresponding VM opcodes, which is what this
patch does. This eliminates 2 opcodes which simplifies the VM and reduces
code size, in bytes: bare-arm:44, minimal:64, unix(NDEBUG,x86-64):272,
stmhal:92, esp8266:200. Profiling (with a simple script that creates many
list/dict/set comprehensions) shows no measurable change in performance.
The 3 kinds of comprehensions are similar enough that merging their emit
functions reduces code size. Decreases in code size in bytes are:
bare-arm:24, minimal:96, unix(NDEBUG,x86-64):328, stmhal:80, esp8266:76.
This new compile-time option allows the bytecode compiler to be made
configurable at runtime by setting the fields in the mp_dynamic_compiler
structure. By using this feature, the compiler can generate bytecode
that targets any MicroPython runtime/VM, regardless of the host and
target compile-time settings.
Options so far that fall under this dynamic setting are:
- maximum number of bits that a small int can hold;
- whether caching of lookups is used in the bytecode;
- whether to use unicode strings or not (lexer behaviour differs, and
therefore generated string constants differ).
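A sketch of the kind of structure this implies (the field names here are
assumptions based on the options listed, not necessarily the exact
definition):

    #include <stdint.h>
    #include <stdbool.h>

    // Consulted by the compiler at runtime instead of the compile-time
    // settings, when the dynamic-compiler option is enabled.
    typedef struct _mp_dynamic_compiler_t {
        uint8_t small_int_bits;                 // max bits a small int can hold
        bool opt_cache_map_lookup_in_bytecode;  // emit lookup-cache bytes or not
        bool py_builtins_str_unicode;           // unicode strings (affects lexer)
    } mp_dynamic_compiler_t;

    extern mp_dynamic_compiler_t mp_dynamic_compiler;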
MICROPY_ENABLE_COMPILER can be used to enable/disable the entire compiler,
which is useful when only loading of pre-compiled bytecode is supported.
It is enabled by default.
MICROPY_PY_BUILTINS_EVAL_EXEC controls support of eval and exec builtin
functions. By default they are only included if MICROPY_ENABLE_COMPILER
is enabled.
Disabling both options saves about 40k of code size on 32-bit x86.
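For example, a port that only runs pre-compiled bytecode might set, in its
mpconfigport.h:

    // Drop the compiler entirely; eval/exec then default to disabled as well.
    #define MICROPY_ENABLE_COMPILER       (0)
    #define MICROPY_PY_BUILTINS_EVAL_EXEC (0)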
Fixes #1684 and makes "not" match Python semantics. The code is also
simplified (the separate MP_BC_NOT opcode is removed) and the patch saves
68 bytes for bare-arm/ and 52 bytes for minimal/.
Previously "not x" was implemented as !mp_unary_op(x, MP_UNARY_OP_BOOL),
so any given object only needed to implement MP_UNARY_OP_BOOL (and the VM
had a special opcode to do the ! bit).
With this patch "not x" is implemented as mp_unary_op(x, MP_UNARY_OP_NOT),
but this operation is caught at the start of mp_unary_op and dispatched as
!mp_obj_is_true(x). mp_obj_is_true has special logic to test for
truthness, and is the correct way to handle the not operation.
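A hedged sketch of that dispatch (abridged; the real signature and
surrounding code may differ slightly):

    mp_obj_t mp_unary_op(mp_uint_t op, mp_obj_t arg) {
        if (op == MP_UNARY_OP_NOT) {
            // "not x" is simply the negation of x's truthness
            return mp_obj_new_bool(!mp_obj_is_true(arg));
        }
        // ... otherwise dispatch to the object's own unary_op handler ...
        return MP_OBJ_NULL; // (placeholder for the omitted general case)
    }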
This allows the mp_obj_t type to be configured to something other than a
pointer-sized primitive type.
This patch also includes additional changes to allow the code to compile
when sizeof(mp_uint_t) != sizeof(void*), such as using size_t instead of
mp_uint_t, and various casts.
MICROPY_PERSISTENT_CODE must be enabled, and then enabling
MICROPY_PERSISTENT_CODE_LOAD/SAVE (either or both) will allow loading
and/or saving of code (at the moment just bytecode) from/to a .mpy file.
Main changes when MICROPY_PERSISTENT_CODE is enabled are:
- qstrs are encoded as 2-byte fixed width in the bytecode
- all pointers are removed from bytecode and put in const_table (this
includes const objects and raw code pointers)
Ultimately this option will enable persistence for not just bytecode but
also native code.
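For example, a port that wants to both load and save .mpy files might set,
in its mpconfigport.h (how MICROPY_PERSISTENT_CODE itself gets enabled is
port/configuration dependent):

    #define MICROPY_PERSISTENT_CODE_LOAD (1)
    #define MICROPY_PERSISTENT_CODE_SAVE (1)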
unix-cpy was originally written to get semantic equivalence with CPython
without writing functional tests. When writing the initial
implementation of uPy it was a long way between lexer and functional
tests, so the half-way test was to make sure that the bytecode was
correct. The idea was that if the uPy bytecode matched CPython 1-1 then
uPy would be proper Python if the bytecodes acted correctly. And having
matching bytecode meant that it was less likely to miss some deep
subtlety in the Python semantics that would require an architectural
change later on.
But that is all history and it no longer makes sense to retain the
ability to output CPython bytecode, because:
1. It outputs CPython 3.3 compatible bytecode. CPython's bytecode
changes from version to version, and seems to have changed quite a bit
in 3.5. There's no point in changing the bytecode output to match
CPython anymore.
2. uPy and CPy do different optimisations to the bytecode which makes it
harder to match.
3. The bytecode tests are not run. They were never part of Travis and
are not run locally anymore.
4. The EMIT_CPYTHON option needs a lot of extra source code which adds
heaps of noise, especially in compile.c.
5. Now that there is an extensive test suite (which tests functionality)
there is no need to match the bytecode. Some very subtle behaviour is
tested with the test suite and passing these tests is a much better
way to stay Python-language compliant, rather than trying to match
CPy bytecode.
Previous to this patch each time a bytes object was referenced a new
instance (with the same data) was created. With this patch a single
bytes object is created in the compiler and is loaded directly at execute
time as a true constant (similar to loading bignum and float objects).
This saves on allocating RAM and means that bytes objects can now be
used when the memory manager is locked (eg in interrupts).
The MP_BC_LOAD_CONST_BYTES bytecode was removed as part of this.
Generated bytecode is slightly larger due to storing a pointer to the
bytes object instead of the qstr identifier.
Code size is reduced by about 60 bytes on Thumb2 architectures.
Before this patch a "with" block needed to create a bound method object
on the heap for the __exit__ call. Now it doesn't because we use
load_method instead of load_attr, and save the method+self on the stack.
When just the bytecode emitter is needed there is no need to have a
dynamic method table for the emitter back-end, and we can instead
directly call the mp_emit_bc_XXX functions. This gives a significant
reduction in code size and a very slight performance boost for the
compiler.
This patch saves 1160 bytes code on Thumb2 and 972 bytes on x86, when
native emitters are disabled.
Overall savings in code over the last 3 commits are:
bare-arm: 1664 bytes.
minimal: 2136 bytes.
stmhal: 584 bytes (it has native emitter enabled).
cc3200: 1736 bytes.
The first pass of the compiler computes the scope (eg whether an identifier
is local or not), and originally there was an entire table of methods dedicated
to this, most of which did nothing. With changes from the previous commit,
this set of methods can be removed and the methods from the bytecode
emitter used instead, with very little modification -- this is what is
done in this commit.
This factoring has little to no impact on the speed of the compiler
(tested by compiling 3763 Python scripts and timing it).
This factoring reduces code size by about 270-300 bytes on Thumb2 archs,
and 400 bytes on x86.
Previous to this patch, a big-int, float or imag constant was interned
(made into a qstr) and then parsed at runtime to create an object each
time it was needed. This is wasteful in RAM and not efficient. Now,
these constants are parsed straight away in the parser and turned into
objects. This allows constants with large numbers of digits (so
addresses issue #1103) and takes us a step closer to #722.
This is a simple optimisation inspired by JITing technology: we cache in
the bytecode (using 1 byte) the offset of the last successful lookup in
a map. This allows us next time round to check in that location in the
hash table (mp_map_t) for the desired entry, and if it's there use that
entry straight away. Otherwise fallback to a normal map lookup.
Works for LOAD_NAME, LOAD_GLOBAL, LOAD_ATTR and STORE_ATTR opcodes.
On a few tests it gives >90% cache hit and greatly improves speed of
code.
Disabled by default. Enabled for unix and stmhal ports.
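An illustrative sketch of the idea (the helper and its handling of misses
are simplified, not the exact VM code; it assumes the key is present):

    #include <stdint.h>
    #include "py/obj.h"

    // Look up `key` in `map`, consulting a 1-byte cache slot stored in the
    // bytecode at *ip_cache.
    static mp_obj_t cached_lookup(mp_map_t *map, mp_obj_t key, uint8_t *ip_cache) {
        size_t slot = *ip_cache;
        if (slot < map->alloc && map->table[slot].key == key) {
            return map->table[slot].value;          // cache hit: fast path
        }
        mp_map_elem_t *elem = mp_map_lookup(map, key, MP_MAP_LOOKUP);
        *ip_cache = (uint8_t)(elem - map->table);   // remember slot for next time
        return elem->value;
    }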
This patch consolidates all global variables in py/ core into one place,
in a global structure. Root pointers are all located together to make
GC tracing easier and more efficient.
This patch makes the MICROPY_PY_BUILTINS_SLICE compile-time option
fully disable the builtin slice operation (when set to 0). This
includes removing the slice syntax from the grammar. Now, enabling
slice costs 4228 bytes on unix x64, and 1816 bytes on stmhal.
This patch makes MICROPY_PY_BUILTINS_SET compile-time option fully
disable the builtin set object (when set to 0). This includes removing
set constructor/comprehension from the grammar, the compiler and the
emitters. Now, enabling set costs 8168 bytes on unix x64, and 3576
bytes on stmhal.
There is a lot of potential in compressing bytecodes and making more use of
the coding space. This patch introduces "multi" bytecodes which have their
argument included in the bytecode (by addition).
UNARY_OP and BINARY_OP now no longer take a 1 byte argument for the
opcode. Rather, the opcode is included in the first byte itself.
LOAD_FAST_[0,1,2] and STORE_FAST_[0,1,2] are removed in favour of their
multi versions, which can take an argument between 0 and 15 inclusive.
The majority of LOAD_FAST/STORE_FAST codes fit in this range and so this
saves a byte for each of these.
LOAD_CONST_SMALL_INT_MULTI is used to load small ints between -16 and 47
inclusive. Such ints are quite common and now need only 1 byte to store,
and their decoding is much faster.
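A sketch of how the argument is folded into the opcode byte (the base opcode
values shown are placeholders; the real ones live in py/bc0.h):

    #include <stdint.h>

    #define MP_BC_LOAD_FAST_MULTI            (0xb0)  // + local number, 0..15
    #define MP_BC_LOAD_CONST_SMALL_INT_MULTI (0x40)  // + (small int + 16), -16..47

    // Encode: the argument is simply added to the base opcode, so the whole
    // instruction fits in one byte.
    static uint8_t encode_load_fast(unsigned local_num) {   // local_num <= 15
        return MP_BC_LOAD_FAST_MULTI + local_num;
    }
    static uint8_t encode_small_int(int value) {            // -16 <= value <= 47
        return MP_BC_LOAD_CONST_SMALL_INT_MULTI + (value + 16);
    }

    // Decode: subtract the base back out.
    static int decode_small_int(uint8_t op) {
        return (int)op - MP_BC_LOAD_CONST_SMALL_INT_MULTI - 16;
    }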
In all this patch saves about 2% RAM for typical bytecode (1.8% on
64-bit test, 2.5% on pyboard test). It also reduces the binary size
(because bytecodes are simplified) and doesn't harm performance.
This saves a lot of RAM for 2 reasons:
1. For functions that don't have default values, var args or var kw
args (which is a large number of functions in the general case), the
mp_obj_fun_bc_t type now fits in 1 GC block (previously needed 2 because
of the extra pointer to point to the arg_names array). So this saves 16
bytes per function (32 bytes on 64-bit machines).
2. Combining separate memory regions generally saves RAM because the
unused bytes at the end of the GC block are saved for 1 of the blocks
(since that block doesn't exist on its own anymore). So generally this
saves 8 bytes per function.
Tested by importing lots of modules:
- 64-bit Linux gave about an 8% RAM saving for 86k of used RAM.
- pyboard gave about a 6% RAM saving for 31k of used RAM.
Code-info size, block name, source name, n_state and n_exc_stack now use
variable length encoded uints. This saves 7-9 bytes per bytecode
function for most functions.
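An illustrative decoder for such a variable-length uint (7 payload bits per
byte, with the top bit set on every byte except the last; not necessarily
byte-for-byte the core's routine):

    #include <stdint.h>

    // Most-significant groups come first; accumulate 7 bits per byte until a
    // byte with the top bit clear is seen.
    static uint32_t decode_vuint(const uint8_t **ip) {
        uint32_t val = 0;
        uint8_t b;
        do {
            b = *(*ip)++;
            val = (val << 7) | (b & 0x7f);
        } while (b & 0x80);
        return val;
    }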
Reduces by about a factor of 10 on average the amount of RAM needed to
store the line-number to bytecode map in the bytecode prelude.
Using CPython 3.4's stdlib for statistics: previously, an average of
13 bytes were used per (bytecode offset, line-number offset) pair, and
now with this improvement, that's down to 1.3 bytes on average.
Large RAM usage before was due to some very large steps in line numbers,
both from the start of the first line in a function way down in the
file, and also functions that have big comments and/or big strings in
them (both cases were significant).
Although the savings are large on average for the CPython stdlib, it
won't have such a big effect for small scripts used in embedded
programming.
Addresses issue #648.
The dummy_data field is accessed as a uint value (e.g.
in emit_write_bytecode_byte_ptr), but is not aligned as such, which causes
bus errors or incorrect behavior on any arch requiring strictly aligned
data (ARM pre-v7, MIPS, etc).
Native emitter can now compile try/except blocks using nlr_push/nlr_pop.
It probably only works for 1 level of exception handling. It doesn't
work on Thumb (only x64).
Native emitter can also handle some additional opcodes.
With this patch, 198 tests now pass using "-X emit=native" option to
micropython.
Needed to pop the iterator object when breaking out of a for loop. Also
need to be careful to unwind the exception handler before popping the iterator.
Addresses issue #635.
Blanket wide to all .c and .h files. Some files originating from ST are
difficult to deal with (license-wise), so it was left out of those.
Also merged modpyb.h, modos.h, modstm.h and modtime.h in stmhal/.
3 emitter functions are needed only for emitcpy, and so we can #if them
out when not compiling with emitcpy support.
Also remove unused SETUP_LOOP bytecode.
Closed over variables are now passed on the stack, instead of creating a
tuple and passing that. This way memory for the closed over variables
can be allocated within the closure object itself. See issue #510 for
background.
Attempt to address issue #386. unique_code_id's have been removed and
replaced with a pointer to the "raw code" information. This pointer is
stored in the actual byte code (aligned, so the GC can trace it), so
that raw code (ie byte code, native code and inline assembler) is kept
only for as long as it is needed. In memory it's now like a tree: the
outer module's byte code points directly to its children's raw code. So
when the outer code gets freed, if there are no remaining functions that
need the raw code, then the children's code gets freed as well.
This is pretty much like CPython does it, except that CPython stores
indexes in the byte code rather than machine pointers. These indices
index the per-function constant table in order to find the relevant
code.
This is necessary to catch all cases where locals are referenced before
assignment. We still keep the _0, _1, _2 versions of LOAD_FAST to help
reduce the byte code size in RAM.
Addresses issue #457.
This simplifies the compiler a little, since now it can do 1 pass over
a function declaration, to determine default arguments. I would have
done this originally, but CPython 3.3 somehow had the default keyword
args compiled before the default positional args (even though they appear
in the other order in the text of the script), and I thought it was
important to have the same order of execution when evaluating default
arguments. CPython 3.4 has changed the order to the more obvious one,
so we can also change.
Very little has changed. In Python 3.4 they removed the opcode
STORE_LOCALS, but in Micro Python we only ever used this for CPython
compatibility, so it was a trivial thing to remove. It also allowed some
dead code to be cleaned up (eg the 0xdeadbeef in class construction), and
now class builders use 1 less stack word.
Python 3.4.0 introduced the LOAD_CLASSDEREF opcode, which I have not
yet understood. Still, all tests (apart from bytecode test) still pass.
Bytecode tests needs some more attention, but they are not that
important anymore.
Adding this bytecode allows the removal of 4 others related to
function/method calls with * and ** support. Will also help with
bytecodes that make functions/closures with default positional and
keyword args.
Mostly just a global search and replace. Except rt_is_true which
becomes mp_obj_is_true.
Still would like to tidy up some of the names, but this will do for now.
Rationale: setting up the stack (state for locals and exceptions) is
really part of the "code", it's the prelude of the function. For
example, native code adjusts the stack pointer on entry to the function.
Native code doesn't need to know n_state for any other reason. So
putting the state size in the bytecode prelude is sensible.
It reduced ROM usage on STM by about 30 bytes :) And makes it easier to
pass information about the bytecode between functions.
Assuming we have truncating (floor) division, the way to do ceiling division
by N is to use the formula (x + (N-1)) / N. Specifically, 63 bits, if stored
7 bits per byte, require exactly 9 bytes. 64 bits overflow that and require
10 bytes.
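As a concrete check of the formula with N = 7 (the macro name is just for
illustration):

    // Ceiling division of nbits by 7 using only floor division:
    //   (63 + 6) / 7 = 69 / 7 = 9    -> 63 bits fit in 9 bytes
    //   (64 + 6) / 7 = 70 / 7 = 10   -> 64 bits need 10 bytes
    #define BITS_TO_7BIT_BYTES(nbits) (((nbits) + 6) / 7)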